Standard method for performing positron emission particle tracking (PEPT) measurements of froth flotation at PEPT Cape Town

Positron emission particle tracking (PEPT) is a technique for measuring the motion of tracer particles in systems of flow such as mineral froth flotation. An advantage of PEPT is that tracer particles with different physical properties can be tracked in the same experimental system, which allows detailed studies of the relative behaviour of different particle classes in flotation. This work describes the standard operating protocol developed for PEPT experiments in a flotation vessel at PEPT Cape Town in South Africa. A continuously overflowing vessel with constant air recovery enables several hours of data acquisition at steady state flow and consistent flotation conditions. Tracer particles are fabricated with different coatings to mimic mineral surface hydrophobicity and size, and a data treatment derived from a rotating disk study is utilized to produce high frequency (1 kHz) location data relative to the tracer activity. Time averaging methods are used to represent the Eulerian flow field and occupancy of the tracer behaviour based on voxel schemes in different co-ordinate systems. The average velocity of the flow in each voxel is calculated as the peak of the probability density function to represent the peak of asymmetrical or multimodal distributions.

• A continuously overflowing flotation vessel was developed for extended data acquisition at steady state flow.
• The data treatment enabled the direct comparison of different particle classes in the flotation vessel.
• The solids flow fields were described by the probability density function of tracer particle velocity measured in different voxel schemes.

Introduction

Positron emission particle tracking (PEPT) is a radiation-based technique for measuring the location of a tracer particle with time within a system of interest [14]. It has grown to be well established within science and engineering research, particularly in the mineral separation process of froth flotation [11, 17], for which the application of optical measurement techniques is impaired by the high density of the pulp phase and the opaque nature of the mineral-laden bubbles.
PEPT location data can be used to describe the Lagrangian trajectory of the tracer particle and to calculate Eulerian descriptions of the flow, such as velocity and acceleration fields as well as occupancy and residence times. These are valuable parameters to improve understanding of particle behaviour in complex and turbulent multi-phase systems such as flotation, and for the validation of theoretical models of flow. Another major advantage of PEPT for flotation research is that it can be used to investigate the effect of particle properties, such as size, shape, surface hydrophobicity and density (e.g. [1, 6, 7, 11]). The tracer particle for PEPT measurements is labelled with a positron-emitting radionuclide and has physical and chemical properties that represent those of the flotation bulk. As the tracer particle moves around the interior of an experimental vessel, positrons released in the decay of the radionuclide annihilate with local electrons to produce pairs of almost back-to-back 511 keV gamma rays. If both gamma rays in a pair are detected in coincidence within a positron emission tomography (PET) camera, a virtual line can be formed to indicate the path along which the annihilation event occurred. This is known as a line of response (LOR), and from many LORs the 3D position of the tracer can be located as (X, Y, Z) in Cartesian co-ordinates. When a tracer particle is moving, the LORs can be analysed in chronological sets to derive the tracer location, which contains the tracer position at a specific time. The most widely used algorithm to find the tracer location, "track", was developed by the University of Birmingham [14]. More recently, new location techniques have been developed for PEPT, as reviewed by Windows-Yule et al. [18]. The protocol presented here was developed in collaboration between Imperial College London and PEPT Cape Town at the University of Cape Town to enable PEPT measurements in a flotation vessel.

The standard method

The method consists of a series of sub-steps: flotation methods, PET scanner, tracer particles, PEPT measurement, a protocol to locate freely moving particles based on a rotating disk study, calculation of velocity data, co-ordinate systems and time averaging methods.

Flotation methods

The flotation vessel used for PEPT experiments is cylindrical and formed of acrylic (Stanley Plastics Ltd., UK), as described in Norori-McCormac et al. [13], and is shown in Fig. 1. The geometry follows a standard configuration, with the diameter and height both equal to 180 mm. The vessel contains four baffles of width one-tenth of the vessel diameter, spaced at intervals of 90° and with a height of two-thirds of the vessel height. It is agitated with a Rushton turbine with six flat blades, of diameter one-third of the vessel diameter (60 mm), with blade width and height of one-quarter and one-fifth of the impeller diameter, respectively. The impeller is driven at 1200 rpm, giving an impeller tip speed of approximately 3.7 m/s (a quick arithmetic check is sketched below). Aeration is provided through a small air reservoir at the base of the vessel. The air passes through a disk-shaped frit plate that disperses it into fine bubbles; the plate is fabricated from layers of sintered fine steel mesh with diameter 180 mm and mesh hole size 20 μm (Carbis Filtration Ltd., UK). The vessel is fitted with an external launder to enable continuous operation, with any overflowing material collected and pumped to the base of the vessel.
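As a quick arithmetic check on the agitation conditions quoted above, the impeller tip speed follows directly from the rotation rate and impeller diameter. A minimal sketch (the 60 mm diameter and 1200 rpm are taken from the text):

```python
import math

def tip_speed(diameter_m: float, rpm: float) -> float:
    """Impeller tip speed: circumference times revolutions per second."""
    return math.pi * diameter_m * rpm / 60.0

print(f"{tip_speed(0.060, 1200):.2f} m/s")  # ~3.77 m/s, matching the quoted ~3.7 m/s
```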
The launder contains an air blow-down system at 100 litres per minute to break down any froth and direct solids to the recycling pump feed. A peristaltic pump recycles slurry from the launder and returns it to the base of the vessel, at a height of 45 mm above the air plate. The recycle feed is directed towards the impeller via a hose barb on the inside of the vessel so that the returned material does not short-circuit directly to the launder. Recycling the overflowed solids enables data collection over several hours with the same solids concentration in the pulp. Online measurements of the overflowing froth height and velocity are performed as described by Norori-McCormac et al. [13]. A video camera and optical level sensor are placed over the lip to capture the velocity and height of the overflowing froth. These variables are monitored throughout each experiment to maintain consistent flotation behaviour in terms of the air recovery, a parameter related to performance in mineral froth flotation [9, 12]; the measurement has the advantage of being non-invasive. This enables repeat experiments with the same flotation conditions. Fig. 2 shows examples of consistent air recovery over the duration of two experiments with different tracer particles. The air recovery measurements are used to determine, from the properties of the overflowing froth, when the vessel achieves steady state. The ambient temperature and the temperature inside the field of view of the camera are monitored during experiments. The air recovery measurements also guide the additional dosing of volatile flotation reagents to enable longer experiments at consistent flotation performance.

Positron camera

The PEPT experiments are performed at the laboratories of PEPT Cape Town, which houses an ECAT 'EXACT3D' HR++ (Model: CTI/Siemens 966) PET camera [2]. The vessel is installed in the centre of the field of view of the positron camera via two truck trolleys and a supporting frame, as shown in Fig. 3. Specific aspects of the vessel were designed to enable PEPT experiments with the HR++ camera, with the vessel diameter and height chosen to use the full axial length of the camera's field of view. The impeller is mounted on the top of the vessel with a right-angled gearbox (Automation International Ltd., UK) so that it can be driven from outside the field of view of the positron camera. The use of steel and other dense construction materials is minimized, with the vessel material and fasteners fabricated from low-density plastics such as acrylic to minimize the attenuation of the gamma rays detected by the PET camera. The air frit plate is constructed from layers of fine sintered steel mesh to reduce the overall density while maintaining structural strength and preventing deformation due to the upward air flow.

Tracer particles

Consistent vessel performance between repeat experiments enables PEPT measurements of different tracer particles under the same flotation conditions. Tracer particles for flotation studies are fabricated from a core of Purolite NRW100 ion exchange resin, by the method described in Cole et al. [5]. The core sizes start from approximately 350 μm, which makes the tracer methods appropriate for studying coarse particle flotation with PEPT. These particles can be radiolabelled with an initial activity of up to 1.5 mCi (56 MBq) of the PET radionuclide ⁶⁸Ga, which decays via positron emission with a half-life of 68 min.
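Because ⁶⁸Ga decays quickly relative to the several-hour experiments, it is useful to estimate how much tracer activity remains as a run proceeds. A minimal sketch of the standard exponential decay law (the 68 min half-life and the 1.5 mCi starting activity are from the text; the two-hour evaluation point is an illustrative choice):

```python
import math

T_HALF_MIN = 68.0  # half-life of 68Ga in minutes

def activity(a0_mci: float, t_min: float) -> float:
    """Remaining activity (mCi) after t_min minutes of exponential decay."""
    return a0_mci * math.exp(-math.log(2.0) * t_min / T_HALF_MIN)

print(f"{activity(1.5, 120.0) * 1000:.0f} uCi after 2 h")  # ~441 uCi remaining
```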
The initial radiolabelled activities are measured immediately after fabrication with an ionization chamber (model Capintec CRC-25R). In flotation, surface hydrophobicity and size are two key properties related to performance, and the tracer particles for PEPT are modified to represent different flotation particle classes. The native surface of the resin material is hydrophilic, so after labelling it can be used directly as a hydrophilic tracer. For a silica-based flotation species, the resin particle can be coated with silanised silica (0 < d_silica < 50 μm) to create a hydrophobic tracer following the method of Cole et al. [6]. In this case, epoxy resin adhesive is used to fix the silica to the surface of the resin so that the coating can withstand the harsh environment near the impeller and in the peristaltic pump used to recycle overflowing material. The coatings lead to a difference in size and mass between tracer particles. To manage particle size, the core sizes and coating thickness are measured throughout the coating process by microscope image analysis and comparison with monodisperse particle size standards of diameter 98.1 ± 2.8 μm (Whitehouse Scientific Ltd., NIST Standard).

PEPT measurements

PEPT measurements are performed by recording list-mode data in increments of up to 20 min, based on the maximum file size that can be handled by the data acquisition system. The list-mode data contain a list of lines of response, recorded as the 3D positions of the two detector elements from each coincidence event, together with a time stamp every 1 ms. The PEPT location data are derived from the list-mode data using the University of Birmingham "track" algorithm [14], an iterative method based on slicing the lines of response into bins of size N and finding the closest passing fraction, f. The resulting location data are a series of positions, P(X, Y, Z), with time, t, where the co-ordinates X and Z represent the horizontal dimensions of the measurement, and Y corresponds to the vertical dimension, offset so that the impeller is at Y = 0 mm, or a fractional height Y/H = 0.33, where H is the total height. In this protocol, the choice of parameters for locating a tracer particle is guided by a preliminary measurement with a tracer particle attached to the tip of an impeller blade in the flotation vessel. By using the movement of a tracer particle on a predictable path, it is possible to determine values of N and f that create location data with specific properties. In particular, an optimum value of f can be derived that minimises the statistical uncertainty in the location measurement, and N can be varied with activity to maintain a constant location rate in the data. This standard protocol follows the methods presented in Parker et al. [14] and Volkwyn et al. [16].

Rotating disc study

Consider the results from a rotating disk study with a coarse tracer particle of diameter 500 μm radiolabelled with 660 μCi (24.4 MBq) of ⁶⁸Ga by ion exchange [5]. The tracer particle was attached to the tip of the impeller in the flotation vessel. The vessel contains attenuating materials, such as stainless steel, acrylic and nylon plastics. List-mode data were acquired over four half-lives of ⁶⁸Ga (t₁/₂ = 68 min) and location data were determined with the "track" algorithm for a range of different bin size, N, and final fraction, f, values, to evaluate the statistical uncertainties of the PEPT measurement as the activity of the tracer decreased.
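The Birmingham implementation of "track" is described in Parker et al. [14]; purely for orientation, a simplified sketch of this kind of iterative location step is given below. The least-squares closest-point solve and the 10% discard per iteration are illustrative choices, not the published algorithm:

```python
import numpy as np

def locate(points, dirs, f=0.3, drop=0.1):
    """Toy 'track'-style location from one bin of lines of response (LORs).

    points: (N, 3) array holding a point on each LOR; dirs: (N, 3) unit
    direction vectors. Repeatedly finds the least-squares closest point to
    all remaining lines, then discards the worst-fitting lines, until only
    the fraction f of the original bin remains.
    """
    n_keep = max(3, int(f * len(points)))
    while True:
        # Closest point x solves: sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i
        proj = np.eye(3)[None, :, :] - dirs[:, :, None] * dirs[:, None, :]
        x = np.linalg.solve(proj.sum(axis=0), np.einsum('nij,nj->i', proj, points))
        if len(points) <= n_keep:
            return x
        r = points - x  # squared perpendicular distance of each line from x
        dist2 = (r * r).sum(axis=1) - (r * dirs).sum(axis=1) ** 2
        keep = np.argsort(dist2)[:max(n_keep, int((1 - drop) * len(points)))]
        points, dirs = points[keep], dirs[keep]
```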
The X and Z co-ordinates of each PEPT measurement were considered as they represent the transaxial and axial dimensions of the field of view of the HR++ camera and are associated with different geometrical detection efficiencies; the Y co-ordinate was fixed during these experiments.

Methods to evaluate the standard uncertainty in the location data

Each co-ordinate of the tracer position with time, t, was predicted by the equations

X_fit(t) = a sin(2πνt + φ) + X₀,
Z_fit(t) = a cos(2πνt + φ) + Z₀,

where a is the amplitude of the motion and the radius of the impeller, ν is the frequency of rotation of the impeller, φ is a phase angle, and (Z₀, X₀) is the centre of rotation. An unconstrained nonlinear optimization was used to find the optimum values of a, ν, φ and (Z₀, X₀) for each combination of N and f, based on an initial guess of the values from the equipment settings. The root mean squared differences between the predicted and measured co-ordinates in each dimension were calculated as

Δ_X = sqrt( (1/n) Σᵢ (X_fit(tᵢ) − Xᵢ)² ), Δ_Z = sqrt( (1/n) Σᵢ (Z_fit(tᵢ) − Zᵢ)² ),

where n is the number of locations and (Xᵢ, Zᵢ) is the measured position at time tᵢ. The standard uncertainties for each measured co-ordinate of the tracer position were taken as u(X) = Δ_X and u(Z) = Δ_Z.

Methods to maintain a high location rate

The location rate, L, was calculated as the number of locations per second. A target location rate of 1 kHz was set, so that the time interval between consecutive locations was approximately 1 ms, corresponding to the 1 ms time stamp of the list-mode data acquisition of the PET camera, which is the smallest increment of the time measurement. With the decay of the PET radionuclide, the number of lines of response recorded decreased in each 1 ms interval. To consistently produce location data with the target location rate, the value of N was decreased with the activity of the tracer, as suggested by Chiti et al. [3]; N was chosen as the mean number of events recorded in each 1 ms interval. Fig. 5 shows this target bin size, N, as a function of the tracer activity, A, at a value of f_opt = 0.30. The mean location frequency of a random sample of 50 locations from the data, after locating the tracer particle with N and f_opt, is shown in Fig. 6 and was consistent for the duration of the measurements. Fig. 7 shows the location uncertainty with activity, calculated from the initial activity of ⁶⁸Ga as a function of time, for locations derived with f = 0.30 and the N values from Fig. 5. The location uncertainty increases with decreasing activity of ⁶⁸Ga, and both u(X) and u(Z) tend to a minimum of 0.025 mm, which is considerably smaller than the size of the particle. The decay fit with high R² value suggests that an increase in tracer activity beyond the maximum shown would not lead to a further decrease in the location uncertainty. The fitted decay relationships indicate a minimum activity of 150 to 200 μCi that corresponds to a minimum uncertainty in the location measurement in X and Z for arcs of radius 30 mm.

Methods to reduce noise in the location data

Smoothing was introduced to reduce high-frequency noise in the location data, using a cubic spline function as suggested by Cole et al. [4] with a compact kernel of finite height. The kernel is based on the weights

w_i = 1 − 6q_i² + 6q_i³ for q_i ≤ 0.5, and w_i = 2(1 − q_i)³ for q_i > 0.5,

with w_i = 0 for q_i > 1. The kernel argument q_i is a function of the time of each location and the weighting function width, which is equal to the half-kernel width t:

q_i = |t_i − t₀| / t,

where t_i is the time of location i and t₀ the time at which the smoothed co-ordinate is evaluated. The final PEPT co-ordinate Â, which corresponds to X, Y or Z, is calculated from the PEPT co-ordinates Â_i at times t_i as

Â(t₀) = Σᵢ w_i Â_i / Σᵢ w_i.

Fig. 8 shows the uncertainty in the measured location as a function of the half-width of the kernel t.
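A compact sketch of this kernel smoothing for a single co-ordinate, assuming the weights follow the standard cubic-spline form reconstructed above (the exact coefficients should be checked against Cole et al. [4]):

```python
import numpy as np

def spline_smooth(times, coord, half_width=0.004):
    """Smooth one PEPT co-ordinate with a compact cubic-spline kernel.

    times, coord: 1D arrays of location times (s) and one co-ordinate.
    half_width: half-kernel width t in seconds (4 ms in this protocol).
    Assumed weights: w = 1 - 6q^2 + 6q^3 for q <= 0.5 and w = 2(1 - q)^3
    for 0.5 < q <= 1, with q = |t_i - t0| / half_width and w = 0 beyond q = 1.
    """
    out = np.empty_like(coord, dtype=float)
    for j, t0 in enumerate(times):
        q = np.abs(times - t0) / half_width
        w = np.where(q <= 0.5, 1.0 - 6.0 * q**2 + 6.0 * q**3, 2.0 * (1.0 - q) ** 3)
        w[q > 1.0] = 0.0
        out[j] = np.sum(w * coord) / np.sum(w)
    return out
```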
For both tracer activities considered in Fig. 8, 520 μCi and 130 μCi, smoothing with small values of t of around 4 ms reduced the location uncertainty to its minimum value.

Methods to evaluate the standard uncertainty in the velocity data

The weighted average for the velocity was calculated over 11 adjacent positions, as suggested in the six points method by Stewart et al. [15]. The predicted velocities V_fit,X(t) and V_fit,Z(t) in the two co-ordinates were calculated by differentiating the position fits with the optimum values of a, ν and φ found previously for each location procedure:

V_fit,X(t) = 2πνa cos(2πνt + φ),
V_fit,Z(t) = −2πνa sin(2πνt + φ).

The root mean squared differences between the predicted and measured velocity in the X and Z co-ordinates were calculated as

Δ_V,X = sqrt( (1/n) Σᵢ (V_fit,X(tᵢ) − V_X,i)² ), Δ_V,Z = sqrt( (1/n) Σᵢ (V_fit,Z(tᵢ) − V_Z,i)² ),

and the standard uncertainties in the velocity in each dimension were then taken as u(V_X) = Δ_V,X and u(V_Z) = Δ_V,Z. Fig. 9 shows the uncertainty in the velocity in X and Z. The results suggest a similar minimum activity of 150 to 200 μCi to achieve a minimum uncertainty in the measurement of the impeller tip speed. Fig. 10 shows the uncertainty in the velocity against the uncertainty in the location for the two co-ordinates X and Z. A quadratic fit is included between the two types of uncertainty, in position and velocity, which suggests that the uncertainty in the velocity can be predicted from the uncertainty in the location of the tracer for the tracer activities included here, as proposed by García-Triñanes et al. [10].

Protocol for locating freely moving tracer particles from PEPT measurements in the flotation vessel

Based on the results of the rotating disk study, the parameters for locating freely moving tracer particles in the flotation vessel are f = 0.30 and a target bin size, N, set to the mean number of lines of response recorded per 1 ms time interval. The target bin size is also reduced linearly with tracer activity, A, relative to the activity at the start of each list-mode file recording, to maintain a location frequency of approximately 1 kHz, as shown in Fig. 11 for two different PEPT experiments at consistent flotation conditions. The standard uncertainties are relatively large in magnitude (comparable to the order of magnitude of L) because of the location method: by locating the particle with a fixed sample size of lines per location, the time interval spanned by each sample of lines varies with the attenuating properties of the local media, which change with the non-uniform mass density of the solid and gas suspension in the fluid, and with the tracer velocity, which fluctuates with the turbulent properties of the fluid. It should be noted that there is no "optimum" N for PEPT studies when using "track", and varying the value impacts both the location rate and the uncertainty in the location measurement. The parameters for locating the tracer particle selected here are based on the motion of a tracer particle moving in a circle (radius of 30 mm) at constant speed (3.7 m/s). This motion is not necessarily characteristic of the full range of motion of the tracer particle inside a flotation vessel, and further research is required to develop a scheme that produces a sequence of locations representing all features of the tracer motion, such as arcs of different radii in turbulent vortices and high rates of acceleration. The main motivation for utilizing a location scheme related to the tracer activity is to facilitate the direct comparison of consecutive paths in any region of interest in the flotation vessel, following previous studies such as Yang et al. [19], which reported differences in location uncertainty from measurements of stationary tracer particles with different activities.
Following the rotating disc study, list-mode data recorded at activities lower than 150 μCi are not used to produce location data at a location rate of 1 kHz, as these are correlated with an increase in uncertainty in the location and velocity measurements. The location data are smoothed with a cubic spline kernel of half-width 4 ms to reduce noise in the location data. Examples of the resulting trajectories of two freely moving tracer particles using this protocol are shown in Fig. 12 for a repeat flotation experiment. On this scale, the location data appear as continuous lines due to the high 1 kHz location rate. To visualize the location rate closer to the scale of the tracer particle, the trajectories of two tracer particles are also represented in a 1 cm³ region of interest near the impeller in Fig. 13.

Other co-ordinate systems

The measured positions in Cartesian co-ordinates, P(X, Y, Z), are converted to cylindrical polar co-ordinates, p(r, θ, z), with time, t, where z corresponds to the axial length of the reactor vessel and is equivalent to the vertical component Y in P(X, Y, Z). The relationship between these two co-ordinate systems is illustrated in Fig. 14.

Time averaging methods

Around two hours of velocity data per tracer are used to create time-averaged 3D Eulerian flow representations with voxel lengths from 2 to 10 mm. A voxel scheme based on Cartesian co-ordinates is used to calculate horizontal slices of the average tracer velocity at different vertical positions, represented as average trajectories or streamlines. In this case the voxels are cubes and have equal volume. A voxel scheme in cylindrical polar co-ordinates is used to calculate horizontal and azimuthal slices of the distribution of average velocity component values, from which streamline and quiver plots are produced. In this case the volume of the voxels increases with radial position, as shown in Fig. 15, and different angular spans can be used to measure average velocity behaviour at different angles around the circumference of the vessel. It is common practice to use a span of 5°. When using cylindrical polar co-ordinates, four slices of the data in equivalent azimuthal positions in the vessel are taken and summed to produce the time-averaged data arrays, starting at a particular angle plus increments of 90°, as shown for a starting angle of 16° in Fig. 16, the mid-angular position between the baffles. This exploits the symmetry of the vessel along the vertical dimension of the field of view of the PET camera and the axial dimension of the vessel to increase the number of data points within each voxel. For each voxel, aesthetic histograms of the velocity data are produced to highlight the underlying velocity distribution of each tracer, using Doane's formula for the optimum number of bins, k_opt [8]:

k_opt = 1 + log₂(n) + log₂(1 + |γ₁| / σ_γ₁),

where

σ_γ₁ = sqrt( 6(n − 2) / ((n + 1)(n + 3)) ). (12)

Eq. 12 relates the number of data points, n, and the skewness of the distribution, γ₁, as determined by the Fisher-Pearson coefficient of skewness, which is based on the third standardised moment around the mean. The value of k_opt increases with n and with skewness, providing a greater number of bins and thus better resolution to closely reflect the underlying distribution in the data.
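Doane's rule transcribes directly into code; a minimal sketch for one voxel's velocity sample (skewness via the Fisher-Pearson third standardised moment):

```python
import numpy as np

def doane_bins(v):
    """Optimum histogram bin count for one voxel's velocity sample (Doane).

    Assumes len(v) > 2, as required by the sigma_g1 expression.
    """
    v = np.asarray(v, dtype=float)
    n = v.size
    g1 = np.mean((v - v.mean()) ** 3) / v.std() ** 3  # Fisher-Pearson skewness
    sigma_g1 = np.sqrt(6.0 * (n - 2) / ((n + 1) * (n + 3)))
    k_opt = 1 + np.log2(n) + np.log2(1 + abs(g1) / sigma_g1)
    return int(np.ceil(k_opt))
```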
After determining k_opt for each voxel, the bin width, h, was calculated between the maximum and minimum velocity values recorded for that voxel, using

h = (v_max − v_min) / k_opt.

The modal average of the distribution of tracer velocity in each voxel is preferred to the mean average, as it represents the peak velocity in distributions that are asymmetrical or multi-modal. For a more precise value of the peak tracer velocity in each voxel, the probability density function (PDF) of the distribution of all velocity values logged in that voxel is estimated with a Gaussian kernel, and the maximum peak is found to the nearest 1 mm/s. Voxel data are excluded from the results when the distribution contains fewer than 25 data points, as this would be insufficient velocity data to characterize the peak of the PDF of the velocity distribution. Fig. 17 shows a comparison of the two methods for representing the distribution of measured tracer velocities in a voxel, aesthetic histograms and kernel density estimates (KDE) of the PDF, where the voxel is a 1 cm³ region of interest in the flotation vessel used to compare the behaviour of hydrophilic and hydrophobic tracer particles. (In Fig. 17, kernel density estimates from two different velocity treatments are shown: "6 pts" from Stewart et al. [15] and "2 pts", a first-order forward finite difference scheme.) Both methods can be used to find the peak of the distribution, with the KDE tending to find the peak to the nearest 1 mm/s and the histogram to a precision of the bin width. The PDFs of tracer velocity can be used to illustrate the impact of using the six points velocity calculation [15], which tends to reduce noise in the velocity data propagated from the 3D position and time measurements of the location data. As shown in Fig. 18, the noise can be considerable when the velocity is calculated with a first-order finite difference scheme, as represented by the width of the PDF of the velocity. This leads to extreme values in the PDF, in comparison to the narrower PDF for the six points method. The six points method is widely used in PEPT for measurements of the average flow behaviour; the compromise is that it also smooths out turbulent features in the flow. Fig. 19 shows two representations of the streamlines, or average trajectories, of the average velocity of a hydrophilic tracer particle in an azimuthal slice, both derived from the same location data with two different voxel schemes. The streamlines in the 10 mm scheme are smoother and generally continuous, in comparison to the streamlines in the 2 mm scheme, which show higher noise levels towards the froth. Therefore, in the standard method, streamlines are plotted for voxel schemes with 10 mm length. Both voxel schemes tend to the same average velocity behaviour for each component, as shown in Fig. 20 for a series of vertical profiles at different radii in the vessel. The profiles from the larger 10 mm voxel length show fewer small-scale fluctuations in velocity and tend to underestimate the maximum and minimum velocity values in comparison to the 2 mm voxel length. The profiles from the 2 mm voxel data correspond to a subset (1/25) of the 10 mm voxel data. An example of a horizontal 2D slice through the data is shown in Fig. 21 for a voxel length of 10 mm and span 5°.
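A minimal sketch of the peak-of-PDF average described above, using a Gaussian kernel density estimate evaluated on a 1 mm/s grid (the 25-point cutoff is from the protocol; velocities are assumed to be in m/s):

```python
import numpy as np
from scipy.stats import gaussian_kde

def modal_velocity(v, resolution=0.001):
    """Most probable velocity (m/s) in a voxel from a Gaussian KDE of its sample.

    Returns None when fewer than 25 values were logged, as in the protocol.
    """
    v = np.asarray(v, dtype=float)
    if v.size < 25:
        return None
    grid = np.arange(v.min(), v.max() + resolution, resolution)  # 1 mm/s steps
    return float(grid[np.argmax(gaussian_kde(v)(grid))])
```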
The tracer occupancy is represented in three ways: the location density, described by the number of locations per volume; the number of passes through each volume; and the residence time of the tracer per volume. Examples of these representations are shown in Fig. 22, each normalized by the varying voxel volume. The three representations are similar, with the highest occupancy underneath the impeller, just above the air plate. The biggest differences occur around the impeller blades and in the discharge plane: the location density is low near the impeller blades and higher in the discharge stream; the number of passes is high near the impeller and in the discharge plane; whereas the residence time is low both near the impeller blades and in the discharge stream. The location rate is not constant in the flotation data when using a fixed number of lines of response with the "track" algorithm. It is velocity dependent, with a lower rate in regions of high speed, which may lead to misrepresentations of occupancy by location density in regions with high acceleration, as shown in Fig. 22(a). The number of passes through each voxel is shown in Fig. 22(b) and shows that the tracer visits the voxels near the impeller with high frequency. The residence time near the impeller is also low in Fig. 22(c), which is again related to the high tracer speed in that region. When comparing occupancy across different experiments, the representations are normalized by the total occupancy of the slice or of the vessel to ensure equivalent sample size between experiments. The tracer may get "stuck", with implications for the total occupancy count, be it time, locations or passes. As an example, hydrophobic tracer particles can get stuck in the stagnant region of the froth in the middle of the vessel, and the corresponding occupancy data may need to be removed from the total to avoid skewing the occupancy elsewhere in the vessel.

Conclusions

This work presents a standard operating protocol for PEPT measurements in a froth flotation vessel with the HR++ camera at PEPT Cape Town in South Africa. Measurements are performed at consistent flotation conditions, as determined by the air recovery, to ensure the behaviour of tracer particles with different properties can be directly compared. A protocol for locating tracer particles was developed from a rotating disk study, which locates tracer particles relative to their activity. Voxel schemes in Cartesian and cylindrical polar co-ordinates are used for time-averaged analyses to describe the average flow behaviour in the flotation vessel in azimuthal and horizontal slices. The average tracer velocity values are determined from the peak of the PDF of the distribution of velocity values in each voxel, as estimated with a kernel density approach, to represent the most probable velocity value in all shapes of distribution, including asymmetrical and multimodal distributions. This protocol enables the wider application of PEPT to investigations of particle behaviour in coarse particle flotation, including the effects of different particle properties, flotation conditions and vessel design.
A white membrane beneath the inner limiting membrane of the retina in a 4-year-old child with ultrastructural evidence: a case report

Background: Epiretinal membranes (ERMs), secondary to retinal cell proliferation on the retinal surface, usually affect patients over 50 years of age but occur rarely in children. Here we report the case of a 4-year-old patient with a unilateral sub-inner limiting membrane (sub-ILM) membrane mimicking an epiretinal membrane, with notable ultrastructural features indicating its possible origin from an old sub-ILM haemorrhage.

Case presentation: A 4-year-old boy was admitted with the complaint of poor vision in his right eye, which had been detected at a school vision screening performed 6 months earlier. Fundal examination showed a feather-shaped white membrane in the macula of the right eye, and optical coherence tomography (OCT) revealed a thickened retina with a hyper-reflective band on the retinal nerve fibre layer. We suspected an epiretinal membrane in the right eye, and pars plana vitrectomy with membrane peeling was performed to improve the patient's vision. Surprisingly, the membrane was found intraoperatively to be located beneath the intact ILM; it was lifted carefully from the underlying retina, although it was strongly adherent to a retinal artery of the superotemporal arcade. Postoperative scanning electron microscopy showed that the membrane consisted of hemosiderin, collagenous fibre and fibrinoid deposits. At follow-up visits, fundal examination and OCT revealed improvement in the retinal structure, with disappearance of the hyper-reflective band and reduced retinal thickness. The patient's visual acuity in the right eye was stable at 20/100 at 1 year after the operation.

Conclusions: The white membrane presented here was found to lie between the intact ILM and the rest of the retina, adhering firmly to the superotemporal vessel arch. Given the ultrastructural findings of the membrane and the medical history, we speculate that the sub-ILM membrane probably developed secondary to a sub-ILM haemorrhage.

Background

Epiretinal membrane (ERM) is a nonvascular fibrocellular proliferation that occurs on the surface of the retina and causes retinal thickening and wrinkling, leading to visual impairment and metamorphopsia. ERMs are usually idiopathic and occur predominantly in patients over 50 years of age [1,2]. In children and adolescents, however, an ERM is a very rare condition often associated with an underlying aetiology such as trauma, ocular inflammation, retinal vascular disease or combined hamartoma of the retina and retinal pigment epithelium [3]. Here we report a case of a unilateral ERM-like membrane with a unique location, just beneath the inner limiting membrane (ILM), in a 4-year-old child. Scanning electron microscopy revealed hemosiderin and collagenous fibres as the main components of the membrane.

Case presentation

A 4-year-old boy was admitted to our centre with the complaint of poor vision in his right eye, which had been detected at a school vision screening performed 6 months earlier. There was no pain, redness or any other discomfort in either eye. The patient was born at full term via uncomplicated vaginal delivery. His ocular, medication, traumatic and familial histories were unremarkable. A general physical examination was normal. An ocular examination revealed a best-corrected visual acuity (BCVA) of 20/100 in the right eye and 20/20 in the left eye. There was no evidence of strabismus.
Intraocular pressure and the anterior segments of both eyes were normal. Fundal examination showed a glistening light reflex from a feather-shaped white membrane in the macular region of the right eye (Fig. 1a). The left eye was normal (Fig. 1b). The membrane, about 1.5 optic disc diameters in size, was located near the superotemporal arcade vessels and caused radial wrinkling of the central macula and vascular distortion. The vitreous was clear, and posterior vitreous detachment (PVD) was not evident. The optic disc was unremarkable. Optical coherence tomography (OCT) revealed a hyper-reflective band on the retinal nerve fibre layer (RNFL) in the thickened retina (Fig. 2a, b). The surface of the retina was nearly smooth and uninterrupted. We suspected that the decreased vision was caused by the membrane in the right eye. The patient was treated by pars plana vitrectomy using a 25 G vitrector with membrane dissection under general anaesthesia. Surprisingly, the surgeon found intraoperatively that the ILM was intact and the white membrane was located just beneath the ILM. The surgeon first peeled away the ILM following indocyanine green staining. The edge of the membrane was then lifted using forceps to separate the membrane from the underlying retina, despite its strong adhesion to a retinal artery of the superotemporal arcade (Fig. 3). There was no obvious retinal bleeding and there were no tears. Air tamponade was used at the end of surgery. Vision training was performed 1 month after the operation. At postoperative follow-up visits, fundus photography and OCT showed successful removal of the membrane and improvement in the retinal structure, with disappearance of the hyper-reflective band and reduced retinal thickness, respectively (Figs. 1c, d and 2c-f). One year later, the patient's BCVA in the right eye was stable at 20/100. Postoperative scanning electron microscopy revealed that the membrane was composed of collagenous fibre, fibrinoid deposits and cell debris containing clusters of dense iron particles (hemosiderin) (Fig. 4). Combining the ultrastructural results with the sub-ILM location, we speculated that the organized membrane was caused by a sub-ILM haemorrhage.

[Fig. 3 caption, in part: The opaque white membrane (arrow) was located beneath the ILM (triangle). c The thick membrane was peeled away from the rest of the retina. d The membrane, which was tightly adhered to the superotemporal arcade vessels, was then completely dissected from the retinal artery.]

Discussion

An ERM arises secondary to the proliferation of cells along the ILM on the retinal surface. ERMs have been well characterized in adults: they are mostly idiopathic, associated with PVD and a defective ILM, and are found mainly in the elderly [1]. They are rare in adolescents and even rarer in children, as PVD is not likely to occur in these populations. The estimated incidence of ERM is 0.54 per 100,000 patients aged under 19 years [4]. The most commonly reported aetiologies are trauma (39%), uveitis (20%) and rare causes such as combined hamartoma of the retina and retinal pigment epithelium (11%); 30% of cases are idiopathic [4]. Other causes such as ocular toxocariasis, retinopathy of prematurity and Coats' disease can also lead to secondary ERM in children [2]. However, none of these conditions were found in the case presented here. In adult patients, ERMs generally have a cellophane-like appearance. In the present case, the fibrotic membrane was white, thick, and opaque enough to obscure the underlying retina.
Preoperative OCT images depicted the membrane as a smooth hyper-reflective band located just above the RNFL, without severe involvement of the inner retinal layers, which is different from the appearance of the ERMs commonly seen in elderly patients. The widely accepted theory of ERM formation involves cellular proliferation and phenotypic transition on remnants of the vitreous cortex after anomalous PVD [5-7]. However, in this 4-year-old patient, the retinal surface was almost continuous and PVD was not identified. Furthermore, the membrane was located between the relatively intact ILM and the rest of the retina. As a result of this unexpected finding, we analysed the structure and composition of the membrane using scanning electron microscopy, a powerful magnification tool, which revealed abundant hemosiderin deposits within the membrane. Hemosiderin is an iron-storage complex found most often in macrophages after phagocytosis of red blood cells and is especially abundant following haemorrhage [8]. Hemosiderin is hardly ever observed in idiopathic ERMs; previous histological studies on surgically excised ERMs have revealed the main components to be retinal glial or myoblastic retinal pigment epithelial cells [9]. Although some fibrovascular ERMs present in eyes with extensive retinal ischaemia may have a primarily vascular composition, such as blood vessels with haemocytes lined by endothelial cells [10], the unremarkable retinal vasculature and the absence of any history of vascular disease in this patient make this cause very unlikely. Taken together, these findings suggest that the membrane developed after a retinal sub-ILM haemorrhage, followed by the gradual absorption and organization of the haemorrhage. The possible causes of sub-ILM haemorrhage in children are Terson's syndrome, shaken-baby syndrome, Valsalva maculopathy and birth-canal compression [11,12]. However, the patient's parents reported no history of baby-shaking, Valsalva manoeuvre or birth-canal compression. Nevertheless, we cannot rule out the possibility of trauma due to an accidental craniocerebral injury that could have occurred without the parents noticing. The retinal vessels of young children are not fully developed, and a surge in pressure in the intraocular veins, secondary to increased intracranial pressure during a craniocerebral injury, can cause spontaneous rupture of retinal capillaries [13]. This also partially explains the location of the membrane adjacent to the retinal vessels. Conservative management with observation may be suitable for young patients with idiopathic ERMs, while surgical treatment may be indicated for eyes with symptomatic vision disturbances or significant anatomical changes on OCT [14,15]. In our case, pars plana vitrectomy was performed without any complications. The BCVA, although it did not improve, remained stable at 1 year postoperatively. The patient is still undergoing visual training, and the final results will be revealed at future follow-ups.

Conclusions

To our knowledge, this case is the first report of a sub-ILM haemorrhage in a child without evident retinal disease. In contrast to the commonly seen idiopathic or secondary ERMs in terms of location and components, in this child the white membrane was composed of abundant hemosiderin deposits and was located beneath the intact ILM. Therefore, we speculate that the white membrane probably developed secondary to the sub-ILM haemorrhage.
Determination of atomic oxygen state densities in a double inductively coupled plasma using optical emission and absorption spectroscopy and probe measurements

A collisional radiative model for fast estimation and monitoring of atomic oxygen ground and excited state densities and fluxes in varying Ar:O₂ mixtures is developed and applied in a double inductively coupled plasma source at a pressure of 5 Pa and incident power of 500 W. The model takes into account measured line intensities at 130.4 nm, 135.6 nm, 557.7 nm, and 777.5 nm; the electron densities and electron energy distribution functions determined using a Langmuir probe and multipole resonance probe; and the state densities of the first four excited states of argon measured with the branching fraction method and compared to tunable diode laser absorption spectroscopy. The influence of cascading and self absorption is included, and the validity of the used cross sections and reaction rates is discussed in detail. The determined atomic oxygen state densities are discussed for their plausibility and sources of error, and compared to other measurements. Furthermore, the results of the model are analyzed to identify the application regimes of much simpler models, which could be used more easily for process control, e.g. actinometry.

Introduction

Oxygen-containing plasmas are widely used in industry for etching [1,2], cleaning [3], or layer deposition processes [4-7], but are also a topic in recent scientific developments, e.g. plasma sterilization [8-10]. Due to its high reactivity, atomic oxygen is of major interest in all these applications, but the reliable measurement of atomic oxygen densities and fluxes is still a challenge, especially if used for process control. Different techniques have been successfully applied to measure atomic oxygen, e.g. actinometry [11,12], laser-induced fluorescence (LIF) [13,14], VUV absorption [15,16], cavity enhanced absorption spectroscopy (CEAS) [17], or optical emission spectroscopy (OES) [18,19]. However, none of these methods can be applied without considerable technical and/or theoretical effort, and special care must be taken regarding their drawbacks and applicable regimes. Therefore, the improvement and benchmarking of these methods is important for understanding plasma surface and volume processes and the role of atomic densities. In the case of OES, the technical challenges are rather modest, as spectrometers and detectors from the VUV to the NIR are available with convenient control software [20,21]. Furthermore, the measurement is non-invasive and does not influence the plasma itself, and thereby the atomic oxygen density, which can happen with actinometry, LIF or VUV absorption when applied incorrectly or in the wrong regime. However, this advantage is dearly bought with the challenge of developing a correct collisional radiative model (CRM) to reproduce the excitation and relaxation processes yielding the atomic oxygen density in the end (e.g. in [18]). In general, these models need several plasma parameters as input, e.g. the electron density n_e, the electron energy distribution function (EEDF), or the gas temperature T_g. In a few regimes, it is possible to apply simple models. These are based on line ratios which often cancel out the need for n_e or the EEDF, like in actinometry, determine the plasma parameters from the emission [19,22], or calculate densities using the corona model [20].
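As a point of reference for these simpler models, classical actinometry estimates the atomic oxygen density from the ratio of an O emission line to an Ar actinometer line, scaled by the ratio of the excitation rate coefficients. A minimal sketch (the line pair and all numbers are illustrative placeholders, not values from this work):

```python
def actinometry_density(i_o, i_ar, n_ar, k_ratio):
    """Simple corona-model actinometry: n_O = n_Ar * (I_O / I_Ar) * (k_Ar / k_O).

    i_o, i_ar: measured line intensities (e.g. O 844.6 nm and Ar 750.4 nm);
    n_ar: argon ground state density (m^-3);
    k_ratio: ratio k_Ar / k_O of electron impact excitation rate coefficients,
             which depends on the EEDF and must come from cross section data.
    """
    return n_ar * (i_o / i_ar) * k_ratio

# Illustrative numbers only:
print(f"n_O ~ {actinometry_density(1.0e14, 5.0e14, 1.2e21, 2.0):.2e} m^-3")
```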
However, the validity of these approaches has to be checked carefully, as many effects can change the population density of an atomic level, e.g. excitation from several levels due to electron impact excitation and quenching, or self absorption in the case of emitting transitions. Unfortunately, the majority of the cross sections, reaction rates, diffusion constants, and surface coefficients necessary to describe the collisional radiative processes is only determined theoretically (if at all) or in single experiments with unknown reproducibility. Therefore, applying OES in combination with collisional radiative models requires a critical review of the plasma processes as well as of the used data. However, if applied correctly, the CRM has the great advantage of physically coupling all determined states and parameters by reasonable relations and building a correct hierarchy of the state densities. In this way, unjustified statements or conclusions, which may arise in direct measurement techniques like TALIF, can be avoided. In this work, a collisional radiative model is presented for fast estimation of the volume averaged atomic oxygen ground and excited state densities and fluxes in Ar:O₂ mixtures. The model deals with the population of the first seven states of atomic oxygen and can be used for optimizing the setup for the ground state and a specific excited state density or flux, yielding the possibility of process control. However, as will be shown, this would imply a parallel measurement of all necessary plasma parameters for a wide range of applications, or the model can only be used for fine tuning of the density by measuring line intensities and applying plasma parameters determined in advance. The CRM is applied in a double inductively coupled plasma system (DICP) used for analyzing the inactivation mechanisms during plasma sterilization [23], which has been investigated for photon [24] as well as particle fluxes. The model takes into account several measured plasma parameters obtained with a Langmuir probe (LP), multipole resonance probe (MRP), tunable diode laser absorption spectroscopy (TDLAS), and optical emission spectroscopy, as well as self absorption of emission lines, cascading, diffusion of particles to the walls, and quenching of Ar metastable and resonant states. The handling of the self absorption is discussed in detail. Available cross sections and reaction rates are reviewed regarding their plausibility and reproducibility. The results of the CRM are analyzed for plausibility and possible sources of error, as well as for regimes in which the atomic oxygen density can be determined with simplified models.

Experimental Setup

The CRM is applied in the double inductively coupled plasma system depicted in figure 1. It consists of a stainless steel vessel (diameter: 400 mm, height: 200 mm) with several diagnostic flanges at half the height of the system, aligned to the centre of the discharge. The top and bottom are each sealed with a quartz plate, with an ICP coil mounted outside the vacuum chamber on either side. The system is therefore powered from two sides and yields a homogeneous plasma zone in the central region of the vessel. All data used in the CRM are obtained from measurements at a pressure of p = 5 Pa, generator power of P = 500 W, Ar flow rate of 100 sccm, and O₂ flow rates of 2.5 sccm, 5 sccm, 7.5 sccm, 10 sccm, and 20 sccm. A data set in pure oxygen with a flow rate of 20 sccm is used as well.

[Figure 1: Schematic of the DICP and the used diagnostics [25]. The VUV spectrometer is depicted on the left side, directly connected to the vacuum system.]
The setup is described in detail in [26]. Several plasma parameters have been determined and are used in the CRM. The electron density in the centre of the discharge was measured using the MRP as well as the LP [26]. The electron energy distribution functions of the different Ar:O₂ mixtures were analyzed using the LP [26]. TDLAS determined the volume averaged state density of the Ar 1s₅ metastable state as well as the Doppler width of the transition, yielding the gas temperature T_g in the Ar plasmas. The state density measurements were compared with the branching fraction method, which utilizes several Ar lines from the 2p_y states (3p⁵4p manifold) to the 1s_x states (3p⁵4s manifold) to determine the state densities of the 1s_x resonant and metastable states [27]. Two spectrometers (VUV and UV/VIS/NIR) were used to determine the volume averaged emissivity in photons cm⁻³ s⁻¹ of the discharges in the range from 116 nm to 860 nm, yielding the intensities of the 130.4 nm, 135.6 nm, 557.7 nm and 777.5 nm emission lines of atomic oxygen. The details of the spectrometers and their response calibration are described in [24]. The uncertainties of the input parameters used for the model are given at the end of the following section.

Collisional radiative model

The presented CRM is based on the idea of minimizing uncertainties in the state density estimation by including measured parameters from different complementary diagnostics. The possibility to implement absolute line intensities offers the potential to calculate certain state densities in advance, which reduces the number of variables in the model calculation and, thus, sources of error due to unknown or insufficient reaction rates. Furthermore, cascading and self absorption are taken into account, as they can have a significant effect on the state populations.

Atomic oxygen states, excitation, and transitions

The energy diagram of atomic oxygen is depicted in figure 2 (left). The CRM takes into account the first seven states of atomic oxygen listed in table 1 with their respective energies. The optical transitions included in the model, the statistical weights of the upper level g_p and lower level g_k, the wavelength λ_pk, the Einstein coefficient A_pk, and whether absolute line intensities are available, are given in table 2. Furthermore, the multiplet structure of the transitions is shown, which has to be taken into account for self absorption and will be handled in detail in the section on the escape factor calculation. However, due to the resolution of the VUV spectrometer and to keep the model simple, multiplet levels are summarized to one level with their average wavelength in the rest of the model. Green arrows in figure 2 show transitions with low transition probabilities (135.6 nm, 557.7 nm); the effect of self absorption can be neglected for these levels, as will be shown later. The transitions marked in red exhibit high Einstein coefficients (130.4 nm, 777.5 nm, 844.6 nm) and the effect of self absorption has to be considered. The excitation and deexcitation of the levels can be influenced by electron impact, quenching, spontaneous emission, and self absorption. The electron impact excitation of atomic oxygen from the ground state and higher lying states, as well as through dissociative excitation of O₂, is considered where reliable cross sections are available.
The collision of Ar1s x with atomic oxygen in the ground state can lead to the The details of the escape factor and Einstein coefficient handling due to the presence of the multiplet structure is given in the escape factor section. For a quick overview of the model, a simplified representation of the generation and loss terms yielding the rate equations at steady-state conditions for the state densities at steady-state conditions is depicted in table 3. The detailed processes and their rate coefficients are listed in the respective sections. The left hand-side represents the loss terms of the respective level due to electron impact excitation, quenching at the wall or with other particles, or radiative transitions. The right hand-side are the generation terms to the considered level from other levels by electron impact, quenching or emission. Because the densities of O(2p 4 1 S, 3s 5 S, 3p 5 P) are determined previously the model simplifies to five rate equations of the remaining levels O(2p 4 3 P, 2p 4 1 D, 3s 3 S, 3p 3 P), and O 2 . The details of the cascading and self absorption as well as the discussion of and comments on the used cross sections and rate coefficients will be given in the following paragraphs. Solution of the rate equations at steady-state conditions The model consists of five rate equations at steady-state conditions with the loss terms on the left and the generation terms on the right hand-side (see table 3). If all input parameters, like densities, line intensities, and rate coefficients would be precisely known an exact solution of the rate equations would be possible. However, as all input parameters are based on measurements or simulations only an approximate solution (regression) which minimizes the residuals of loss and generation term in each rate equation can be achieved. The variables in the model are the densities of the Transitions of atomic oxygen included in the CRM with their respective statistical weight of the upper level g p and lower level g k , Einstein coefficient A pk , and wavelength λ pk [28]. In the CRM, multiplet lines are handled as one transition with their averaged wavelength. Transition g p , g k A pk /s −1 λ pk /nm Measured states O(2p 4 3 P, 2p 4 1 D, 3s 3 S, 3p 3 P), and O 2 which all could not be determined in advance from the absolute line intensities. Because of quenching processes between different atomic oxygen levels and the calculation of the escape factor the rate equations are nonlinear. Therefore, to determine the approximate solution with the lowest error the nonlinear leastsquares solver lsqnonlin in Matlab © is used. However, the convergence of the solver was only possible for a fixed O 2 density. Thus, the O 2 density was recalculated after each run by subtracting the atomic O density of the model from the O 2 density determined by the ideal gas law and used as new fixed parameter in the next run. This procedure was repeated until the O 2 density change was less than 1%. Cascading The effect of cascading is illustrated in figure 2. Besides the direct excitation of an excited state via electron impact the level density is also influenced by higher lying levels which deexcite to the respective state. This effect is called cascading and can significantly influence the transition intensity depending on the excitation rates into the higher states and the following cascading to the observed level. For atomic oxygen a detailed analysis of the cascading pathways has been performed by Julienne and Davis [32]. 
In general, the restriction of a multiplet change for spontaneous emission in LS coupling yields a dominant contribution of the triplet states to the O(3p ³P) level, whereas the quintet states mainly contribute to the O(3p ⁵P) level. O(3p ³P) is connected to O(3s ³S) via the 844.6 nm transition and O(3p ⁵P) to O(3s ⁵S) via emission at 777.5 nm. Therefore, the cascading from high energy states directly to O(3s ³S) and O(3s ⁵S) can be neglected, as most of the contribution is via O(3p ³P) and O(3p ⁵P). To account for the effect of cascading in the CRM, the electron impact excitation rates to the O(3p ³P) state (direct and dissociative) are increased by the excitation rates to all triplet states above O(3p ³P) from the respective levels using a cascading factor. The cascading factor is determined by the ratio of the excitation rate to O(3p ³P) and the excitation rates to the levels above. This approach assumes that relaxation in the states above O(3p ³P) proceeds only via radiative processes and is not influenced by e.g. quenching. The influence on the O(3p ⁵P) level through cascading is already included due to the state density determination using the absolute line intensity. Therefore, the cascading factor is only used to increase the excitation rates from states below O(3p ⁵P), to take into account the excitation to the cascading states.

[Table 3: Simplified representation of the generation and loss terms of the atomic oxygen states and molecular oxygen, yielding the rate equations at steady-state conditions for the state densities. The left column shows the respective state to which the loss and generation terms in the row belong. E.g. the first row shows the rate equation of the O(2p⁴ ³P) state, with its loss terms due to collisions with electrons, walls, and Ar excited states, as well as generation terms due to quenching of the O(2p⁴ ¹D) and O(2p⁴ ¹S) states by O(2p⁴ ³P), O₂, Ar, and at the walls, due to radiative transitions from upper levels, and due to dissociation of O₂ by electrons and Ar excited states. Footnote: generation and losses are included in the measured intensity I_130.4 nm.]

For the 130.4 nm, 135.6 nm, 777.5 nm, and 844.6 nm lines, emission cross sections are available which already include the cascading contribution from higher levels. In the case of dissociative electron impact excitation of O(3s ³S) and O(3s ⁵S), only emission cross sections are available. Therefore, the excitation rates for 130.4 nm and 135.6 nm are lowered by the emission cross sections of 844.6 nm and 777.5 nm, respectively, to determine only the direct electron impact excitation to the O(3s ³S) and O(3s ⁵S) levels. This is necessary as the cascading to these levels is already included in the model via the emission at 777.5 nm and 844.6 nm.

Cross sections, reaction rates, and diffusion

The analysis and selection of cross sections is of particular importance, as unreliable data significantly falsifies the resulting level densities. Itikawa and Ichimura [36] and Laher and Gilmore [37] summarized and evaluated the known cross sections more than 25 years ago. Unfortunately, many cross sections lack a number of experiments to verify the data, and the measured values often deviate significantly, by a factor of 2 or more, especially for higher lying levels. This situation has not changed much to the present day regarding experimental data. Therefore, a critical evaluation of the cross sections, measured and calculated, is necessary to achieve reliable results.
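The rate coefficients discussed below are of the form k(T_e) = ∫ σ(E) v(E) f(E) dE with a Maxwellian EEDF. A sketch of that numerical integration (the threshold cross section used here is a toy placeholder for the tabulated data):

```python
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31  # elementary charge (C), electron mass (kg)

def rate_coefficient(sigma, t_e_ev, e_max_ev=100.0, n=20000):
    """k(T_e) = integral of sigma(E) * v(E) * f(E) dE, in m^3 s^-1.

    sigma: cross section (m^2) as a function of electron energy (eV);
    f(E): Maxwellian energy distribution normalized to 1 over energy in eV.
    """
    e = np.linspace(1e-3, e_max_ev, n)
    v = np.sqrt(2.0 * e * E_CHARGE / M_E)  # electron speed (m/s)
    f = 2.0 * np.sqrt(e / np.pi) * t_e_ev**-1.5 * np.exp(-e / t_e_ev)
    return np.trapz(sigma(e) * v * f, e)

# Toy threshold cross section standing in for a tabulated one:
sigma_toy = lambda e: np.where(e > 9.5, 1e-21 * (1.0 - 9.5 / e), 0.0)
print(f"k(3 eV) = {rate_coefficient(sigma_toy, 3.0):.2e} m^3/s")
```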
Cross section, reaction rates, and diffusion

The analysis and selection of cross sections is of particular interest, as unreliable data significantly falsifies the resulting level densities. Itikawa and Ichimura [36] and Laher and Gilmore [37] summarized and evaluated the known cross sections more than 25 years ago. Unfortunately, many cross sections lack a number of experiments to verify the data, and the measured values often deviate significantly, by a factor of 2 or more, especially for higher lying levels. This situation has not changed much to the present day regarding experimental data. Therefore, a critical evaluation of the cross sections, measured and calculated, is necessary to achieve reliable results.

The processes taken into account and their cross sections are listed in tables 4 and 5. Most of the atomic data is based on more recent calculations by Zatsarinny and Tayal [33] performed with the B-spline R-matrix approach. Their calculations are in good agreement with the recommended cross sections summarized by Laher and Gilmore [37]. Therefore, the excitation cross sections from the metastable levels O(2p4 1D) and O(2p4 1S) are expected to be most reliable from them. Excitation from O(3s 5S, 3s 3S, 3p 5P) to higher levels is taken from Barklem [34]. His calculations from the metastable levels are in excellent agreement with Zatsarinny and Tayal. Dissociative reaction rates resulting either in two atoms in the ground state or in one ground state and one O(2p4 1D) atom are taken from Fuller et al [18]. They determined reaction rates for both processes based on recommended cross sections of Itikawa et al [38] and measurements of Cosby [39]. The dissociative process resulting in one ground state and one O(2p4 1S) atom is taken from McConkey et al [35]. Several emission cross sections have been determined in the literature, especially regarding the dissociative excitation of molecular oxygen. These cross sections are useful for determining the influence of cascading on the atomic state densities and are taken from the recent reviews of Itikawa [40] and McConkey et al [35]. The ionization cross section of atomic oxygen is taken from Laher and Gilmore [37]. Although the included data was critically evaluated, continuous comparison with other data is necessary. For example, the dissociative cross sections of Itikawa and Cosby vary from the data of Phelps [41-43] in the low energy range by up to one order of magnitude. Furthermore, Zatsarinny and Tayal recently published updated calculations for atomic oxygen cross sections [44]. Thus, even if a set of cross sections yields reasonable results, it is possible that two imprecise cross sections merely compensate each other. Therefore, for improving and validating the CRM, extensive sensitivity analysis of the influence of the different cross sections and comparison with other diagnostics will be necessary. However, this is out of the scope of this publication.

Table 4. Electron impact excitation processes. The rate coefficients were calculated assuming a Maxwellian electron energy distribution and are fitted over an energy range of T_e = 1.5-4 eV. (Table body omitted; columns: Reactions, Rate coefficients/m³ s⁻¹, References.)
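The rate coefficients of tables 4 and 5 follow from folding each cross section with the assumed Maxwellian EEDF. A minimal sketch of this integral, with a toy step cross section standing in for tabulated data:

```python
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31  # C, kg

def maxwellian_rate(energy_eV, sigma_m2, te_eV):
    """Rate coefficient <sigma*v> (m^3 s^-1) for a Maxwellian EEDF.

    energy_eV / sigma_m2: tabulated cross section; te_eV: electron temp.
    """
    E = np.asarray(energy_eV)
    v = np.sqrt(2.0 * E_CHARGE * E / M_E)          # electron speed
    # Maxwellian electron energy distribution, normalised over eV
    f = 2.0 * np.sqrt(E / np.pi) * te_eV**-1.5 * np.exp(-E / te_eV)
    return np.trapz(sigma_m2 * v * f, E)

# Toy step cross section with a 9 eV threshold (illustrative only)
E = np.linspace(0.0, 100.0, 2000)
sigma = np.where(E > 9.0, 1e-21, 0.0)
for te in (1.5, 2.5, 4.0):                         # fit range of tables 4/5
    print(te, maxwellian_rate(E, sigma, te))
```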
Besides electron impact excitation, quenching processes significantly influence the state densities. The quenching rates of the first four excited states of Ar, 1s5, 1s4, 1s3, and 1s2, by molecular oxygen were reviewed by Velazco et al [30]. The quenching leads to dissociation of the molecule, yielding one oxygen atom in the O(2p4 3P) ground state.

The determination of the diffusion rates is of particular interest due to the low pressure regime. Here, the diffusion rates are high, and possible deexcitation or recombination at the chamber walls contributes to the state densities of the different levels. After Morgan and Schiff [45], the diffusion coefficients D of atomic oxygen in argon and atomic oxygen in molecular oxygen at 1 Pa and 300 K are 2.88×10⁸ m² s⁻¹ and 3.06×10⁸ m² s⁻¹, respectively. The rate coefficient for diffusion of atomic oxygen to the walls, k_diff, in a cylindrical vessel can be determined with the standard low-pressure expression

k_diff = [ Λ0²/D + 2 V_DICP (2 − α_wall) / (A_DICP v̄ α_wall) ]⁻¹,   (1/Λ0)² = (π/L_DICP)² + (2.405/r_DICP)²,

where Λ0 is the diffusion length, V_DICP the volume of the vessel, A_DICP the surface area of the vessel, α_wall the sticking or recombination coefficient, v̄ the mean thermal speed of atomic oxygen, L_DICP the height of the cylindrical vessel, and r_DICP the radius [11,46,47]. D is determined for each Ar:O2 gas mixture by weighting the diffusion coefficients by the feed gas ratio.

Table 5. Electron impact excitation emission processes which include cascading contributions. The rate coefficients were calculated assuming a Maxwellian electron energy distribution and are fitted over an energy range of T_e = 1.5-4 eV. (Table body omitted; columns: Reactions, Rate coefficients/m³ s⁻¹, References.)

The recombination coefficient on stainless steel is based on Gudmundsson and Thorsteinsson [47], who reviewed the known data; it is in the range of α_steel = 0.15 for the used regime. However, half of the reactor is made of quartz, with a much lower recombination coefficient of around α_quartz = 1×10⁻² [48,49]. Therefore, an effective coefficient of α_wall = 0.1 is assumed. In the case of metastable atomic oxygen, α_wall is set to unity, as the energy is lost at the chamber walls. In both cases, the exact value of α_wall is not known, as it also depends on the wall material, temperature, and morphology as well as the state of the colliding particle. Due to the lack of data, these simple assumptions are used.

Self absorption

The effect of self absorption can drastically affect the state densities of the involved states as well as the measured intensity of the emission line. It occurs especially for atoms in their ground state, as it is often the state with the highest density. However, it is also possible for highly populated excited states, e.g. metastable states. Radiation emitted during the transition from a higher level p to the ground state k is called 'resonance emission', which is easily reabsorbed by ground state atoms of the same species back to the upper level p. In consequence, this process reduces the number of photons measured outside the plasma and is taken into account by adding a correction called the escape factor γ_pk(n_k), depending on the density of the absorbing level n_k:

I_pk = γ_pk(n_k) n_p A_pk.   (6)

Calculating the escape factor is a complex task, as it depends on the geometry of the system, the density profiles of the emitting and absorbing species and the spectral line profiles [54-59]. Therefore, an empirical formula determined by Mewe is used,

γ_pk(n_k) = [2 − exp(−κ_pk(n_k) l_pl / 1000)] / [1 + κ_pk(n_k) l_pl],   (7)

with l_pl the thickness of the observed plasma volume and the absorption coefficient κ_pk(n_k) defined as

κ_pk(n_k) = (λ_pk² / 8π) (g_p / g_k) n_k A_pk P_pk,

where λ_pk is the wavelength of the transition, g_p and g_k the statistical weights of the two energy levels and P_pk the spectral line profile. κ_pk(n_k) strongly scales with the Einstein coefficient, and transitions with low emission probabilities, like 3s 5S−2p4 3P (135.6 nm) and 2p4 1S−2p4 1D (557.7 nm), are not affected by self absorption.
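A sketch of this escape factor evaluated at the centre of a Doppler profile. The geometry, gas temperature and Einstein coefficient below are illustrative assumptions; with plausible values for the 130.4 nm line the result lands in the few-percent range discussed later:

```python
import numpy as np

K_B, AMU = 1.381e-23, 1.661e-27  # J/K, kg

def doppler_peak_profile(lam, t_gas, mass_amu):
    """Peak value of a normalised Doppler line profile (units: s)."""
    dnu = np.sqrt(2.0 * K_B * t_gas / (mass_amu * AMU)) / lam
    return 1.0 / (np.sqrt(np.pi) * dnu)

def escape_factor(lam, g_p, g_k, a_pk, n_k, t_gas, mass_amu, l_pl):
    """Mewe's empirical escape factor, equation (7), at the line centre."""
    kappa = lam**2 / (8 * np.pi) * (g_p / g_k) * n_k * a_pk \
            * doppler_peak_profile(lam, t_gas, mass_amu)
    return (2.0 - np.exp(-kappa * l_pl / 1000.0)) / (1.0 + kappa * l_pl)

# 130.4 nm resonance line with illustrative inputs: ground state density
# 1e19 m^-3, 700 K gas, 0.2 m plasma thickness, A_pk ~ 3e8 s^-1
print(escape_factor(130.4e-9, 3, 9, 3.0e8, 1e19, 700.0, 16.0, 0.2))
```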
To handle the calculation of the escape factors correctly, the multiplet components of the transitions have to be taken into account. Due to the low resolution of the VUV spectrometer, it is not possible to separate the 130.4 nm line into its three multiplet components at 130.2 nm, 130.5 nm, and 130.6 nm. Because each of the three lines has its own Einstein coefficient, their escape factors have to be calculated separately as well. The same applies to the 777.5 nm and 844.6 nm transitions. In the case of the 130.4 nm line, the ground state consists of three multiplet components and the upper level of one component (see table 2). Thus, the emission intensity of all three multiplet lines I is given as

I = I1 + I2 + I3 = n_p Σ_i γ_i(n_ki) A_pki,

where I1, I2, I3 are the intensities of the three multiplet lines, n_p is the upper state density, γ_i(n_ki) is the escape factor of the emission line to the lower level k multiplet component i, n_ki is the density of the lower level k multiplet component i, and A_pki is the Einstein coefficient from the upper level p to the lower level k multiplet component i. The sum of all n_ki components is the total lower state density n_k. However, due to the low spectrometer resolution, n_ki cannot be determined. Therefore, the statistical weights of the multiplet components g_i are used to estimate the density distribution in the multiplet, using only the density of the total multiplet of the lower level n_k, yielding

I = n_p Σ_i γ_i( (g_i / Σ_j g_j) n_k ) A_pki.

This way, the correct handling of the escape factors is possible using only the total state density of the lower level k, here the ground state 2p4 3P. In the case of the 777.5 nm and 844.6 nm transitions, the lower level consists of only one multiplet component and the upper level of three components (see table 2). Therefore, the intensity of the transition is

I = Σ_i n_pi γ_pi,k(n_k) A_pi,k ≈ n_p Σ_i (g_i / Σ_j g_j) γ_pi,k(n_k) A_pi,k.

In addition to only using the statistical weights to estimate the multiplet distribution, the Boltzmann factor could be taken into account to calculate the equilibrium distribution. However, for the 2p4 3P level, the difference between the statistical weights (0.56, 0.33, 0.11) and the equilibrium at 700 K (0.64, 0.28, 0.08) is not too severe. For the 3p 5P and 3p 3P multiplets the equilibrium distribution is nearly identical to the statistical weights due to the tiny energy differences of the levels. Furthermore, the existence of an equilibrium distribution of the ground state, and especially of the excited states, is unknown, as it is related to the number of collisions of atomic oxygen atoms and can be strongly influenced by different processes (e.g. dissociation of O2, electron impact excitation of 2p4 3P). Thus, only the statistical weights are used in the approximation for estimating the multiplet distribution of the levels, and future investigations are necessary to check the applicability of this assumption. In total, this procedure enables the use of the low resolution spectra as well as keeping the CRM simple, because the multiplet densities do not have to be taken into account.
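Building on the escape-factor sketch above, the statistical-weight splitting for the 130.4 nm multiplet can be written compactly; the per-component Einstein coefficients below are placeholders for the table 2 values:

```python
# Intensity of the unresolved 130.4 nm multiplet: the total ground state
# density n_k is split among the three 2p4 3P components (g = 5, 3, 1) by
# statistical weight, and each component gets its own escape factor.
# Reuses escape_factor() from the previous sketch.
def multiplet_intensity_1304(n_p, n_k):
    comps = [(5, 3.4e8), (3, 2.0e8), (1, 6.8e7)]  # (g_k, A_pk): placeholders
    g_tot = sum(g for g, _ in comps)
    return n_p * sum(
        escape_factor(130.4e-9, 3, g, a, g / g_tot * n_k, 700.0, 16.0, 0.2) * a
        for g, a in comps)

print(multiplet_intensity_1304(1e15, 1e19))  # illustrative densities (m^-3)
```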
Because of its simplicity, equation (7) is very useful for inclusion in CRMs in the low pressure rf plasma regime, where the dominant line broadening mechanism is Doppler broadening, which depends on the gas temperature. However, the correctness of the empirical approach has to be considered. Sushkov et al [64] and Siepa [65] evaluated the approximation of Mewe and compared it with complex calculations. Both found good agreement with the empirical formula as long as the spatial density profiles of the upper and lower level are similar. This assumption is prone to error especially in the case of resonance emission to the ground state, as the ground state level O(2p4 3P) and the excited state O(3s 3S) are affected differently. For example, the excitation to the excited state is reduced near the chamber walls due to low electron densities and temperatures. Furthermore, collisions with the wall can also lead to quenching to the ground state, increasing the ground state density near the chamber wall while the excited state is less populated. On the other hand, the low pressure regime leads to fast diffusion of particles and a homogenization of the densities and profiles. Sushkov et al [64] calculated the worst case scenario where the excited state is localized at the center of the discharge while the lower level is spatially uniform. In this case, the empirical formula used in this study is approximately a factor of 2-3 off from the correct escape factor. However, this case is physically unrealistic, and due to the size of the used chamber similar profiles should be present in most of the observed volume. The same holds for the gas temperature, which is probably slightly reduced near the chamber wall but should be quite uniform in most of the observed plasma. Therefore, we assume the correctness of the empirical formula to be within a factor of 2.

Input parameters

The CRM is based on several measured parameters listed in table 8.

Absolute line intensities

The absolute line intensities of the 130.4 nm, 135.6 nm, 557.7 nm, and 777.5 nm transitions were measured using two spectrometers: a VUV monochromator (Jobin-Yvon AS50, 116 nm-320 nm) and an echelle spectrometer (LLA Instruments ESA4000, 200 nm-860 nm). The spectrometers were calibrated against two calibrated standards: a D2 lamp (Hamamatsu X2D2 L9841) calibrated from 116 nm-400 nm at the electron storage ring BESSY II of the Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany, and a tungsten ribbon lamp (OSRAM WI 17/G) calibrated from 350 nm-2500 nm by the manufacturer. During continuous DICP operation the experimental error is determined by the inaccuracy of the calibration and of the performed measurements. The calibration accuracy of the standards is 14% at 116.0 nm-120.4 nm, 36% at 120.6 nm-122.6 nm, 14% at 122.8 nm-170 nm, 7% at 172 nm-350 nm, 2.3% at 380 nm, 1.6% at 600 nm, and 2.3% at 780 nm. The intensity deviation of the measured spectra was less than 10% [24].

The correct measurement of the 557.7 nm dipole-forbidden transition is a possible source of error. In general, great effort is necessary to construct sources which emit the 557.7 nm line while suppressing other intense lines in order to correctly measure its intensity [66-68]. Figure 3 shows the spectrum of the discharge at a gas mixture of Ar:O2 100:7.5 sccm with a zoom on the 557.7 nm emission. The intensity is just above the noise level of the spectrometer and is roughly 3.5 orders of magnitude lower than the 777.5 nm emission. The possibility to measure this intensity difference is connected to the sensitivity of the spectrometer, which has its maximum around 550 nm and low sensitivity around 800 nm. Thus, as the very intense argon and oxygen lines are in the insensitive infrared range, high exposure times are necessary for a sufficient signal, which in parallel allows weak signals in the 550 nm range to be measured. Furthermore, the line is also not influenced by the O2⁺ first negative system, with band heads of the (2-1) and (3-2) transitions located at 559.8 nm and 556.7 nm, respectively [69], which are not visible in the spectrum. Therefore, the correct measurement of the 557.7 nm line is assumed.

Electron densities and EEDFs

The electron density was determined using a multipole resonance probe (MRP) and was compared to a Langmuir probe (LP), which also measured the EEDF [26]. The results indicated that the electron density determination of the LP was disturbed by the etching of the tungsten wire in oxygen containing plasmas, while the shape of the EEDF was not affected.
Therefore, the electron density values determined by the MRP are used in this study. For the Ar:O2 mixtures the EEDFs showed a Maxwellian shape in the measurement range of the LP from 0 eV-15 eV. In pure oxygen the low energetic part was disturbed by insufficient rf floating potential compensation. However, extrapolating a Maxwell distribution from the high energy range, the same electron density as measured with the MRP could be determined, indicating that a Maxwell distribution is valid in the low energy range. To estimate the validity of the Maxwell distribution above 15 eV, the line intensities of the Ar lines at 727.3 nm, 794.8 nm, 826.5 nm, and 852.1 nm are calculated for direct excitation and compared to the measured line intensities. The intensity I_Ar is given by

I_Ar = γ_Ar n_Ar n_e k_em,Ar(T_e),

with n_Ar the argon density, γ_Ar the escape factor of the respective line determined using the known 1s_x state densities (see below), and k_em,Ar(T_e) the emission rate coefficient for excitation from the ground state [70]. To estimate the influence of step-wise excitation from the 1s_x states [71], the rate coefficients are compared using the known 1s_x densities and the Ar ground state density. If step-wise excitation contributes less than 10%, direct excitation is assumed to be the dominant mechanism, which applies to the Ar:O2 mixtures of 100:7.5 sccm, 100:10 sccm, and 100:20 sccm. Unfortunately, the cross sections for excitation from the 1s_x states are not well known. Therefore, the procedure is only applied to the Ar:O2 mixtures where direct excitation strongly dominates. Table 7 shows the averaged ratios of the calculated to measured line intensities for direct excitation. At 100:7.5 sccm and 100:10 sccm the difference is less than 30%, indicating a Maxwellian EEDF above 15 eV. At 100:20 sccm the measured and calculated values differ significantly, suggesting that, because of inelastic collisions with molecular oxygen, the EEDF deviates from the Maxwellian shape. Thus, the results of the model have to be discussed with regard to a possible non-Maxwellian EEDF in the case of Ar:O2 100:20 sccm.

Argon state densities and gas temperature

The densities of the first four excited states of argon, with two metastable (1s5, 1s3) and two resonance levels (1s4, 1s2), were determined using the branching fraction method and checked for correctness using laser absorption spectroscopy (LAS) of the 1s5 level at 772 nm [27]. Additionally, the Doppler profile of the argon absorption was used to determine the gas temperature in the argon mixtures. Regarding the state densities, both measurement techniques yield the same trend and result in a maximum deviation of 40% for the 1s5 state. The gas temperature in pure oxygen could not be determined and is assumed to be 700 K based on rotational temperature measurements in H2 and N2 in the same system at 5 Pa and 500 W.

Molecular oxygen density

In the pure oxygen discharge it was possible to measure the molecular oxygen ground state density in absorption at 154 nm, where absorption by O2(1Δ) is negligible. For this, a D2 lamp (Hamamatsu X2D2 L9841) was flanged on the opposite side of the vessel to the VUV spectrometer and the signal was measured with and without plasma. (Table 8 summarizes the input parameters: Ar state densities [27], electron density and temperature [26], gas temperature [27], and absolute line intensities [24].)
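A minimal sketch of how such a line-of-sight absorption measurement translates into a density via the Lambert-Beer law; the cross section, path length and transmission below are illustrative assumptions, and plasma self-emission at 154 nm is assumed to be subtracted:

```python
import numpy as np

def o2_density_from_absorption(i_plasma, i_lamp, sigma_abs, path_len):
    """Line-of-sight averaged O2 density from the 154 nm absorption,
    via Lambert-Beer: I = I0 * exp(-sigma * n * L)."""
    return -np.log(i_plasma / i_lamp) / (sigma_abs * path_len)

# Illustrative numbers: 5% absorption over a 0.4 m path, with an assumed
# O2 cross section of 1e-22 m^2 at 154 nm (all values are placeholders).
print(o2_density_from_absorption(0.95, 1.0, 1e-22, 0.4))  # ~1.3e21 m^-3
```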
Results

Using the plasma parameters determined with the LP, MRP, OES, and TDLAS, the state densities of the first seven states of atomic oxygen are calculated. The results are shown in figure 4, together with the overall atomic density as well as the molecular oxygen density. At low oxygen admixtures (2.5 sccm) the atomic oxygen density is determined by the ground state O(2p4 3P) and the first metastable state O(2p4 1D), both around 4×10¹⁸ m⁻³. For higher oxygen content the ground state density increases and peaks in the pure molecular case at 3×10¹⁹ m⁻³, while the O(2p4 1D) density drops by more than one order of magnitude to 2×10¹⁷ m⁻³. In general, all excited states O(2p4 1S, 3s 5S, 3s 3S, 3p 5P) show a density maximum at 5 sccm or 7.5 sccm O2 flux and decline afterwards. The lowest density is given for the highest energy level O(3p 3P), and the densities ascend with decreasing energy gap to the ground state. However, this is not true for the 3p 5P level, which exhibits even higher densities than 3s 3S for Ar:O2 mixtures up to 100:10 sccm. This effect is most likely induced by the high 3s 5S density and the resulting self absorption of the 3p 5P−3s 5S transition increasing the 3p 5P density. Figure 5 depicts on the left side the atomic and molecular oxygen densities, the measured molecular oxygen density in the pure molecular discharge, as well as the degree of dissociation

α_diss = n_O / (n_O + 2 n_O2).

Discussion

To evaluate the results of the collisional radiative model, the discussion is structured into three parts. First, the quality of the result of the least-squares solver is analyzed by calculating and comparing the loss and generation terms of each atomic state using the state densities determined by the model. In the ideal case, generation and loss terms should be identical, and deviations yield possible hints as to which processes need to be adjusted. Second, the escape factors as well as selected states are analyzed to investigate the excitation pathways and changes in the excitation mechanisms with varying gas mixture. This analysis helps to identify the relevant processes, to simplify the model and to reduce the number of lines that need to be measured. Furthermore, the range of applicability of relative emission intensity diagnostics like actinometry can be investigated. Third, additional possible sources of error are discussed which could significantly influence the model, like the formation of ArO(1S) excimers during Ar and O(2p4 1S) interaction, collisions of atomic oxygen atoms excited to higher states, and the shape of the EEDF.

Least-squares results/solution of the rate equations

Using the measured 135.6 nm, 557.7 nm, and 777.5 nm line intensities, the densities of O(2p4 1S), O(3s 5S), and O(3p 5P) are determined in advance and reduce the complexity of the system of rate equations at steady-state conditions. In the case of correct measurements and Einstein coefficients, these densities are exact.

Table 9. Results of the atomic oxygen CRM. T_g is taken from the TDLAS measurement in Ar:O2 and estimated in the pure oxygen discharge. (Table body omitted.)

Figure 6 depicts the ratio of the loss term to the generation term of each rate equation. In the case of the ground state O(2p4 3P), the rate equation is solved nearly perfectly in the argon mixtures, with a maximum deviation of loss and generation rate of 4%. In pure oxygen the generation term is 14% larger than the loss term. With increasing oxygen content, the main loss mechanism of the O(2p4 3P) state changes from electron impact excitation to diffusion and recombination at the chamber walls.
As the recombination coefficient is only poorly known, this effect might cause the poorer consistency of the generation and loss terms. Furthermore, the dominant generation mechanism of O(2p4 3P) changes to dissociation of O2 (see figure 7). As mentioned in the discussion of the cross sections, the used rate coefficient might be erroneous in the threshold region and cause the increase of the generation term. A similar trend is visible for the O(2p4 1D) rate equation. The agreement of the generation and loss terms is very good for the argon mixtures, with a maximum deviation of 6%, but the generation is around 35% higher in the pure oxygen discharge. The main generation process of the O(2p4 1D) state is electron impact excitation from the ground state O(2p4 3P). With increasing oxygen content the dissociative excitation from O2 becomes more important and is responsible for up to 28% of the O(2p4 1D) state population. The dominant loss mechanism of the O(2p4 1D) state shifts completely from electron impact and quenching at the walls, by O(2p4 3P), Ar, and O2, to quenching by O2 (see figure 7). Because the dissociative excitation process becomes more important with increasing oxygen content, the problem of an incorrect cross section in the threshold region might again be responsible for the increasing error, as was also discussed for the O(2p4 3P) state.

The ratios of the generation and loss rates of O(3s 3S) and O(3p 3P) always match perfectly because of the weak coupling to any of the other state densities. The O(3s 3S) density is determined by matching its value to the measured line intensity of the 130.4 nm emission (see table 3). Here, the only coupling is via the escape factor of the 130.4 nm transition. Thus, if the escape factor decreases, the O(3s 3S) density increases to compensate the change in self absorption. In the case of the O(3p 3P) level, many processes contribute to its generation. However, the dominant loss mechanism is spontaneous emission to O(3s 3S), and the quenching by Ar can be neglected. Thus, because the loss process is not coupled to other levels, the O(3p 3P) density is matched in a way that the loss term compensates the generation term.

The rate equation of O2 shows an increase of the generation term compared to the loss term with increasing oxygen content, from −17% to 32%; thus, the loss term is larger at first but the generation term dominates afterwards. The only generation term is the recombination at the chamber walls, and around 85%-97% of the loss is due to electron impact dissociation, depending on the gas mixture. Based on the clear trend with increasing oxygen content, either the recombination coefficient is too low at high oxygen concentrations or it is changed in the presence of high argon concentrations. In general, the strongest divergence of the rate equations is present in the pure oxygen discharge as well as in the Ar:O2 100:20 sccm mixture. In both cases the existence of a Maxwellian EEDF above 15 eV is highly questionable, as mentioned in the sections on the input parameters. Therefore, the insufficient solution of the rate equations at high oxygen content might also be attributed to incorrect reaction rates because of insufficient knowledge of the EEDF shape. To check the plausibility of the results, the degree of dissociation in the pure oxygen discharge is compared to measurements in other ICP discharges with metal (or stainless steel) walls.
At similar power densities (40 W l⁻¹-50 W l⁻¹) and pressures (1.3 Pa-7.65 Pa) [18,73,74], degrees of dissociation of 1%-2% are measured, compared to 3% determined by the CRM. All of these discharges have only one ICP coil at the top, in contrast to the setup used in this study, which exhibits a second coil at the bottom. Furthermore, the DICP setup also has the largest volume. Thus, the higher degree of dissociation is possibly connected to the reduced loss rates to the walls because of the different volume to surface ratio, as well as to the second electron heating zone. However, significantly higher degrees of dissociation have been reported (e.g. 16% in [17], using a set of 128 high flux magnets to confine the plasma), which can be connected to a different plasma generation or a larger glass surface with significantly reduced recombination coefficients. Furthermore, the O2 density measured with VUV absorption matches exactly the density predicted by the model. As the atomic oxygen density only slightly influences the O2 density due to the low degree of dissociation, this consistency indicates that the applied gas temperature of 700 K, which could only be assumed from measurements in pure H2 and N2 under the same conditions, is correct.

Escape factors and excitation pathways

The escape factors of the potentially self absorbing transitions at 130.4 nm, 777.5 nm, and 844.6 nm are depicted in figure 8. The escape factor of the 130.4 nm resonance line is inversely connected to the ground state density and decreases with increasing O2 and O(2p4 3P) density from 12% down to 2%. Thus, 88%-98% of the emitted photons are reabsorbed, and the determination of the ground state density without taking into account self absorption would be falsified by up to a factor of 50. Interestingly, although the O(3s 3S) density is significantly increased by the self absorption effect of the 130.4 nm line, the escape factor of the 844.6 nm line, which deexcites to O(3s 3S), is constantly 1 and no reabsorption of the photons is present. However, due to the metastable level O(3s 5S), which exhibits significantly higher densities than the O(3s 3S) level, the 777.5 nm transition is affected by self absorption. In the Ar:O2 mixtures the escape factor is in the range of 42%-63%, and self absorption is nearly negligible in the pure oxygen discharge with an escape factor of 95%. The problem of the validity of the empirical escape factor used in this study has already been discussed. However, its reasonable applicability for excited states has been demonstrated in the case of the branching fraction method to determine the Ar(1s_x) states in different systems [62,64] and also in the used setup [27]. Because the empirical formula fits well to more complex calculations if the spatial distribution of the upper and lower level is the same [64,65], the escape factors of the 777.5 nm and 844.6 nm transitions are most likely correct. This is the case as both upper and lower levels are excited levels which are probably equally influenced by electron densities and temperatures as well as quenching processes at the walls. For the same reason the escape factor of the 130.4 nm line is prone to error. Here, the lower level is the ground state, to which excited states get quenched at the walls and which is less depopulated in the case of decreasing electron densities and temperatures near the walls during the H-mode of the DICP [75].
As demonstrated by Sushkovet al [64] the maximum error is in the range of a factor of 3 if the upper state is only present in the center of the discharge and the lower level is evenly distributed. Thus, the error of the 130.4nm transition is likely in the range of a factor of 1-3 depending on the spatial density profiles. Nevertheless, the very low escape factors clearly demonstrate the necessity to include the effect of self absorption in CRM for determining atomic densities, especially for resonance lines but also sometimes for transitions in excited state. The analysis of the excitation of the O(3p 3 P) level yields the possibility to simplify the determination of the ground state O(2p 4 3 P) density. Figure 8 shows the change of the excitation pathways depending on the gas mixture. In the pure oxygen discharge more than 98% of the excitation of the O(3p 3 P) level is from direct excitation from the ground state or from dissociative excitation. Furthermore, the effect of self absorption can be neglected for the 844.6 nm transition (3p 3 P−3s 3 S) under the presented discharge conditions. Thus, the ground state density in the pure oxygen discharge could be monitored just by one single emission line if the electron density and temperature are known. An even more robust approach might be the use of actinometry. Here, only relative intensities of an Ar line excited from the ground state as well as the 844.6 nm line of atomic oxygen are necessary. Figure 8 reveals that for small admixtures of Ar to O 2 the excitation mechanisms of the O(3s 3 S) level do not change. Thus, because the ratio of the line intensities cancel out the electron density and temperature, the ground state atomic oxygen density could also be monitored with a relative measurement and without knowledge of the electron parameters. Further sources of error Three other possibilities exist which can falsify the results and need to be discussed. First, the shortening of the O(2p 4 1 S) lifetime due to collisions with Ar. During the collision of O(2p 4 1 S) and Ar both atoms form a ArO( 1 S) excimer. In case of ArO the potential curves of ArO( 1 S) and ArO( 1 D) are very similar [76]. Therefore, the wavelengths of the transitions O(2p 4  1 S)  O(2p 4 1 D) and ArO( 1 S)  ArO( 1 D) are nearly identical. As the lifetime of the transition is significantly shortened during the interaction with Ar (from 1.26 s to 1×10 −5 −1×10 −7 s) [77] it is possible that most of the 557 nm emission originates from ArO( 1 S) excimers and not from atomic oxygen, thus falsifying the O(2p 4 1 S) density determination. To estimate the influence of the process, its share on the overall emission is calculated. The collision cross section of Ar and O(2p 4 1 S) is estimated using the hard-sphere model [78] to σ ArO =5.4×10 −20 m 2 . The collision frequency is therefore ν ArO =n Ar σ ArO v Ar,therm =2.6×10 4 s −1 using the gas temperature for the thermal velocity and Ar gas density n Ar in Ar:O 2 100:5. As the increased transition probability is only valid during the interaction of Ar and O, the interaction time τ int is determined with the width of the potential well (r ArO ≈0.4nm) and the thermal velocity of atomic oxygen ( Second, the high energetic states above O(3p 3 P) might be influenced by collisions with other oxygen atoms. Similar to ArO, two oxygen atoms form a collisional complex, i.e. molecular oxygen, with a potential curve depending on the atomic states of both atoms. 
Second, the high energy states above O(3p 3P) might be influenced by collisions with other oxygen atoms. Similar to ArO, two oxygen atoms form a collisional complex, i.e. molecular oxygen, with a potential curve depending on the atomic states of both atoms. During the reduction of the intermolecular distance a predissociative point can be reached, and the molecule can dissociate to different atomic states than at the beginning of the process (e.g. O(2p4 3P) + O(2p4 1D) → O2* → O(2p4 3P) + O(2p4 3P)). The first level above 3p 3P is 4s 5S with an energy of 11.83 eV [37]. Comparing the potential curves of molecular oxygen given in [69], the potential curve of O(2p4 3P) + O(4s 5S) would need an energy of roughly 17 eV. In this range only ionic states of O2 exist; thus, this process is not possible for oxygen atoms in states above O(3p 3P), and a possible influence on the cascading does not need to be taken into account.

Third, besides incorrect cross sections, quenching rates or missing processes, the increasing inaccuracy of the loss and generation terms of the O(2p4 3P, 2p4 1D) and O2 rate equations at high oxygen concentrations might also be the result of a deviation of the EEDF from a Maxwellian shape. The comparison of the calculated and measured intensities of four different Ar lines supports this assumption for the high energy range above 15 eV. Furthermore, in the pure oxygen discharge the LP characteristic is distorted by uncompensated rf oscillations, which prevents measurement of the correct shape in the low energy range. Although the distortion is not too severe and the comparison with the MRP suggests the presence of a Maxwell distribution in this range (see above), the error would directly affect the excitation of the O(2p4 1D) level, as its energy gap to the ground state is only 1.97 eV. Therefore, different diagnostics need to be applied in the future to perform meaningful sensitivity analyses and to benchmark and improve the presented CRM.

Conclusion

The presented collisional radiative model for Ar:O2 mixtures in low-pressure plasmas estimates the state densities and fluxes of the first seven states of atomic oxygen using measurements from several diagnostics as input parameters. The model takes into account cascading from higher levels as well as self absorption of transitions with large Einstein coefficients and diffusion processes. The used cross sections and reaction rates are critically evaluated to minimize potential sources of error and to include as many processes as possible. The measurements of absolute intensities are used to determine and fix the state densities of O(2p4 1S, 3s 5S, 3p 5P) in advance. The variables of the set of rate equations at steady-state conditions are the remaining states O(2p4 3P, 2p4 1D, 3s 3S, 3p 3P), and O2, which are estimated by minimizing the error in each rate equation using a least-squares algorithm (regression). The previous determination of state densities significantly reduces the complexity of the model, as fewer rate equations have to be solved with several reaction rates unknown or of poor quality. The plausibility of the model is analyzed by evaluating the deviation of the generation and loss terms of each level using the determined state densities, by discussing the dominant excitation and loss processes and escape factors, as well as by comparing the results to the VUV absorption of O2 and other measurements from the literature. The determined atomic oxygen state densities show a reasonable trend and realistic quantitative values. With small admixtures of oxygen in argon, the dissociation of the molecules is significant (approx. 30%) due to the high electron density, with dissociation by electron impact dominating over dissociation by Ar metastables.
With increasing oxygen content, the dissociation degree is reduced to roughly 3% in the pure oxygen discharge, which is a typical value for molecular plasmas in chambers with metal (or stainless steel) walls. Furthermore, the analysis shows the possibility of determining and monitoring the ground state density with a much simpler model in a specific plasma regime, using only the absolute intensity of the 844.6 nm line if the electron density and temperature are known. Moreover, the model demonstrates the range of applicability of actinometry using Ar, where only relative line intensities are necessary and no electron parameters need to be determined. Both techniques can be applied much more easily if the correct plasma regime is present and are much more attractive for process control. Summarizing, the presented collisional radiative model for atomic oxygen in Ar:O2 mixtures provides reasonable results to estimate and monitor ground and excited state densities and fluxes, and the degree of dissociation, all of which can be used for surface process control. However, to check and improve the validity of the model, the densities need to be benchmarked against other diagnostics, e.g. TALIF, to identify missing processes or imprecise reaction rates.
Abiotic Stress Tolerance in Crop and Medicinal Plants

Climate change and the increased need for crop production highlight the urgent importance of introducing crops with increased tolerance to adverse environmental conditions [...].

Introduction

Climate change and the increased need for crop production highlight the urgent importance of introducing crops with increased tolerance to adverse environmental conditions [1]. Many studies have focused on creating and studying various crop species (genotypes, varieties, cultivars, hybrids, etc.) resistant to different abiotic stress factors, especially drought, salinity, light, extreme temperatures, heavy metals, etc., applied alone or in combination. Breeding and genetic modification methods intended for crop improvement have created many plant species with greater resistance to abiotic stress [2]. The non-genetic approach to enhancing crop yields in stressful environments involves the use of exogenous phyto- and biostimulants [3], as well as primary and secondary plant metabolites [4,5]. Since the effectiveness of these strategies in improving plant stress tolerance has been proven, they have the potential for widespread application in the future. In addition to these strategies, a lot of attention has been paid to protecting plants' photosynthetic function under abiotic stress [6,7]. There is evidence to suggest that the use of strategies to improve photosynthetic performance under stress conditions can increase plant yields, which has led to a growing interest in studying photosynthetic tolerance as a tool to enhance plant production under adverse environmental conditions [6]. Moreover, environmental stress has a strong impact on the photosynthetic membranes of plants, causing damage on multiple levels by affecting the ultrastructure of thylakoid membranes, pigment content, and protein and lipid compositions [7]. This fact emphasizes the importance of studying the adaptation mechanisms of the photosynthetic apparatus to achieve a deeper understanding of plant stress responses, which will be useful in the actual selection of stress-tolerant crop genotypes.

This Special Issue, "Abiotic Stress Tolerance in Crop and Medicinal Plants" (Volume I and II), collects papers on new approaches to the development of strategies to increase the abiotic stress tolerance of crop and medicinal plants. It also focuses on studying the photosynthetic adaptation mechanisms of strategic crops and medicinal plants under changing environmental conditions for the fast detection and screening of their stress tolerance in the context of climate change. The papers published in the present Special Issue (consisting of 27 original articles and 2 reviews) address various environmental stress factors such as drought, salinity, light stress, cold stress, heavy metal toxicity, etc., applied individually or in combination. They provide important insights into the underlying mechanisms of plant tolerance, as well as practical ways to alleviate the harmful effects of environmental stress by different means such as plant metabolites, signaling molecules, phytoprotectants, biostimulants, etc. Some papers also demonstrate the adaptation of different plant genotypes to individual or combined stress factors. The insights provided by all of these studies will help us to better understand the tolerance mechanisms of plants against various abiotic stress factors, helping to ensure future food security.
Tolerance Mechanisms in Crop Plants

Unfavorable environmental changes affect the biochemical and physiological processes, growth, and development of crop plants and thus can significantly reduce crop yield and quality. Crop plants have developed a wide set of responses to tolerate environmental stress depending on their capacity for adaptation [1,5]. In this Special Issue, several articles explore the morphological, biochemical, and physiological responses of important crop plants (or different genotypes) and their adaptation to environmental stress, as well as the different ways to increase their resistance to drought stress (contributions 1-5), osmotic and salt stress (contributions 6-12), and the combined effects of drought and salinity (contributions 13-15). Information about the application of exogenous biostimulants and phytohormones for improving crop stress tolerance is also included in this Special Issue.

Rady et al. (contribution 1) propose the use of exogenous gibberellic acid and diluted bee honey as biostimulants to ameliorate the drought tolerance of bean plants, and in their study they achieved improved growth and productivity under water-deficient conditions. Al Kahtani et al. (contribution 6) demonstrate the possible effectiveness of applying Bacillus thuringiensis and silicon to endow lettuce plants with tolerance to salinity. Stassinos et al. (contribution 7) suggest that seed priming with spermidine influences the responses to salt stress of three rapeseed cultivars and demonstrate an improvement in their tolerance to high-saline conditions. Another study, by Stefanov et al. (contribution 12), discusses the protective effects of sodium nitroprusside on the photosynthetic function of sorghum plants subjected to salt stress. Kunene et al. (contribution 4) show that a drought-tolerant Bambara groundnut genotype can be recognized during the early growth stage by screening for drought-tolerance markers, and this knowledge can be used for improving crop production. Yue et al. (contribution 15) propose that OsmiR535 has the potential to be a target for the genetic editing of plants' drought and salt tolerance and can be used as a new marker for molecular breeding in rice plants. Elkelish et al. (contribution 16) report that exogenously applied ascorbic acid enhances the cold stress tolerance of tomato plants. Popova et al. (contribution 17) reveal that alternative electron pathways are involved in the photosynthetic responses to high light intensity and low temperature by studying the acclimation of two Arabidopsis thaliana genotypes (the wild type and the mutant lut2) to both stress factors.

Other articles published in this Special Issue deal with the mitigation of heavy metal stress, showing that the application of trehalose alleviates cadmium toxicity in mung bean plants by enhancing the photosynthetic activity and antioxidant defense system (contribution 18) and that 5-aminolevulinic acid increases lead tolerance in sage plants (contribution 19). Zishiri et al. (contribution 20) identified several maize genotypes (inbred lines) with genetic variations conducive to aluminum tolerance and explain that they could be used by breeders in maize breeding programs to reduce yield losses.

The review paper by Giraldo Acosta et al. (contribution 21) proposes the application of melatonin as a natural safener in herbicide treatments of crop plants, highlighting its excellent capability to reduce herbicide damage and activate antioxidant defense. Melatonin has been described as a hormonal molecule that can stimulate the functions of plants under various abiotic and biotic stresses.

Tolerance Mechanisms in Medicinal Plants

Abiotic stress factors such as drought, salinity, high light, extreme temperatures, etc., can also reduce the quality and productivity of medicinal plants by disrupting their biochemical, metabolic, and physiological processes [8,9]. It has also been established that the application of various biostimulants like phytohormones, plant-growth-promoting Rhizobium, nanomaterials, and biochar can improve the resistance of medicinal plants to stress by stimulating the biosynthesis of primary and secondary metabolites and phytohormones and by increasing their chlorophyll contents, antioxidant potential, and nutrient uptake, thereby reducing oxidative stress [9]. This Special Issue also includes studies on the tolerance mechanisms of medicinal plants, as well as different treatments that can reduce the harmful effects of abiotic stresses, to achieve high-quality production of medicinal and aromatic plants under environmental stress. The review by Hlongwane et al. (contribution 22) highlights the effectiveness of plant-growth-promoting rhizobacteria in alleviating the harmful effects of abiotic stress factors such as salt and drought in the medicinal plant Lessertia frutescens, whose curative ability is related to its enriched phytochemical composition, which includes amino acids, flavonoids, and triterpenoids. The study by Sichanova et al.
(contribution 23) evaluates the influence of different concentrations of two types of nanofibers (derivatives of aspartic acid with silver ions) on the growth parameters, antioxidant activity, and steviol glycoside content of micropropagated Stevia plants. The authors of this study suggest that the application of silver salt nanofibers appears to be an effective strategy for enhancing the presence of metabolites relevant to human health and addressing various abiotic and biotic stresses.

Szekely-Varga et al. (contribution 24) establish the stress responses and the relative tolerance of two commercial lavender varieties to drought and salinity, showing the relevant mechanisms involved in their tolerance. They also describe the possibility of using biochemical stress biomarkers for the quick screening and selection of lavender genotypes better adapted to climate change scenarios.

El-Sherbeny et al. (contribution 25) discuss the morphoanatomical features and biochemical responses (such as an increase in the contents of phenols, flavonoids, alkaloids, and tannins, and increased antioxidant activity) of two medicinal vascular plant species, Artemisia monosperma and Limbarda crithmoides, developing in the arid coastal habitats of Egypt. The authors describe the adaptation mechanisms used by these plant species and provide insights into their defense and survival strategies under extremely harsh conditions. Zhao et al. (contribution 26) indicate that light intensity has a regulatory role in the increasing accumulation of flavonoids, which allows the alpine herbal plant Sinopodophyllum hexandrum to adapt to the elevated altitudes associated with high light intensity. It was also found that higher light intensities are correlated with greater flavonol, flavonoid, and anthocyanin contents, as well as with higher anthocyanin/total flavonoid and anthocyanin/total flavonol ratios.

In another study, the tolerance mechanisms of the medicinal and aromatic plant clary sage (Salvia sclarea) against excess zinc (Zn) stress were evaluated by studying changes in leaf pigment and phenolic content, photosynthetic performance, nutrient uptake, and the characteristics of the leaf structure (contribution 27). This study reveals that clary sage is an appropriate plant for the phytoextraction of Zn from polluted soils, as well as for the phytoremediation of heavy-metal-contaminated soils. In addition, El-Shora et al. (contribution 19) suggest that antioxidant defense mechanisms can improve the heavy metal tolerance of sage plants (Salvia officinalis) and recommend the application of 5-aminolevulinic acid to alleviate lead stress.
Conclusions

The present Special Issue provides useful insights into the complex interactions between plants and the changing environment, shedding light on the different strategies that crop and medicinal plants use to adapt to and mitigate the harmful effects of abiotic stresses, which is crucial for sustainable food and pharmaceutical production. This Special Issue also presents studies of new tolerant crop genotypes and of different eco-friendly ways to improve the tolerance of plants under unfavorable environmental conditions. The effectiveness of different phytoprotectants and/or biostimulants in inducing effective tolerance mechanisms in plants against environmental stress is also discussed. The sharing of such valuable insights must continue, to help develop a sustainable future agriculture that is better adapted to environmental change and pollution.

I express my deepest gratitude to all authors who accepted the opportunity to present their research in this Special Issue and thank them for their efforts in studying abiotic stress tolerance in plants.
Reduced Lateral Mobility of Lipids and Proteins in Crowded Membranes

Coarse-grained molecular dynamics simulations of the E. coli outer membrane proteins FhuA, LamB, NanC, OmpA and OmpF in a POPE/POPG (3:1) bilayer were performed to characterise the diffusive nature of each component of the membrane. At small observation times (<10 ns) particle vibrations dominate phospholipid diffusion, elevating the calculated values above the longer time-scale bulk value (>50 ns) of 8.5×10⁻⁷ cm² s⁻¹. The phospholipid diffusion around each protein was found to vary with distance from the protein. An asymmetry in the diffusion of annular lipids in the inner and outer leaflets was observed and correlated with an asymmetry in charged residues in the vicinity of the inner and outer leaflet head-groups. Protein rotational and translational diffusion were also found to vary with observation time and were inversely correlated with the radius of gyration of the protein in the plane of the bilayer. As the concentration of protein within the bilayer was increased, the overall mobility of the membrane decreased, reflected in reduced diffusion coefficients for both the lipid and protein components. The increase in protein concentration also resulted in a decrease in the anomalous diffusion exponent α of the lipid. Formation of extended clusters and networks of proteins led to compartmentalisation of lipids in extreme cases.

Author Summary

Biological membranes are selective barriers which control the entry/exit of molecules to/from the interior of a cell. They are composed of a lipid bilayer in which many membrane proteins are embedded. Whilst the individual components of membranes are relatively well characterised, the lateral organization and dynamics of the membrane remain less well understood. The lateral mobility of constituent membrane species affects many processes, including how quickly protein complexes form and protein recruitment occurs, how quickly lipids can be modified/lysed, and the formation of disordered and ordered microdomains. Biological membranes can contain as much as 50% protein. The dynamics of these crowded environments differ greatly from the sparsely populated membranes often studied in silico or in vitro. We use molecular dynamics computer simulations to quantify how mobility within the membrane decreases as the protein concentration increases. We calculate a baseline diffusion of both lipids and selected bacterial outer membrane proteins in the simplest of systems, namely a single protein in a large lipid bilayer patch. In this case diffusion can be correlated with the size of the protein. We observe how proteins affect the mobility of adjacent lipids. As the protein concentration within our systems increases, we show that the diffusion of both the proteins and the lipids is reduced.

Introduction

Lipid-protein interactions play an important role in the function and organisation of membrane proteins, either through macroscopic bilayer properties or via individual protein-lipid interactions [1-3]. For certain proteins, e.g. those involved with the regulation of membrane composition or maintaining an asymmetric leaflet distribution, the necessity of such interactions is evident, whilst for others that depend on lateral pressure or local bilayer deformation for function the interaction may be more subtle [4]. Understanding the mode of action of these processes requires that we characterise not just the static structure of membranes but also their dynamic behaviour.

Cell membranes are crowded environments: the majority are composed of up to ca. 50% protein by mass, corresponding to a membrane area fraction of ca. 25% or more occupied by proteins [5]. A similar degree of crowding may be found in membranes studied in vitro [6] or used in membrane protein based biosensors [7]. In addition to crowding per se, the spatial and compositional complexities of membranes may result in the formation of membrane protein clusters [8]. Much discussion as to the nature of cluster formation has centred around the formation of lipid rafts in certain membranes [9], but it should be noted that lateral interactions of crowded membrane proteins are a more general property of cell membranes [10] and are of importance in e.g. bacterial [11,12] as well as mammalian cell membranes.

There has been considerable experimental and computational interest in crowding effects in cells in general (e.g. [13-15]). In particular there have been a number of computational and theoretical treatments of crowding in cell membrane environments (e.g. [4,16-18]). Molecular dynamics simulations of crowded membrane systems have been relatively limited, in part due to their high computational demand, although such simulations of simple models of membrane proteins (e.g. [19]) have yielded valuable insights into peptide effects on lipid domain formation.
MD simulations have also been used to explore in detail the diffusion of membrane lipids, demonstrating the existence of correlated flows and motions within the bilayer [20-23] and of anomalous diffusion of lipids [24,25]. More recently such studies have been extended to membranes including (single) protein molecules, revealing co-diffusion of protein and associated lipids, especially in membrane proteins such as the Kv channel, which has a rather unique transmembrane architecture that leads to tight binding of a significant number of specific lipids [26]. This study takes a further step towards understanding the dynamics of complex proteins and lipids in crowded membrane protein systems, and complements recent studies [27] focussing on anomalous diffusion. More generally, the current study should be seen in the context of a number of simulation studies exploring the influence of lipid bilayer thickness on membrane protein aggregation (e.g. [28-30]), and the effects of protein clustering on the diffusive behaviour of lipids [27] and of membrane proteins [31]. The 'in plane' dynamic properties are likely to have important biological implications for higher level modelling of processes such as membrane protein sorting [32] and protein-induced membrane vesiculation [33].

In this study we have concentrated on a series of E. coli outer membrane proteins (OMPs): OmpA, NanC, FhuA, OmpF, and LamB. OMPs have a variety of functions, especially the transport of solutes across the outer membrane (see Supporting Information Table S1), and offer a number of advantages as model systems which balance biological realism with relative simplicity. The OMPs all share a β-barrel architecture and so are unlikely to undergo any significant conformational change during the simulations. At the same time they are sufficiently diverse in size, oligomerisation state, and surface chemistry (see Supporting Information Table S1) to make a comparison worthwhile. In these simulations we employ a lipid bilayer composed of two lipids (POPE and POPG) representing the inner leaflet composition of the bacterial outer membrane [34]. Again this is a compromise between biological realism and simplicity. In vivo the outer leaflet is almost exclusively lipopolysaccharide (LPS), which is a five- to six-tailed lipid, and the inner leaflet is composed of POPE, POPG and cardiolipin. The bilayer we use here, while not including LPS, is perhaps more representative of the more common biological membranes containing a majority of two-tailed lipids.

Simulations

The simulations were designed to mimic the extent of protein crowding in bacterial outer membranes. Thus simulations were performed with between 1 and 16 OMPs in a bilayer of approximate dimensions 285×285 Å² (corresponding to ca. 2500 lipids). This yields a protein density ranging from 1000 to 20,000 µm⁻², corresponding to a fraction of the membrane area occupied by protein (h) ranging from ca. 2% to ca. 50%. The upper level is comparable to the area fraction for OMPs in bacterial outer membranes [35,36], in OMP-based biosensor membranes [7], and in recent high-speed AFM studies of OmpF-containing membranes [6]. The lower level is comparable to that employed in recent experimental studies of the lateral diffusion of membrane proteins in vitro [37].
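A back-of-the-envelope sketch of this bookkeeping, approximating each OMP as a disc of radius R_gyr in the membrane plane (the disc approximation and the chosen radii are assumptions):

```python
import numpy as np

def number_density_per_um2(n_proteins, box_nm=28.5):
    """Protein number density for an n x n nm bilayer patch."""
    return n_proteins / (box_nm * 1e-3) ** 2

def area_fraction(n_proteins, r_gyr_nm, box_nm=28.5):
    """Fraction of bilayer area occupied, treating each OMP as a disc
    of radius R_gyr in the membrane plane (a crude approximation)."""
    return n_proteins * np.pi * r_gyr_nm**2 / box_nm**2

# One mid-sized OMP (R_gyr ~ 2.3 nm) vs 16 large ones (R_gyr ~ 3 nm)
print(number_density_per_um2(1), area_fraction(1, 2.3))    # ~1200 um^-2, ~2%
print(number_density_per_um2(16), area_fraction(16, 3.0))  # ~19700 um^-2, ~56%
```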
Five different OMPs were used, ranging in radius of gyration (R_gyr) from 10 to 30 Å (see Supporting Information Table S1). For each protein, simulations were run in two bilayer environments (POPE and POPE/POPG) with 1, 4, 9 or 16 proteins in the bilayer patch (see Methods for details). Each simulation was run for at least 3 µs. This provides us with a substantial body of simulation data (a total of ca. 100 µs of simulation time) on which to base our analysis. However, during the course of the analysis it became apparent that the dynamics in the two bilayer environments were identical, and so only data from the mixed POPE/POPG bilayer are shown.

Lipid Diffusion

Two-dimensional lipid diffusion was initially studied in a lipid-only POPE:POPG bilayer with no embedded proteins to characterise the 'bulk' properties. Fitting the lipid centre-of-mass (COM) mean square displacement (MSD) versus time to equation 3 (see Methods) produces a straight line, resulting in an exponent α = 0.99, indicating that diffusion is normal (Supporting Information Fig. S1). Subsequently fixing α equal to unity and re-fitting gives D = 8.5×10⁻⁷ cm² s⁻¹. There appears to be deviation from normal diffusion only for small t (<20 ns), where the MSD is elevated; this also corresponds to where the MSD of individual head groups diverges from that of the lipid COM.

By calculating the distribution of lipid displacements from their initial positions after an observation time (Δt; Supporting Information Fig. S2), we calculate effective diffusion coefficients for different observation times, using equation 1 (see Methods), to characterise this more effectively. As expected, the two-dimensional lipid diffusion coefficient is a function of the observation time (Fig. 1A), mirroring the result from above, with large diffusion coefficients at low Δt converging to circa 8.5×10⁻⁷ cm² s⁻¹ at Δt > 50 ns.

When individual head-group particles are used to track the lipid diffusion rather than the centre of mass (COM), the diffusion coefficients at small Δt (and the MSD at small t) are elevated even further. For smaller observation times, when compared to COM values, the increase in diffusion is exaggerated for PO4⁻ particles and even more so for NH3⁺ and GLH. The values for GLH (neutral) and NH3⁺ are identical, indicating that particle electrostatics do not play an important part in the mode of diffusion in the bulk. It further suggests that at low Δt we are sampling the particle vibrations (previously described [38,39] as 'rattling in a box') and that these are largely (but not completely) averaged out when considering the COM of the entire lipid, slightly less so when considering the PO4⁻ particle (bonded at both ends), and even less for the NH3⁺ and GLH particles (only bonded to one other particle; see Fig. 1B).
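A sketch of this fitting procedure on synthetic data, assuming equation 3 has the standard 2D form MSD(t) = 4Dt^α (with D = 8.5×10⁻⁷ cm² s⁻¹ equivalent to 0.085 nm² ns⁻¹):

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_model(t, d, alpha):
    """2D anomalous diffusion law MSD(t) = 4*D*t^alpha (assumed form)."""
    return 4.0 * d * t**alpha

# Synthetic lipid-COM MSD standing in for simulation output:
# normal diffusion at 0.085 nm^2/ns plus noise.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 500.0, 200)                       # ns
msd = 4.0 * 0.085 * t + rng.normal(0.0, 1.0, t.size)   # nm^2

(d_fit, alpha_fit), _ = curve_fit(msd_model, t, msd, p0=(0.1, 1.0))
print(d_fit, alpha_fit)            # ~0.085 nm^2/ns and alpha ~ 1

# With alpha fixed to unity, refit for D alone, as described above
(d_only,), _ = curve_fit(lambda t, d: 4.0 * d * t, t, msd, p0=(0.1,))
print(d_only)
```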
the Kv channel protein [26]). This retardation due to the proximity of the OMP is observed to penetrate as far as the 20-30 Å annulus from the protein surface, beyond which bulk diffusion is observed. The retardation is a little more marked around the larger trimeric proteins (OmpF; with a value of ca. 40% of that in the bulk at the surface of the protein) than it is around the smaller NanC and OmpA proteins (where the surface lipids diffuse ca. 60% as fast as the bulk lipid). Whether this is because the trimeric proteins present a less smooth surface, with concave regions conducive to trapping phospholipids through steric hindrance, is not immediately clear. However, we do note that in previous studies of the Kv channel protein [26], which has an exceptionally infolded surface resulting in tight binding of lipids [40], an even greater degree of retardation of lipid diffusion is seen.

Author Summary

Biological membranes are selective barriers which control the entry/exit of molecules to/from the interior of a cell. They are composed of a lipid bilayer in which many membrane proteins are embedded. Whilst the individual components of membranes are relatively well characterised, the lateral organization and dynamics of the membrane remain less well understood. The lateral mobility of constituent membrane species affects many processes, including how quickly protein complexes form and protein recruitment occurs, how quickly lipids can be modified/lysed, and the formation of disordered and ordered microdomains. Biological membranes can contain as much as 50% protein. The dynamics of these crowded environments differ greatly from the sparsely populated membranes often studied in silico or in vitro. We use molecular dynamics computer simulations to quantify how mobility within the membrane decreases as the protein concentration increases. We calculate a baseline diffusion of both lipids and selected bacterial outer membrane proteins in the simplest of systems, namely a single protein in a large lipid bilayer patch. In this case diffusion can be correlated with the size of the protein. We observe how proteins affect the mobility of adjacent lipids. As the protein concentration within our systems increases we show that diffusion of both the proteins and lipids is reduced.

When the same analysis is applied using individual PO4⁻, NH3⁺ and GLH outer leaflet particles instead of the COM (Supporting Information Fig. S3 for the NanC and OmpF systems), the retarded diffusion is again observed for all Δt, and is greater for GLH/NH3⁺ than for PO4⁻. For the close annular lipids around NanC, the diffusion coefficients of the GLH/NH3⁺ particles (identical except for charge) are identical, whereas the GLH/NH3⁺ diffusion coefficients at small Δt around OmpF are not: the vibrations of the NH3⁺ particles are dampened by the large number of acidic residues in the vicinity. This is of interest, as it suggests a role for electrostatic interactions between protein and lipids in modulating the motion of the latter.
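The annulus-binned effective diffusion coefficient described above can be sketched as follows (an illustrative re-implementation, not the authors' scripts; the array names and annulus edges are assumptions):

```python
import numpy as np

# Illustrative sketch of the annulus-binned effective diffusion coefficient,
# D = <r^2>/(4*dt), binned by each lipid's INITIAL distance from the protein
# surface (as in the paper's analysis). Assumed inputs, not the authors' code:
#   xy[t, i, :] - lateral COM position of lipid i at frame t (Å)
#   d0[i]       - initial distance of lipid i from the protein surface (Å)
def annulus_diffusion(xy, d0, dt_frames, dt_ns, edges=(0, 10, 20, 30, 40)):
    disp = xy[dt_frames:] - xy[:-dt_frames]           # displacements over dt
    r2 = (disp ** 2).sum(axis=-1).mean(axis=0)        # <r^2> per lipid, Å^2
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (d0 >= lo) & (d0 < hi)                  # lipids starting in annulus
        # Å^2/ns -> cm^2/s: 1e-16 cm^2/Å^2 divided by 1e-9 s/ns gives 1e-7
        out[(lo, hi)] = r2[sel].mean() / (4 * dt_ns) * 1e-7
    return out
```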
As the diffusion coefficients are characterised based on the initial lipid position relative to the protein, if the observation time is much greater than the lipid residence time around the protein then lipids that start very close to the protein will also sample environments much further away. This is the reason that, for long observation times, the diffusion coefficients are not a function of distance from the protein. The diffusion coefficients we calculate could be used in a model to predict the likely destination of a lipid from a given starting configuration.

Leaflet Asymmetry

Membrane proteins are often asymmetric in terms of the distribution of charged residues between the inner and outer leaflet 'bands' interacting with lipid headgroups (as reflected in, e.g., the 'positive inside rule' for α-helical membrane proteins [41]). This is also the case for most OMPs, in part reflecting the asymmetric nature of the lipid composition of the outer membrane [34]. Thus in all of the OMPs in the current study, there are more charged side-chains in the outer than in the inner 'interfacial band' on the surface of the protein (see Fig. 2 insets and Supporting Information Table S1). If we measure the charge asymmetry from our simulations by counting the number of charged side-chains that pass within 5 Å of each leaflet's (outer/inner) headgroup particles, NanC has the smallest disparity (21/13) and OmpF the greatest (72/18). We may exploit this difference between NanC and OmpF to explore the influence of electrostatic interactions on lipid dynamics.

Comparing lipid diffusion coefficients between the inner and outer leaflets reveals that the mobility of annular phospholipids in the outer leaflet is generally less than that in the inner leaflet for all OMPs studied other than NanC (Fig. 3 and Supporting Information Fig. S4). The extent of asymmetry in diffusion between the leaflets can be estimated by examining the composition of the charged residues within each protein. OMPs with fewer charged residues proximal to the lower-leaflet headgroups than to the upper-leaflet headgroups exhibit a higher asymmetry. The extreme cases are NanC (little asymmetry) and OmpF (most asymmetry). NanC has a relatively high density of charged residues in the vicinity of the inner leaflet. (It is also more rotationally asymmetric than the other OMPs in terms of outwardly facing charged and aromatic residues.) The asymmetry is directly reflected in the time-averaged contacts made between the PO4 particles and the protein (Fig. 3C,D) and the time-averaged lipid density around the OMPs (Fig. 4 and Supporting Information Fig. S5). For all OMPs the outer leaflet lipids exhibit a high-density ring of PO4 headgroups directly around the protein, sometimes extending out to a second ring (particularly for FhuA; Fig. 4). The particularly high-density regions (up to 5 times the bulk density) are roughly equally and densely spaced in the outer leaflet. In the inner leaflet, whilst there is still an increase in density directly around the proteins, there is a distinct lack of these regularly spaced high-density regions (except perhaps for NanC; Supporting Information Fig. S5), and no evidence of any second radial density peak.
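The 5 Å charge-asymmetry count described above could, for example, be scripted with MDAnalysis, which the Methods state was used for most analysis (a single-frame sketch only; file names, the "PO4" particle name, the z-based leaflet split and the residue names are assumptions, and a full count would loop over the trajectory):

```python
import MDAnalysis as mda

# Single-frame sketch of the 5 Å charge-asymmetry count. File names, particle
# and residue names, and the crude z-based leaflet split are assumptions; a
# full analysis would loop over all trajectory frames.
u = mda.Universe("system.gro", "traj.xtc")

po4 = u.select_atoms("name PO4")
z_mid = po4.positions[:, 2].mean()                 # crude bilayer midplane
outer = po4[po4.positions[:, 2] > z_mid]
inner = po4[po4.positions[:, 2] <= z_mid]

def charged_contacts(headgroups):
    """Charged residues with side-chains within 5 Å of the given headgroups."""
    sel = u.select_atoms("protein and resname LYS ARG ASP GLU "
                         "and around 5 group hg", hg=headgroups)
    return len(sel.residues)

print("outer/inner contacts:",
      charged_contacts(outer), "/", charged_contacts(inner))
```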
Thus, we can see that inner/outer leaflet asymmetry in the distribution of residues on the surface of the protein can in turn introduce dynamic asymmetry into the membrane as a whole. For the systems studied, such immobilisation is largely due to electrostatic interactions between the protein and lipid headgroups. The exact location of the interaction-density hotspots also gives us an insight into the location of potential binding sites where OMP-lipid interactions may have a functional importance, as for the E. coli outer membrane enzyme OmpT [42]. Other properties of the bilayer in the vicinity of the protein, such as bilayer thickness, show deviations from bulk properties extending out to 20 Å from the protein surface (see Fig. 4C and Supporting Information Fig. S6). This will be important when we come to explore more crowded membranes, in which the separation between adjacent proteins falls within this distance.

Individual Protein Diffusion

The two-dimensional rotational and translational diffusion coefficients of all five OMPs were calculated. The translational diffusion coefficients are shown in Fig. 5A as functions of the inverse radius of gyration of the proteins. The OMP:phospholipid number ratio within these simulations was ca. 1:2500, which ensured two things: firstly, that we are calculating diffusion of single proteins in a 'bulk' bilayer; and secondly, that interactions between periodic images were minimised. The latter is important, as a previous MD study [20] has demonstrated the strong influence that system size has on the calculated lipid diffusion. Whilst not a realistic environment, bearing in mind the crowded nature of in vitro and in vivo membranes, these values provide a benchmark against which to compare more complex simulations.

Both the rotational and translational diffusion display a roughly linear correlation with the logarithm of the inverse of the radius of gyration (i.e. ln(R_gyr⁻¹)) (see Fig. 5A,B and Supporting Information Fig. S7). Although, as noted above, this correlation is seen at a low protein concentration not representative of an in vivo environment, it is to some extent consistent with recent experimental studies of the dependence of membrane protein diffusion rates on protein size [37,[43][44][45]], providing an additional degree of confidence in the CG model, although there remains some debate as to the preferred theoretical explanation of these data.

Multiple Proteins: Lipid & Protein Diffusion in Crowded Bilayers

By increasing the number of protein molecules within a membrane patch, we are able to explore the effect of increasing the degree of crowding on both lipid and protein diffusion.
In Fig. 6A we show the effect of increasing protein concentration on average phospholipid COM diffusion coefficients. Each system was run initially for 1 µs with x-y restraints on all Cα particles, resulting in lipids diffusing amongst a grid of static OMPs. Subsequently, restraints were lifted from all but one central particle per OMP, resulting in a further 1 µs of simulation of lipids diffusing within a grid of freely rotating OMPs. Before the final 1 µs all restraints were lifted, allowing the OMPs to diffuse laterally amongst the lipids. As could be anticipated from the results above, the lipid diffusion coefficients are reduced when more proteins are present, as more lipids are within 30 Å of a protein. More surprisingly, given the range of individual protein sizes and the dynamics of the systems in terms of clustering, the effect of protein crowding can be captured by collapsing all data points onto a single line fitted to the (translational) diffusion coefficient versus the protein area fraction (h) of the bilayer (Fig. 6A and Supporting Information Fig. S8). Interestingly, the first two simulations, in which the proteins were forced to remain on an initial grid, showed only a slight reduction in lipid mobility compared to the fully unrestrained system. This suggests that the protein mobility in the unrestrained system is far lower than that of the lipids, so that the proteins can be treated as relatively immobile on the timescale of lipid diffusion. This effect is exaggerated for larger OMPs or clusters of small OMPs. Thus, in a crowded system comparable to an in vivo membrane (i.e. h ca. 0.5) it is clear that lipid diffusion will be slowed by a factor of ca. 2.5 relative to bulk.

There has been considerable discussion recently of anomalous diffusion of lipids in both pure lipid bilayers [24,25,46] and in bilayers containing membrane proteins [15,27]. We believe that the effect of crowding on anomalous diffusion may require a more detailed exploration, using larger simulation systems in order to capture the intricacies involved on longer timescales [15] than is possible in the present study. However, in a preliminary analysis of whether increased protein concentration leads to the onset of anomalous diffusion (as suggested by [27]), we calculated α from the MSD of all lipids in each system. This analysis reveals a decrease in the anomalous diffusion exponent α as the protein area fraction h is increased (Fig. 6C). We note that for the most crowded bilayers (h > 0.4) the anomalous diffusion exponent can be as low as α < 0.8. In agreement with recent studies of crowded (h = 0.34) bilayers containing the NaK channel protein [27], we suggest that the decrease in α is likely a result of restriction of lipid motions by the proteins surrounding them (see below). A recent review of experimental and computational studies of membrane diffusion [15] notes that, whilst there are multiple possible sources of anomalous diffusive dynamics in membranes, the evaluation of the anomalous diffusion parameter α derived from experimental measurements on cellular membranes is challenging. Our simulations suggest that protein crowding within a membrane is a possible source of the observed anomalous diffusion, whilst noting the large differences between simulation and experimental timescales, and between in vitro and in vivo timescales.
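Extracting the anomalous exponent α from an MSD curve, as in the preliminary analysis above, amounts to a two-parameter fit (a minimal sketch with synthetic data; Eqn. 3 is defined in the Methods):

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit MSD(t) = 4 * D_a * t**alpha (Eqn. 3, Methods) to extract
# the anomalous exponent alpha; alpha < 1 indicates sub-diffusion. The t grid
# and the synthetic MSD below are placeholders for trajectory-derived data.
def msd_model(t, d_a, alpha):
    return 4.0 * d_a * t ** alpha

t = np.linspace(1, 200, 200)                 # ns
msd = 4 * 8.5 * t ** 0.8                     # synthetic sub-diffusive MSD, Å^2

(d_a, alpha), _ = curve_fit(msd_model, t, msd, p0=(1.0, 1.0))
print(f"D_a = {d_a:.2f} Å^2/ns^alpha, alpha = {alpha:.2f}")
```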
We may also examine the effect of the crowded membranes on protein diffusion (Fig. 6A). From these data it is clear that a significant reduction in protein mobility occurs at higher h values. Thus at h ca. 0.5 (comparable to cell membranes), translational diffusion coefficients can be reduced to 10% or less of those in bulk membranes (see, e.g., the points for FhuA and OmpF in Fig. 6A). This provides direct support for the extrapolation from in vitro data at lower h made by, e.g., [37], and is consistent with data from FRAP measurements in mammalian cells [47]. Indeed, our simulations and analysis of high-h OmpF-containing membranes are in agreement with the hindered diffusion of OmpF seen in recent high-speed AFM of reconstituted OmpF-containing membranes [6], although the longer timescale (ca. 1 s) of the latter precludes quantitative comparisons.

We note that during these simulations protein clustering takes place, and so each system may not be at equilibrium in terms of protein-protein interactions. The relative probability of aggregation occurring within the simulations is controlled by two opposing effects: the size (and hence lateral speed) of each individual protein, and the total protein area fraction (h) of the bilayer. For the 3×3 arrays shown in Fig. 6B, FhuA clusters most rapidly, due to the smaller distance needed to travel to encounter another protein, whereas NanC and OmpA have to travel further before colliding due to their reduced size. The FhuA 3×3 and NanC 4×4 arrays are roughly equal in terms of h, and here the increased speed of the NanC molecules dominates, causing more rapid clustering. For some of the slower systems (2×2 OmpF), clustering is not observed at all over a 1 µs simulation timescale. Thus, the equilibrium state of each system is likely to consist of a large cluster (or network) of proteins surrounded by lipids (cf. [6]). For some of the more crowded systems, long two-dimensional 'chains' of interacting OMPs are formed that can stretch across periodic boundaries, forming one continuous network (Fig. 6A inset). All lipids are trapped within distinct regions of this network bounded by OMPs, thus severely reducing the long-time diffusion of the lipids through compartmentalisation. We also note a strong orientational preference for 'tip-to-tip' interactions, where the interaction surface is limited to a single monomer from each trimer, within our crowded OmpF simulations, in contrast to the more varied interactions seen in [6]. However, our simulations have not yet sampled full equilibrium conformations, and it is likely that the long-term (ms) evolution of these systems may see rearrangements of the protein-protein interfaces.
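A mean-cluster-size measure of the kind plotted in Fig. 6B can be sketched via connected components of a contact graph (illustrative only; the paper's exact clustering criterion is not stated, and the 60 Å COM cutoff is an assumption that also ignores periodic boundaries):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Illustrative mean-cluster-size measure: two proteins belong to the same
# cluster if their COMs lie within a cutoff. The cutoff and positions are
# placeholders, and periodic boundaries are ignored for brevity.
def mean_cluster_size(com_xy, cutoff=60.0):
    n = len(com_xy)
    d = np.linalg.norm(com_xy[:, None, :] - com_xy[None, :, :], axis=-1)
    adjacency = csr_matrix((d < cutoff) & ~np.eye(n, dtype=bool))
    n_clusters, labels = connected_components(adjacency, directed=False)
    return np.mean(np.bincount(labels))       # average proteins per cluster

com_xy = np.random.rand(16, 2) * 285          # synthetic 4x4 patch, Å
print("mean cluster size:", mean_cluster_size(com_xy))
```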
Discussion

We have shown that within our coarse-grained lipid simulations the long-term diffusive behaviour of bulk lipids is normal, with a value of D = 8.5 × 10⁻⁷ cm² s⁻¹. This compares reasonably well to other reported coarse-grained (e.g. [27,30]) and atomistic (e.g. [24]) simulation values. At short timescales an anomalous regime exists that is characterised by particle and molecule vibrations. The short-term sub-diffusive regime up to 20 ns and the transition to normal Fickian diffusion are also in good agreement with a recent atomistic study of DMPC [24]. Whilst the coarse-grained potential captures both of these regimes, it also potentially enables us to explore far longer timescales with more lipids and more complex membrane components (i.e. LPS in the bacterial outer membrane or cholesterol in mammalian membranes). This is essential if we are to attempt to describe the dynamics of even simple in vitro systems and eventually in vivo bilayers. However, increasing the complexity of the membrane is challenging in terms of computational resources, as the time required for system equilibration may increase substantially, and sampling issues arise for slow-moving components. In particular, given observations here and elsewhere [27] of the effects of clustering of interacting proteins in crowded systems on anomalous diffusion of lipids, it will be of considerable interest to extend studies to large crowded systems containing multiple species of lipids and proteins. Such studies will also enable more detailed examination of the effect of clustering on the anomalous diffusion of proteins, as has been observed in simplified models of membrane protein oligomerization [31].

Experimental studies vary widely in the reported diffusion coefficients of both lipids and proteins, depending on the experimental technique used. For example, relatively long-timescale (millisecond) FRAP data predict lower diffusion coefficients than shorter-timescale (sub-nanosecond) QENS data [48]. Comparisons between FRAP data are also complicated by the large variety of membrane environments studied (GUVs, supported bilayers, in vivo membranes, etc.). However, recent FRAP studies of 'crowded' GUVs by Ramadurai et al. [37] yielded lipid (DOPC/DOPG: 1.1 × 10⁻⁷ cm² s⁻¹) and protein (e.g. LacY: 0.4 × 10⁻⁷ cm² s⁻¹) diffusion coefficients in reasonable agreement with those above. Interestingly, these authors reported some degree of anomalous diffusion (α ca. 0.9) of protein at their higher (3000 µm⁻²) degrees of crowding.

In summary, large-scale simulations allow us to probe directly 'in vivo' regimes which are difficult to address by direct in vitro experiments. Thus simulations provide a link between structural-level biophysics and studies of complex membranes [6] and cells [49], enabling us to understand the emergent spatial and temporal complexities of cell function consequent upon crowding of membrane components.

Methods

All simulations were run using Gromacs 4.5.3 [50] (www.gromacs.org) and a local modification [51][52][53] of the MARTINI coarse-grained force field [54,55]. Note that throughout this paper we report simulation times directly, without the application of a scaling factor of 4. The following OMPs (PDB codes) were downloaded from
www.rcsb.org, stripped of crystallographic waters and other non-protein molecules, and loop regions completed where needed with Modeller 9v8 [56]: FhuA (1BY3), LamB (1AF6), NanC (2WJQ), OmpA (1BXW) and OmpF (2OMF). The atomistic structures were then converted to a coarse-grained representation [54,55] using the CG protocol described previously [53], with a CG particle typically representing four heavy atoms. Each OMP was energy minimised and embedded into a preformed equilibrated bilayer in 1×1, 2×2, 3×3 and 4×4 grids (where possible). Each membrane patch consists of ca. 75,000 particles, with ca. 2500 lipids, and overall periodic dimensions of 285 × 285 × 105 Å. Two bilayer compositions were used: POPE, and a mixture of POPE:POPG (3:1), though only the POPE:POPG data are presented in this work. Na⁺ counter-ions were added to achieve a neutral electric charge, and the solvent and lipids were equilibrated around the protein for 100 ns with the protein Cα particles restrained in the x-y plane.

Production runs were then carried out in 3 stages, gradually lifting the restraints on the proteins. For the first 1 µs all Cα particles were restrained (allowing neither rotational nor translational diffusion), for the second 1 µs one central particle was restrained in the x-y plane (allowing only rotational diffusion), and for the final 1 µs all restraints were lifted. The reasoning behind this approach was that it would allow an investigation into the effect of embedded objects (OMPs) with varying degrees of mobility on the lipid dynamics. The 1×1 simulations were extended for a further 5 µs (with no restraints) to provide better sampling in the calculation of the diffusion of individual OMPs. All protein positional restraints were of harmonic form, with a restoring force constant of 1000 kJ mol⁻¹ nm⁻² imposed in the x and y directions. We have provided an mdp file in the Supporting Information (Supporting Information data file S1) for a typical 1×1 protein-in-bilayer simulation.

All production simulations were run at 313 K with Berendsen semi-isotropic pressure coupling [53] at 1 bar and separate temperature coupling for the solvent, lipids and protein. A timestep of 20 fs was used; electrostatic interactions were smoothly shifted to zero at 12 Å and Lennard-Jones interactions from 9-12 Å. An elastic network model [57] was used to constrain Cα particles within 7 Å of each other, with a force constant of 10 kJ mol⁻¹ Å⁻², to ensure that the β-barrel structure was preserved.

MDAnalysis [58] and in-house scripts were used for most trajectory manipulation and analysis. Visualisation was performed in VMD [59] and PyMOL [60].

Calculation of Diffusion Coefficients

Lipids. The two-dimensional diffusion coefficients of the lipids in all simulations were calculated in two ways. (1) A fit (at observation time Δt) to the two-dimensional probability distribution of lipid centre of mass (COM) displacements (Eqn. 1):

P(r, Δt) = r/(2DΔt) exp(-r²/(4DΔt))

where P is the probability density function of r after Δt, D is the 2-dimensional diffusion coefficient, Δt is the observation time, and r is the lateral displacement over the period Δt. This approach has previously been used by Niemelä et al. [26]. In this method Δt can be considered a 'true' observation time, as only particle coordinates at t and t+Δt are used; everything else is discarded. When applied to lipid membrane systems, this method often results in different diffusion coefficients at different observation times, suggesting diffusion is non-linear.
(2) Diffusion is extracted from a plot of mean square displacement versus time by fitting D_α (units cm² s⁻ᵅ) and α. In this study the mean square displacement of the COM of each lipid and of the individual head-group particles was calculated. For the special case of normal diffusion, α = 1 and D reverts to the conventional units of cm² s⁻¹ (Eqn. 2):

⟨r²(t)⟩ = 4Dt

Sub-diffusion is present when α < 1 (Eqn. 3), as has been observed in other studies of lipid membranes:

⟨r²(t)⟩ = 4 D_α t^α

where D_α is the 2-dimensional translational diffusion coefficient, r is the lateral displacement at time t, and α is the anomalous diffusion exponent. The MSD of sub-trajectories of length Δt is averaged over the entire trajectory to improve convergence. No 'restart' time was implemented, as large time correlations are present within the system.

Proteins. Protein diffusion coefficients are calculated from the mean square displacement (MSD) as a function of time according to the diffusion equations 2 and 4. Δt sub-trajectory values between 1 ns and 200 ns were examined. This method was chosen rather than fitting to a probability distribution because of sampling issues, as each simulation contains only one OMP, a single relatively slow-moving entity. 6 µs of trajectory was used in the calculation of each OMP diffusion coefficient. To calculate a standard error, diffusion coefficients were calculated for 6 × 1 µs blocks within each trajectory. The COM of the entire protein was used to calculate translational diffusion. Rotational diffusion was based upon the vector between the COMs of the two trans-membrane halves of the protein, split equally down the middle (Eqn. 4):

⟨Θ²(t)⟩ = 2 D_rot t

where D_rot is the 2-dimensional rotational diffusion coefficient, and Θ(t) is the rotation at time t.
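For illustration, the displacement-distribution fit of Eqn. 1 can be performed with a standard curve fit (a sketch with synthetic data, assuming the two-dimensional radial form given above):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Eqn. 1 displacement-distribution fit, assuming the standard
# 2D radial form P(r, dt) = r/(2*D*dt) * exp(-r^2/(4*D*dt)). The 'hist' array
# is synthetic; in practice it would be binned lateral displacements.
def p_r(r, d, dt):
    return r / (2 * d * dt) * np.exp(-r ** 2 / (4 * d * dt))

dt = 50.0                                    # observation time, ns
r = np.linspace(0.1, 80, 160)                # displacement bins, Å
hist = p_r(r, 8.5, dt)                       # stand-in for observed P(r)

(d_fit,), _ = curve_fit(lambda r, d: p_r(r, d, dt), r, hist, p0=(1.0,))
print(f"D = {d_fit:.2f} Å^2/ns = {d_fit * 1e-7:.1e} cm^2/s")
```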
Figure 1. (A) Lipid diffusion coefficients as a function of observation time (Δt) for: the lipid center of mass (red line), and for the phosphate (PO4, green lines) and choline (NH3, blue line) or glycerol (GLH, pink line) particles of the headgroup. These diffusion coefficients were estimated from a 5 µs simulation of a lipid bilayer containing a 3:1 mixture of POPE/POPG. (B) Coarse-grained structure of the POPE and POPG lipids illustrating the particle types. doi:10.1371/journal.pcbi.1003033.g001

Figure 2. Phospholipid center of mass diffusion coefficients for the outer leaflet of the bilayer as a function of distance from the protein and of observation time. Each point represents the diffusion of lipids within annuli of 10 Å width (i.e. a point at 5 Å represents lipids within the first annulus, 0-10 Å from the protein surface). The data on each plot are calculated from 6 µs trajectories of a single protein in a 3:1 POPE:POPG bilayer. Error bars are the standard errors of 6 × 1 µs sub-trajectories. Inset on each plot is the protein investigated, showing acidic (red) and basic (blue) surface residues. The proteins and their PDB ids are: (A) OmpA (1BXW), (B) NanC (2WJQ), (C) FhuA (1BY3), and (D) OmpF (2OMF). doi:10.1371/journal.pcbi.1003033.g002

Figure 3. Leaflet asymmetry of diffusion coefficients illustrated for NanC and OmpF. Ratio of inner to outer leaflet center of mass diffusion coefficients for (A) NanC and (B) OmpF as a function of distance from the protein at differing observation times. Error bars are the standard errors of 6 × 1 µs sub-trajectories. Asymmetry can be seen in the OmpF simulations for distances from the protein of <20 Å. (C,D) Cα trace representations of the corresponding proteins, coloured by the time-averaged number of protein contacts (cutoff 7 Å) with lipid phosphate particles on a blue (0%) to red (100%) scale. Corresponding diagrams for all five proteins can be found in the Supporting Information, Fig. S4. doi:10.1371/journal.pcbi.1003033.g003

Figure 4. Time-averaged two-dimensional phosphate particle densities (Å⁻²) around FhuA for the outer (A) and inner (B) leaflets. Proximal acidic/basic residues are shown as blue/red points. The Cα trace is shown in black. Corresponding diagrams for all five proteins can be found in the Supporting Information, Fig. S4. (C) Time-averaged two-dimensional bilayer distortions from bulk thickness in the vicinity of FhuA. Bilayer thickness is calculated based on the minimum distance between the two closest PO4 particles in opposing leaflets. Acidic/basic residues are shown as blue/red points. The Cα trace is shown in black. doi:10.1371/journal.pcbi.1003033.g004

Figure 5. (A) Translational and (B) rotational diffusion of the five OMPs as a function of the logarithm of their inverse radius of gyration (ln(R_gyr⁻¹)) for varying observation times (Δt). The proteins are, from left to right along the x-axis: LamB, OmpF, FhuA, NanC and OmpA. The standard deviations of the diffusion coefficients calculated from 6 × 1 µs sections of each 6 µs trajectory are shown as error bars. doi:10.1371/journal.pcbi.1003033.g005

Figure 6. (A) Center of mass diffusion of lipids (circles; left-hand axis) and proteins (crosses; right-hand axis) as a function of the area fraction of the bilayer occupied by protein (h), for Δt = 20 ns. Magenta = OmpA system; dark blue = NanC; red = FhuA; cyan = OmpF. The inset figure shows a snapshot of the 4×4 FhuA system after 1 µs, illustrating how lipids are compartmentalised by a boundary of contiguous interacting proteins. (B) Clustering in the 3×3 and 4×4 simulations, shown using the mean cluster size. (C) Anomalous diffusion exponent α as a function of the area fraction of the bilayer occupied by protein (h). doi:10.1371/journal.pcbi.1003033.g006
2016-01-29T17:58:53.149Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "7fe7a45872c5c6c7e4297bd29b0f233693324977", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1003033&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7fe7a45872c5c6c7e4297bd29b0f233693324977", "s2fieldsofstudy": [ "Biology", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine", "Computer Science" ] }
257337131
pes2o/s2orc
v3-fos-license
BAMBI: A new method for automated assessment of bidirectional early-life interaction between maternal behavior and pup vocalization in mouse dam-pup dyads

Vital early-life dyadic interaction in mice requires a pup to signal its needs adequately, and a dam to recognize and respond to the pup's cues accurately and timely. Previous research might have missed important biological and/or environmental elements of this complex bidirectional interaction, because it often focused on one dyadic member only. In laboratory rodents, the Pup Retrieval Test (PRT) is the leading procedure to assess pup-directed maternal care. The present study describes BAMBI (Bidirectional Automated Mother-pup Behavioral Interaction test), a novel automated PRT methodology based on synchronous video recording of maternal behavior and audio recording of pup vocalizations, which allows assessment of bidirectional dam-pup dyadic interaction. We were able to estimate pup retrieval and pup vocalization parameters accurately in 156 pups from 29 dams on postnatal days (PND) 5, 7, 9, 11, and 13. Moreover, we showed an association between the number of emitted USVs and retrieval success, indicating dyadic interdependency and bidirectionality. BAMBI is a promising new automated home-cage behavioral method that can be applied to both basic and preclinical studies investigating complex phenotypes related to early-life social development.

Introduction

Neonatal mouse pups depend on their dam for nutrition, thermoregulation, and protection (Nowak et al., 2000). They produce acoustic signals to communicate their vital needs, and particularly ultrasonic vocalizations (USVs) are essential to evoke maternal care behaviors, such as retrieval of pups that have dangerously strayed from the nest (Wöhr et al., 2010; Bornstein et al., 2017). Early-life USVs can be used to study the genetic and neural basis of early-life communication and to assess early-life communicative defects and their impact on social development (Hahn and Lavooy, 2005; Scattoni et al., 2009; Reynolds et al., 2017; Potasiewicz et al., 2019).
Moreover, early-life maternal care has been shown to affect the pup's physical and functional development in a very broad sense (Caldji et al., 2000; Meaney et al., 2000; Braungart-Rieker et al., 2001; Shin et al., 2008; Curley and Champagne, 2015). Establishing effective bidirectional communication requires not only that the pup is able to signal distress effectively, but also that the dam is able to perceive, process and respond accurately and timely to these cues (Shin et al., 2008; Bornstein et al., 2017). The separation-induced USV test has been used to assess the quantity and quality of pup USVs after separation from the dam and litter (Hahn and Lavooy, 2005), but it is essentially a unidirectional behavioral assay that focusses on the pup. On the other hand, assays such as the pup retrieval test (PRT) or USV playback tests center on maternal behaviors, such as search and retrieval (Sewell, 1970; Smotherman et al., 1974; Ehret and Haack, 1982; Ehret, 1992, 2005). Some studies implemented both unidirectional procedures separately, but assessed statistical association afterward (Bowers et al., 2013; Abuaish et al., 2020). Combining both procedures in one behavioral assay has several advantages. First, the behavioral readouts can be sampled in a single assay, which reduces workload and microenvironmental variability originating, for example, from differences in animal transportation and handling (Sukoff Rizzo and Silverman, 2016; Ey et al., 2020). Second, communication and social competence can be investigated as a bidirectional process in the same animals (Vogel et al., 2019). Third, the complex interaction between deficits in dam and pup can be investigated (Kelly et al., 2000). The latter is particularly important in rodent models of disorders with early-life communication deficits, such as autism or fetal alcohol syndrome (Kelly et al., 2000; Bosque Ortiz et al., 2022). Therefore, we present BAMBI (Bidirectional Automated Mother-pup Behavioral Interaction test), a combined, automated approach to assess early-life communicative bidirectionality in laboratory mice. The automated PRT, as described in Winters et al. (2022), was expanded with simultaneous recording and automated detection of pup USVs.

Materials and methods

Animal housing and breeding

C57BL/6J mice from Janvier Labs (Le Genest-Saint-Isle, France) and the KU Leuven Animal Facility (Leuven, Belgium) were bred using timed matings and kept on a 12/12-h light-dark cycle (lights on at 7 a.m.), with ad libitum water and food in conditioned rooms (22 °C, humidity 30%). The morning after mating was considered gestational day (GD) 0.5. On GD0.5, dams were housed individually for the remainder of the pregnancy, and pregnancies were confirmed between GD7.5 and GD10.5 by recording weight evolution based on Heyne et al. (2015). All experimental procedures were approved by the Animal Ethics Committee of KU Leuven (P028/2018), in accordance with European Community Council Directive 86/609/EEC, the ARRIVE guidelines and the ILAR Guide to the Care and Use of Experimental Animals.

Experimental groups

In compliance with the reduction principle, mice for the present methodological work were obtained from an independently designed pharmacological study, in which pregnant dams were injected with valproic acid sodium salt (VPA) in order to generate pups representing a neurodevelopmental model of autism. Pups were pharmacologically treated to attempt a rescue of the behavioral impairment.
More specifically, pregnant dams (N = 44) received a single subcutaneous injection of 60 mg/ml VPA (Sigma Aldrich, Taufkirchen, Germany) dissolved in saline solution on GD12.5. The day of birth was considered PND0. To standardize nest composition, nests were culled to six pups on PND0, and every nest needed to have at least four pups with both sexes present. These restrictions resulted in 29 dams with viable progeny and a total of 156 pups for further testing. Pups were subcutaneously injected daily from PND1-7 with a low (0.5 mg/kg) or a high (2 mg/kg) dose of THIQ (N-[(1R)-1-[(4-chlorophenyl)methyl]-2-[4-cyclohexyl-4-(1H-1,2,4-triazol-1-ylmethyl)-1-piperidinyl]-2-oxoethyl]-1,2,3,4-tetrahydro-3-isoquinolinecarboxamide; Bio-Techne, Abingdon, UK) or a PBS-DMSO control vehicle [doses adapted from Mastinu et al. (2018)]. THIQ was dissolved in PBS and 3.5% DMSO. In total, nine litters (48 pups) were injected with the low THIQ dose, 10 litters (50 pups) with the high THIQ dose, and 10 litters (58 pups) with the PBS-DMSO control vehicle. The pharmacological effects are not the focus of the present study and will be described in a separate study. The aim of the current study is to present a proof-of-principle demonstration of the feasibility and validity of a new automated method for behavioral testing of early-life mother-pup bidirectional interactions. Since pharmacological effects were not relevant for the present study, we employed a general linear model (GLM) in which drug effect was set as a fixed factor, in order to correct for drug effects (see Calculation of parameters and statistics), and the three drug groups (low-dose THIQ, high-dose THIQ and vehicle) were pooled into one group.

Figure 1. Image of the BAMBI testing setup. The BAMBI test was performed in the home-cage and included a cup on PND7-13 to prevent the pups from crawling back into the nest. An ultrasound microphone was placed approximately 5 cm above the test pup's corner in order to minimize interference from USVs emitted by the pups in the nest (in the opposite corner). A video camera was mounted above the home-cage.

For the present study, animals were divided into the following groups: dams (n = 29) and pups (n = 156). For the pup sex effect analysis, animals were divided into three groups: dams (n = 29), male pups (n = 72) and female pups (n = 84). For the subsequent analyses investigating the general behavioral interactions between mother and pups, we corrected for pup sex effects through a GLM model in which pup sex was set as a fixed factor, and we pooled male and female pups together. Mice were tested at five time-points: pup postnatal day (PND) 5, 7, 9, 11, and 13.

Pup retrieval test (PRT) protocol

The PRT was performed as described previously by Winters et al. (2022). Briefly, the test is performed in the home-cage, which is placed inside a Styrofoam box (Figure 1; 370 mm × 300 mm × 330 mm) to create a visually isolated environment. A single pup was removed from the nest and placed in a clean glass cup pre-heated to 35 °C using a heating pad. A trial was started by a beep when the dam was on the nesting site. Hereafter, the pup was placed in the corner furthest from the nest. Trials had a maximum duration of 100 s after the beep, and when the pup was not retrieved within this time, it was returned to the nest by the experimenter. On PND7-13, since pups had developed more mature motor skills, they were kept from crawling back into the nest by placing them in a cup (Figure 1; 90 mm diameter and 55 mm height), as described by Esposito et al. (2019).
Per dam, the PRT was repeated six times on PND5 and, due to practical limitations, four times per dam on PND7-13. For all test ages, pup sex was counterbalanced per dam, and pups were not marked during this test to avoid odor interference. Pups thus could not be identified, meaning that a pup might have been tested more than once. Maternal trial sequence was defined as the order of trials within a dam on a specific testing day. During each trial, PRT performance was scored by the experimenter performing the test (two experimenters in total) for latency to retrieval (s) and retrieval success (0 = not retrieved; 1 = retrieved).

Ultrasonic vocalization recording and pre-processing

Pup USVs were recorded using an ultrasound microphone (Dodotronic Ultramic UM250K, Rome, Italy) connected to a personal computer equipped with Avisoft SASLab Lite software (Avisoft Bioacoustics, Berlin, Germany). The microphone was placed approximately 5 cm above the pup's corner or cup to minimize interference by USVs emitted by the pups in the nest. USVs were recorded for 100 s, with a sampling rate of 250 kHz and 16 bits. Audacity open-source software (version 3.1.3) was used to remove the DC (direct current) offset, and an equalization (EQ) curve filter was used to remove all signal below 40 kHz (Figure 2).

Synchronization of USV and behavioral pup retrieval recording

A Foscam C2 IP camera (EUport, Wageningen) was mounted over the home-cage to record maternal behavior. Per dam, one video was recorded including six PRT trials on PND5 or four PRT trials on PND7-13. A PRT trial started after the dam was back on the nest, and a beep, manually played on a smartphone, introduced the placement of the pup in the corner of the home-cage furthest from the nest. Synchronization of behavioral and audio data was achieved by identifying the beep using frame-precision Shotcut software (version 22.10.25, Meltytech, LLC); that is, video or audio signals can be split precisely per individual frame. Here, videos were recorded at a frame rate of 30 frames per second (fps) and were split before the first frame in which the beep was audible. The end of the video was defined as 100 s after the first frame with an audible beep. Similarly, the audio recordings were made in Avisoft with a sampling rate of 250 kHz, and Shotcut was used to remove the signal before the beep using a frame rate of 25 fps. Again, the start of the recording was defined as the sampling point before the first fragment in which the beep was audible. For USV recordings, the end was not defined, as Avisoft automatically ended sampling after 100 s.

Figure 2. Exemplary spectrogram of ultrasonic vocalizations emitted by pups. Frequency (Hz) is shown on the y-axis over a fixed time of 1,000 ms on the x-axis.

Automated detection of pup USVs using Deep Audio Segmenter

Ultrasonic vocalization detection was performed using a custom-built model with Deep Audio Segmenter (DAS; Steinfath et al., 2021). DAS 0.26.7 was installed in an Anaconda environment with Python 3.9.13, and training was performed using the DAS notebook on Google Colaboratory [COLAB; (Steinfath et al., 2021)]. Thirty-two audio recordings were pseudo-randomly selected across two different experiments at five ages (postnatal day 5, 7, 9, 11, and 13) and both sexes. Across all recordings, 2,189 pup vocalizations were manually annotated as segments, which is equivalent to 67.7 s of pup USVs. Per audio recording, 80% of the annotated USVs were assigned to the training dataset, 10% to the testing dataset and 10% to the validation dataset.
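For illustration, the DC-offset removal and 40 kHz high-pass step described above could also be scripted rather than performed interactively in Audacity (a sketch; the file name and the 8th-order Butterworth filter are assumptions):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

# Illustrative pre-processing sketch (the authors used Audacity): remove the
# DC offset, then high-pass at 40 kHz to keep only the ultrasonic band.
# The file name and the 8th-order Butterworth filter are assumptions.
fs, audio = wavfile.read("pup_trial.wav")        # fs expected to be 250 kHz
audio = audio.astype(np.float64)
audio -= audio.mean()                            # DC offset removal

sos = butter(8, 40_000, btype="highpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, audio)               # zero-phase high-pass
wavfile.write("pup_trial_hp.wav", fs, filtered.astype(np.int16))
```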
A pup USV network was trained in Google COLAB; structural training parameters were chosen based on Steinfath et al. (2021) and can be found in Supplementary File 1. DAS automatically stops training when the validation loss of the model has not improved in 20 epochs (Steinfath et al., 2021). The pup vocalization model did not improve after 44 epochs, and the performance of this detection model was assessed using precision, recall, F1 scores and overall accuracy. Precision is the percentage of 'true cases' per 'detected cases'. Recall, on the other hand, is the percentage of 'true cases' per 'manually annotated cases'. The F1 score is the harmonic mean of precision and recall.

Data were post-processed for quality control using a custom-built R script to resolve inaccuracies. In the videos, the time was recorded between the first detectable beep segment and the first frame where the hand of the researcher was completely out of the setup after placing the pup in it. All detected USVs within this time interval were removed from the recording.

Expanded body part tracking

The resulting dataset included 212 frames and was used to retrain the original network from Winters et al. (2022). DeepLabCut 2.2b8 [DLC; (Mathis et al., 2018)] was installed in an Anaconda environment with Python 3.7.7 on a laptop equipped with an Intel Core i5-8350U CPU, 8 GB RAM and a Windows 10 64-bit operating system. Training, evaluation and analysis of the expanded model were performed using DLC in Google COLAB.

Learning strategy and performance evaluation of the PRT pose estimation model

The PRT dam-pup tracking model developed by Winters et al. (2022) was trained to track only PND5 pups in a home-cage without a cup. As C57BL/6J pup body shape changes significantly between PND5 and PND13, and the use of a cup is a significant context change that elicits different maternal poses, the model needed to learn these changes. A two-phase hybrid learning strategy was used, similar to Gorssen et al. (2022). In the first phase, fourteen extra single-trial video recordings were selected because of their variability in pup age and/or modulated home-cage environment. Fifteen frames per video were extracted using k-means clustering in DLC and labeled manually. Additionally, 10 extra outlier frames per video were extracted with the original model using the DLC 'jump' algorithm. Labels in these outlier frames were manually refined, and frames were only annotated if both dam and pup were visible. The resulting dataset included 212 frames, and the original PRT model was retrained with a 95:5 train/test ratio using the same features as Winters et al. (2022). The model was trained for 47,000 iterations and had a mean pixel error over all body parts of 4.29 px for the training dataset and 14.35 px for the test dataset. In the second training phase, all of the original labeled data were combined with the data from the first training phase. The output model from the first training phase was then re-trained using the entire dataset with a 95:5 train/test ratio. After 18,000 iterations, the model had a mean pixel error over all body parts of 6.34 px for the training dataset and 10.11 px for the test dataset. Applying a p-cutoff (p = 0.10) improved the mean pixel error to 5.54 px (or 1.96 mm) on the training dataset and 8.82 px (or 3.13 mm) on the test dataset. Average pixels per millimeter did differ between the original dataset and the data used to extend the dataset.
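A sketch of this two-phase retraining loop using DeepLabCut's documented API is given below (paths and video names are placeholders; the exact calls used by the authors are not stated):

```python
import deeplabcut

# Sketch of the two-phase retraining described above, using DLC's API.
# Paths are placeholders; iteration counts follow the text.
config = "/path/to/PRT-project/config.yaml"
new_videos = ["/path/to/new_trial_01.mp4"]        # videos with novel traits

# Phase 1: annotate new frames (k-means picks plus refined 'jump' outliers),
# then retrain the original network on the new data only.
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
deeplabcut.extract_outlier_frames(config, new_videos, outlieralgorithm="jump")
deeplabcut.refine_labels(config)
deeplabcut.merge_datasets(config)
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config, maxiters=47000)

# Phase 2: combine old and new labels and retrain from the phase-1 weights.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config, maxiters=18000)
deeplabcut.evaluate_network(config)
```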
Distance calculation, performed in Simple Behavioral Analysis [SimBA; (Nilsson et al., 2020)] as described by Winters et al. (2022), showed an average of 2.27 (SD = 0.3) px/mm in the original dataset, and an average of 2.87 (SD = 0.16) px/mm for the newly annotated data.

A custom-built R script was used to post-process the data (quality control) and to estimate retrieval time. First, a time correction was applied to ensure that tracking started at the precise moment the pup was placed in the test corner. Hereafter, the rolling median (90 frames) of the pup-to-nest distance was calculated to correct for inaccurate tracking in the first seconds of the PRT. The first frame where the rolling median exceeded 85 mm was determined; if this was not the first frame, the pup-to-nest distance for all preceding frames was set to 85 mm, as pups started at least 85 mm from the nest at the start of the PRT. Frames with a mean pup tracking probability over all body parts <0.01 were discarded, as these estimates were considered unreliable. Next, a smoothing algorithm was used to approximate the distance of the pup to the nest, using the stat_smooth function in R (loess method) with a smoothing factor of 0.25. Observed values deviating more than 15 mm from the smoothing estimate were set to missing. After these quality-control steps, retrieval time was estimated from the first frame in which the pup entered the nest.

Calculation of parameters and statistics

A custom-built R script (RStudio, Inc., Boston, MA) was used to allow direct comparison between the parameters of the video and audio analyses. Mean USV duration was calculated as the mean duration of all USVs emitted by the same pup within one trial. The USV rate before retrieval was calculated as:

USV rate (USVs/s) = number of USVs before retrieval / retrieval time (s)

Statistical analyses were performed using the GLM package in R for (binomial) regression models and the survival package in R for survival analysis via multivariate Cox regression for the trait retrieval success. All models were corrected for USV rate or average USV duration (covariate), sex (fixed effect), day of testing (fixed effect), maternal trial (covariate), and experimental condition (fixed effect).
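A compact re-implementation sketch of these quality-control and parameter-calculation steps is given below (the original is a custom R script; the use of pandas, the column names, the 30 fps frame rate and the 85 mm nest-entry threshold are assumptions, and the loess smoothing step is omitted for brevity):

```python
import numpy as np
import pandas as pd

# Illustrative Python version of the R quality-control steps (loess smoothing
# omitted). Assumed df columns: 'dist_mm' (pup-to-nest distance per frame) and
# 'mean_prob' (mean tracking probability over body parts); 30 fps video.
FPS = 30

def estimate_retrieval_time(df, start_dist_mm=85.0):
    df = df.copy()
    df.loc[df["mean_prob"] < 0.01, "dist_mm"] = np.nan       # unreliable frames
    rolling = df["dist_mm"].rolling(90, min_periods=1).median()
    first_far = int(np.argmax(rolling.to_numpy() > start_dist_mm))
    if first_far > 0:                                        # pup starts >= 85 mm away
        df.iloc[:first_far, df.columns.get_loc("dist_mm")] = start_dist_mm
    in_nest = np.flatnonzero(df["dist_mm"].to_numpy() < start_dist_mm)
    return in_nest[0] / FPS if in_nest.size else np.nan      # seconds; NaN if never

def usv_rate(n_usvs_before_retrieval, retrieval_time_s):
    """USV rate (USVs/s) = number of USVs before retrieval / retrieval time."""
    return n_usvs_before_retrieval / retrieval_time_s
```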
Performance evaluation of DAS audio detection

The USV detection algorithm achieved an overall accuracy of 99.7%. Noise was predicted with a precision of 99.8%, a recall of 99.9%, and an F1 score of 99.9%. Pup USVs were predicted with a precision of 94.3%, a recall of 90.2%, and an F1 score of 92.2%.

Validation of retrieval parameters

To validate the performance of the automated PRT, automatically estimated retrieval times were compared with manual recordings. Retrieval success was estimated with an accuracy of 90.4% (95% CI = 87.9-92.5), a sensitivity of 81.0% and a specificity of 94.4%. The confusion matrix (Table 1) showed inconsistencies in the prediction of retrieval success for 65 of 670 (9.7%) data entries. After visual inspection, 8 files (manual: pup not retrieved; automated: pup retrieved) involved pups walking themselves back into the nest; for 11 files the automated retrieval estimation was more accurate than the manual scores; and for 46 files the manual scores were more accurate than the automated estimations, due to tracking errors. For estimated retrieval time, the Pearson correlation between manual recordings and automated analysis was high (r = 0.86). However, estimates from the video analysis were on average 2.4 (SD = 17.8) seconds faster than manual recordings. Within test days (PND5-13), Pearson correlations ranged between r = 0.80 and r = 0.92 (PND5: r = 0.80; PND7: r = 0.80; PND9: r = 0.92; PND11: r = 0.92; PND13: r = 0.86).

To establish translatability of the current methodology, manual and automated recordings with a difference larger than 30 s were flagged based on the distribution of differences (Figure 3A). A total of 54 records were flagged, of which 31 were the previously inspected retrieval inconsistencies; the remaining 23 records were visually inspected (Figure 3B). To ensure methodological correctness, 41 automated pup retrieval time estimations were corrected to their manual estimation. Also, pups that walked themselves into the nest were removed from the dataset, as bidirectional behavior might be affected. The final dataset is visualized in Figure 3C, and the confusion matrix is shown in Table 2. This corrected dataset had an accuracy of 95.1% (95% CI = 93.2-96.6), a sensitivity of 89.6% and a specificity of 97.28%.

Table 2. Confusion matrix of retrieval success in the corrected dataset.

                          Manual not retrieved    Manual retrieved
Automated not retrieved          172                     13
Automated retrieved               20                    465

Figure 3. Post-processing quality control of retrieval time estimations. (A) Histogram representing the difference in seconds between manual and automated estimations of the retrieval time. Automated estimations of retrieval time were on average 2.4 s faster than manually registered estimations. (B) Scatterplot displaying the relationship between raw manual and automated estimations. Differences smaller than 30 s were accepted and are shown in blue, whereas differences larger than 30 s were flagged for visual inspection. (C) Scatterplot displaying the relationship between corrected manual and automated estimations. After visual inspection of the flagged estimates, the final estimate was either accepted (red) or corrected to the manual estimation.

Correlations between USV parameters

Correlational analysis showed that the total number of USVs emitted before retrieval was correlated with the USV rate before retrieval (r = 0.84; p < 0.001), mean USV duration (r = 0.44; p < 0.001) and the latency to the first USV event (r = −0.30; p < 0.001). The same pattern was observed for the separate test days (Supplementary Figure 2).

Repeatability of traits over test days

Repeatability of traits was assessed by examining the Pearson correlation matrix within a trait over time, for the mean value of pups of the same sex within dams (Supplementary Figure 3). For maternal retrieval time, repeatability was generally moderate to high for consecutive test days, significant and consistently positive (r = 0.32-0.63; p < 0.05-0.001). The correlations suggest that dams who retrieve their pups faster on PND7 generally also do so on the other days of testing. Pearson correlations between PND5 and the other days of testing were the lowest, which might be because this was the only day on which the cup paradigm was not used. Repeatabilities for USV rate and mean USV duration were similarly assessed. Correlations were less pronounced, although most were positive (Supplementary Figures 3, 4). In particular, PND7 gave moderate correlations with the other test days for USV rate (r = 0.34-0.50; p < 0.05-0.001) and for mean USV duration (r = 0.39-0.59; p < 0.01-0.001), although not with the PND13 data (r = 0.12). For latency to first USV emission, no clear pattern was observed, although most correlations were positive (Supplementary Figure 5).
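For illustration, the binomial models reported in the following sections could be specified as follows (a sketch; the authors used R, and the column names here are assumptions):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch of a binomial regression of retrieval success on USV rate with the
# corrections described in the Methods (the original analysis used R's glm).
# Column names are assumptions: retrieved (0/1), usv_rate, sex, day, trial,
# condition.
df = pd.read_csv("bambi_trials.csv")

model = smf.glm(
    "retrieved ~ usv_rate + C(sex) + C(day) + trial + C(condition)",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(model.summary())   # exp(coef) of usv_rate gives an odds-ratio-style effect
```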
Analysis of pup sex effect

No significant differences between pup sexes were found for USV rate before retrieval (p = 0.81), indicating that the number of USVs, proportioned to the retrieval time, was comparable between pup sexes. However, USVs emitted by male pups had a significantly shorter duration than those emitted by females (p < 0.001). Nevertheless, this did not seem to affect maternal behavior: no significant effect of pup sex on maternal retrieval was observed (p = 0.07).

Analysis of bidirectionality

Correlational analysis of the combined PND5-13 data (Supplementary Figure 6) indicated a positive association between pup retrieval time and the number of USVs the pup emitted (r = 0.54; p < 0.001), suggesting that pups that vocalized more were retrieved later. Hereafter, we looked at the USV emission rate (number of USVs/retrieval time) and the number of USVs recorded during the first 10 s of the test (USVs_10sec), as most pups were retrieved after 10 s (5 pups <10 s). This was done to correct for the fact that pups that are retrieved more slowly also have more time to emit USVs. However, retrieval time was still positively correlated with USV emission rate (r = 0.24; p < 0.001). Interestingly, a significantly positive correlation was also found between retrieval time and USVs_10sec (r = 0.23; p < 0.001). Hereafter, we performed correlational analyses for each day separately, to exclude the use of the cup and/or age as confounding variables for these results (Supplementary Figure 6). For the total number of USVs emitted before retrieval, moderate, positive correlations with retrieval time were found for all testing days (r = 0.45-0.61; p < 0.001). This suggests that pups with a higher number of vocalizations were generally retrieved later. Next, a correction for retrieval time was made by looking at either USV rate or USVs_10sec. Here, a significant positive correlation was found only on PND7-9 (r = 0.31-0.33; p < 0.01) for USV rate, and on PND7 and PND13 (r = 0.21-0.29; p < 0.05) for USVs_10sec. It should be noted that non-retrieved pups were assigned a retrieval time of 100 s, which might bias correlations.

The previous results raise the question whether there is a difference in the number of vocalizations emitted by pups that are retrieved and those that are not. Binomial regression analysis of the combined PND5-13 data showed that USV rate was a significant predictor of retrieval success (HR = 0.58; p < 0.001), which was also indicated by the boxplot (Figure 4B). The hazard ratio (HR) of 0.58 indicates that a USV rate increase of 1 USV/s reduces the probability of being retrieved by 42%. Hereafter, analyses were performed for each day separately, to exclude the use of the cup and/or age as confounding variables for these results. Figure 5 shows that the median USV rate was higher in non-retrieved pups than in retrieved pups, although this difference was small on PND5 and PND13. Binomial regression analyses confirmed these results, with negative estimated HRs on each test day (HR = 0.46-0.86), with significant effects found only on PND7, PND9, and PND11.

Figure 4. Results of bidirectionality analysis. (A) Survival plot representing the estimated probability of being retrieved over time in the PRT per maternal trial. Both retrieval time and the chance of being retrieved increased as the maternal trial number increased, suggesting a maternal learning effect. (B) Boxplot showing the number of USVs emitted per second when pups are either not retrieved or retrieved.
Pups with a higher USV rate had a higher probability of not being retrieved (p < 0.001). (C) Boxplot showing the mean duration of USVs per pup when pups are either not retrieved or retrieved. Pups with a higher mean USV duration had an increased chance of not being retrieved (p < 0.001). (D) Mean plot showing the mean retrieval time per maternal trial per day. Although retrieval time decreases significantly for trials within days (p < 0.001), the learning effect was not significant between days (p = 0.22). (E) Mean plot showing the mean USV rate per maternal trial sequence per day. USV rate was not affected by repeated trials (p = 0.59), whereas test day had a significant effect (p = 0.02).

The range of HRs between 0.46 and 0.86 over the separate test days indicates that a USV rate increase of 1 USV/s reduces the probability of being retrieved by 14-54%. Furthermore, we wanted to see whether this could be explained by a few poorly retrieving dams (i.e., dams retrieving on fewer than 50% of the trials); such dams were removed from the dataset (n = 7 dams). However, the effect of USV rate on retrieval success was still significant after removing poorly retrieving dams (p < 0.001). As shown in Supplementary Table 1, some pups (n = 67) did not vocalize before retrieval, although 64 of these pups were still retrieved by their dams. Of these 64 trials, 48% occurred on PND5, 20% on PND11, and 19% on PND13. Retrieval without pup vocalization was more common in pups with repeated maternal measurements, i.e., with a later position in the maternal trial sequence within a litter (Figure 4A). Moreover, the sequence of maternal trials was found to influence retrieval success significantly (Figure 4A). The significant effect of maternal trial suggests a learning effect and, as such, provides another possible explanation for the faster retrieval of pups that have a lower vocalization rate. That is, exposing a dam to multiple trials might affect her retrieval behavior and/or might affect pup vocalization rate. However, as shown in Figure 4E, USV rate was not significantly affected by maternal trial (p = 0.59), although test day was (p = 0.02). Over all days, a maternal learning effect was found to be statistically significant (Figure 4A; HR = 1.19; p < 0.001). The HR indicates that an increase in maternal trial by one increases the probability of pup retrieval by 19%. As shown in Figure 4D, this maternal learning effect was manifest within repeated trials on the same day (p < 0.001), but did not translate between days (p = 0.22).

Figure 5. USV rate vs. retrieval success for each test day separately. (A) Plot with linear regression of USV rate vs. retrieval success scored as a binary variable for each test day separately. For all test days (PND5-13), USV rate was higher in non-retrieved pups than in retrieved pups, although regression estimates were close to 0 (horizontal regression line) for PND5 and PND13.

Lastly, the average duration of pup vocalizations was positively correlated with retrieval time (r = 0.14; p < 0.001), most pronouncedly on PND7-11 (Supplementary Figure 7). Pups emitting USVs with a longer average duration had a lower probability of being retrieved (Figure 4C). The estimated effect in a binomial model was −0.053 (p < 0.001), which corresponds to a decreased hazard by a factor of 5% for one extra millisecond of USV duration.

Discussion

Bidirectional dam-pup dyad interactions are critical for pup survival.
Discussion

Bidirectional dam-pup dyad interactions are critical for pup survival. However, most studies have investigated dyadic members and behaviors unilaterally (e.g., Abuaish et al., 2020). In the current study, we describe BAMBI (Bidirectional Automated Mother-pup Behavioral Interaction test) to assess bidirectional dam-pup interaction in laboratory mice. This approach combines the automated PRT described by Winters et al. (2022) with synchronous ultrasonic audio recording and subsequent automated USV detection. First, we demonstrated the transferability of the previously established dam-pup model to a novel experiment with different traits. Further, a model was developed to detect simultaneously recorded pup USVs with high accuracy. Lastly, we applied this methodology to PRT data sampled on PND5, 7, 9, 11, and 13. Indeed, through synchronous video recording of maternal behavior and audio recording of pup vocalizations, BAMBI allowed us to test bidirectional early-life mother-pup interactions in an unprecedented way. We were able to expand the publicly available model and optimized its performance for PRT data with different subject and environmental traits, such as the inclusion of a cup. We used a hybrid learning strategy to increase variability relatively quickly while minimizing bias. This hybrid learning strategy combined manual annotation of k-means selected frames and refinement of outlier frames selected by the DLC "jump" algorithm. In our first attempts, these newly annotated data were added to the annotated dataset of Winters et al. (2022) and the network was retrained. However, pose estimation performance on videos with novel traits was insufficient (data not shown). We hypothesized this might be due to representation bias, whereby the original dataset with robust PRT poses on PND5 outweighed the novel dataset with higher pose variability (Krishnan et al., 2021). Therefore, we used a two-step learning approach similar to Gorssen et al. (2022). In a first step, the original model was retrained only with the newly annotated data, whereas in a second step, all annotated data were used to ensure the algorithm performed well on both the original and new data. The automated retrieval estimate can be seen as a proof of concept and had a high accuracy of 90.4% over all test days. For future research, two remarks on this learning approach should be kept in mind. First, the train and test error after the second retraining step should be interpreted and reported with caution. That is, all data have been used in previous training phases and thus the test data might not be completely new anymore. Second, we found a difference in the average pixels per millimeter when comparing the original dataset and the dataset of the current study. Again, this indicates that the retraining pixel errors should be interpreted with caution.
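The two-step retraining workflow described above can be sketched with standard DeepLabCut calls; this is only an outline under assumed paths (the project config and video locations are placeholders), and the exact keyword arguments of each function should be taken from the DeepLabCut documentation.

```python
# Outline of the two-step retraining strategy, using standard DeepLabCut
# calls. The config path and video list are placeholders.
import deeplabcut as dlc

config = "/path/to/project/config.yaml"   # hypothetical DLC project
videos = ["/path/to/new_prt_videos"]      # novel-trait PRT recordings

# Step 1: create a training set from the newly annotated (novel-trait)
# frames only and retrain the network on them.
dlc.create_training_dataset(config)
dlc.train_network(config, shuffle=1)

# Flag poorly tracked frames with the "jump" outlier algorithm, refine
# those labels manually and merge them back into the annotation set.
dlc.extract_outlier_frames(config, videos, outlieralgorithm="jump")
dlc.refine_labels(config)
dlc.merge_datasets(config)

# Step 2: retrain on the combined (original + novel) annotations so the
# network performs well on both datasets.
dlc.create_training_dataset(config)
dlc.train_network(config, shuffle=2)
```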
Further, we were able to develop a model to detect ultrasonic vocalizations in the PRT accurately and automatically using DAS (Steinfath et al., 2021). Despite the wide range of available automated detection options, we chose to work with DAS based on a few selection criteria. First, both the toolbox and its underlying software (i.e., Python) are completely open source. Second, the system is versatile, which is necessary as this PRT assay intends to investigate early-life communicative deficits, and thus the emitted vocalizations might not be as expected (Scattoni et al., 2008; Bowers et al., 2013; Ey et al., 2013; Shahrier and Wada, 2018). The system therefore should be easily adaptable and relatively flexible. Third, the system should be able to handle background noise, as the PRT is performed in freely moving animals, which are interacting with their environment. As argued in the work of Ey et al. (2020), most available automated systems cannot (yet) handle background noise. However, the main limitation of DAS is that its output is limited to the temporal parameters of each vocalization, i.e., start and end time. Although this was not a problem for the current study, it is a restriction when investigating communicative deficits. Additional spectrographic output parameters should be an integral part of communicative assessment to fully understand eventual deficits. An obstacle in the current study was the synchronization of video and audio recordings. Both recordings were sampled using different software and could be synchronized through the introduction of a beep at the start of the trial. Although we were able to precisely retrace this beep with frame accuracy, this required an intensive step of data processing. To find its way into standard operational practice, an integrated recording system would significantly reduce human involvement and workload. An exemplary integrated recording system was described in Ey et al. (2020). In that work, behavioral monitoring was done using the Live Mouse Tracker [LMT, (de Chaumont et al., 2019)] system, in which synchronized USV sequences were recorded using the trigger function of the Avisoft UltraSoundGate recording system. Avisoft burst recording yields an advantage when working with long-term recordings (Ey et al., 2020). However, the PRT paradigm defines a maximum time of 100 s and, as previously mentioned, intends to investigate abnormalities in early-life communicative behaviors. Burst recordings should therefore be used with caution, as they could miss deviant vocalizations and thus lead to loss of data which cannot be corrected afterward. Other options exist, as most Avisoft UltraSoundGate devices can be connected to a TTL cable, which can be used to start ultrasound recording together with another software, e.g., video recording.
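To make the synchronization step concrete, the beep onset can be located in the ultrasound recording by cross-correlation against a template of the known tone. The following is an illustrative sketch (not the authors' pipeline); the beep frequency and duration, the file name and the video frame rate are all assumed for the example.

```python
# Illustrative sketch of locating the trial-start beep in the ultrasound
# recording by cross-correlation, so audio events can be aligned to video
# frames. Beep frequency/duration, file name and frame rate are assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

fs, audio = wavfile.read("trial_audio.wav")   # hypothetical recording
audio = audio.astype(float)
if audio.ndim > 1:                            # keep one channel if stereo
    audio = audio[:, 0]

t = np.arange(0.0, 0.1, 1.0 / fs)             # assumed 100 ms, 10 kHz beep
template = np.sin(2.0 * np.pi * 10_000.0 * t)

xc = correlate(audio, template, mode="valid") # match template against audio
onset_s = int(np.argmax(np.abs(xc))) / fs     # lag of best match = beep onset

fps = 30.0                                    # assumed video frame rate
print(f"beep at {onset_s:.3f} s -> video frame {round(onset_s * fps)}")
```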
Lastly, we demonstrated the effectiveness of our combined methodology by applying it to PRT data sampled on PND5, 7, 9, 11, and 13. It is important to add a note regarding the selection of the study subjects. In compliance with the reduction principle, mice in the present study were obtained from an independently designed pharmacological study. As a consequence, in the absence of controls for experimental disease models, subjects were exposed to VPA and pharmacological treatment, possibly affecting their behavior. Importantly, the aim of the present work was not to investigate pharmacological effects, but rather to present a proof-of-principle demonstration of the feasibility and validity of a new automated method for behavioral testing of early-life mother-pup bidirectional interactions. Nevertheless, in order to address the issue of the subjects not being pharmacologically naive, the statistical analyses performed in the current study employed a correction for pharmacological treatment as a confounding variable, using a GLM in which drug effect was set as a fixed effect, which allowed us to pool the different drug groups into a single group (see Experimental groups). Therefore, the general relationships between pup vocalizations and maternal retrieval found in our study can be considered relevant for future research. We found an association between maternal retrieval success and pup calling behavior. Counterintuitively, we found that pups that were retrieved had a lower call rate during maternal separation than non-retrieved pups (Figure 4B), which was most pronounced on PND7-13. This effect was not caused by certain poorly retrieving mothers, nor by testing day. Previous research (D'Amato et al., 2005) reported a negative relationship between maternal caregiving behaviors and separation-induced pup calling. These studies found that high levels of maternal caregiving behavior in the first days of life lead to reduced numbers of USVs later in life, probably because of reduced anxiety. In the same line, maternal carrying has been shown to have soothing effects on pup physiology, including cardiac deceleration, an immobility response and a reduction of emitted USVs, whereas the absence of this calming response has been reported to hinder maternal retrieval efficacy (Yoshida et al., 2013). Altogether, these findings seemingly go against a robust set of evidence from the playback literature showing that pup USVs elicit retrieval behavior (Sewell, 1970; Smotherman et al., 1974; Ehret and Haack, 1982; Ehret, 1992, 2005). Our hypothesis is that USVs do elicit retrieval behavior, but that this effect depends on a great number of factors, and excessive USV emission might negatively influence maternal retrieval efficacy. This negative relationship might be due to a miscommunication in the mother-pup dyad. However, further research is necessary to test this hypothesis. Studies that used maternal retrieval and separation-induced vocalizations separately suggested that these factors might be related. The present simultaneous registrations further confirm and detail this relationship. For example, we found that vocalizations during the first 10 s actually predicted retrieval success, notwithstanding corrections for age and maternal trial sequence. Still, this should not be taken as evidence that pup behavior tunes maternal behavior, as behavioral testing only started on PND5. In our results, we found a peak in USV rate at PND7-9 (Figure 4E), which corresponds with previous findings in the literature (Sungur et al., 2016). However, future research might consider earlier time points, as communicative fitness might already be affected before PND5 in either the quality and/or quantity of vocalizations. Further, we show that dams subjected to repeated retrieval trials show a significant learning curve within the same test day, although this does not translate into an inter-day effect (Figure 4D). Between PND7 and 9 this might be explained by the introduction of a cup in the home-cage. However, translation is still limited on the other four days that the cup is present. Research has shown that experience improves pup retrieval success (Stolzenberg et al., 2012; Dunlap et al., 2020). Mice tend to use a spatial memory-based strategy when engaged repetitively in pup search and retrieval (Dunlap et al., 2020). Therefore, an overall decrease in retrieval time was to be expected, as pups were always placed in the same corner. Additionally, Dunlap et al. (2020) report that retrieval behavior further improves with sensory learning of associated cues. The beep at the start of the trial in the current experiment could have predicted the presence of a separated pup in the home-cage. Our findings seem to contradict the findings of Dunlap et al.
(2020), although the number of retrieval repetitions in their study was significantly higher than in our PRT procedure, and the test environment might play a role in the valence of pup stimuli (Stolzenberg et al., 2012). For the interpretation of USVs, this means that the functional relevance of USV emission is particularly high at the beginning. After repeated testing, USV emission seems to become less and less relevant, as evidenced by the fact that retrieval behavior even occurred in the absence of USV emission, probably due to maternal learning. However, this maternal learning curve could also be used as a behavioral read-out. In the present study, we adapted our previous automated home-cage PRT and combined video recording of maternal behavior with synchronous audio recording of pup vocalizations in order to assess bidirectional dam-pup dyadic interaction. Our methodology expands the automated pup retrieval test with automated detection of pups' ultrasonic vocalizations. Moreover, we validated our results and showed that the number and rate of ultrasonic vocalizations are associated with retrieval success. BAMBI is a promising new automated home-cage behavioral method that can be applied to both basic and preclinical studies on early-life social development.

Data availability statement

The original contributions presented in this study are included in the article/Supplementary material. All models used for this study are publicly available at: doi: 10.17605/OSF.IO/VEJ4H. Further inquiries can be directed to the corresponding author.

Ethics statement

This animal study was reviewed and approved by the Animal Ethics Committee of KU Leuven (P028/2018).

Author contributions

CW and RD'H designed the experimental strategy. CW optimized experimental procedures, labeled the data, and wrote the manuscript with input from WG, MW, and RD'H. CW and WG conceptualized and wrote the code. All authors contributed to the article and approved the submitted version.

Funding

This study was funded by an SB Ph.D. fellowship (1S05818N) of the Research Foundation Flanders (FWO) to CW. WG was funded by an FR Ph.D. fellowship (1104320N) of the Research Foundation Flanders (FWO). The funding bodies played no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
D. candidum has in vitro anticancer effects in HCT-116 cancer cells and exerts in vivo anti-metastatic effects in mice

BACKGROUND/OBJECTIVES D. candidum is a traditional Chinese food or medicine widely used in Asia. There has been little research into the anticancer effects of D. candidum, particularly its effects in colon cancer cells. The aim of this study was to investigate the anticancer effects of D. candidum in vitro and in vivo. MATERIALS/METHODS The in vitro anti-cancer effects on HCT-116 colon cancer cells and the in vivo anti-metastatic effects of DCME (Dendrobium candidum methanolic extract) were examined using MTT assay, DAPI staining, flow cytometry analysis, RT-PCR, and Western blot analysis. RESULTS At a concentration of 1.0 mg/mL, DCME inhibited the growth of HCT-116 cells by 84%, which was higher than at concentrations of 0.5 and 0.25 mg/mL. Chromatin condensation and formation of apoptotic bodies were also observed in cancer cells cultured with DCME. In addition, DCME induced significant apoptosis in cancer cells through upregulation of Bax, caspase 9, and caspase 3, and downregulation of Bcl-2. Expression of genes commonly associated with inflammation, NF-κB, iNOS, and COX-2, was significantly downregulated by DCME. DCME also exerted an anti-metastasis effect on cancer cells, as demonstrated by decreased expression of MMP genes and increased expression of TIMPs, which was confirmed by the inhibition of induced tumor metastasis by colon 26-M3.1 cells in BALB/c mice. CONCLUSIONS Our results demonstrated that D. candidum had a potent in vitro anti-cancer effect, induced apoptosis, exhibited anti-inflammatory activities, and exerted in vivo anti-metastatic effects.

INTRODUCTION

Dendrobium, microspermae, is a perennial epiphytic herb in the family Orchidaceae [1]. D. candidum is unique in its medicinal value. Its stem can be used as medicine, which can promote humoral secretion, prevent the development of cataracts, relieve guttural agnail and fatigue, reduce peripheral vascular blockage, and improve immune function [2]. D. candidum can also be used in health care products, which are effective for toning the stomach, promoting hydration, nourishing yin, and reducing body heat. As a kind of traditional Chinese health product and Chinese medicinal herb, it has high use value [3]. Apoptosis is an important cellular defense against cancer [4], and caspases form the central components of an apoptotic response. Nuclear factor-κB (NF-κB) is involved in the inhibition of apoptosis, stimulation of cell proliferation, inflammation, immune response, and tumorigenesis. Expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase 2 (COX-2), two genes regulated by NF-κB, is induced by inflammation, and they are frequently over-expressed in cancer cells [5]. Metastasis, the leading cause of death among cancer patients, involves the spread of cancer from a primary site and the formation of new tumors in distant organs. Matrix metalloproteases (MMPs) play important roles in many physiological and pathological processes involved in metastasis. MMP activity is inhibited by specific endogenous tissue inhibitors of metalloproteinases (TIMPs) [6]. Previously, Bao et al. found that D. candidum produces strong in vitro anti-cancer effects on HeLaS3 human cervix carcinoma cells and HepG2 liver cancer cells [7]. In the current study, we further examined the anti-cancer and anti-metastatic effects of D. candidum.
HCT-116 human colon cancer cells were treated with DCME (Dendrobium candidum methanolic extract) and the molecular mechanisms underlying the anti-cancer effects of DCME were studied. We evaluated DCME at different concentrations and also assessed its anti-metastatic effects in mice bearing tumors propagated by colon 26-M3.1 carcinoma cells.

Table 1. Sequences of reverse transcription-polymerase chain reaction primers (entries recoverable from the source):
(gene label lost) reverse: 5'-ATG TTC TTC TCT GTG ACC CA-3'
TIMP-2 forward: 5'-TGG GGA CAC CAG AAG TCA AC-3'; reverse: 5'-TTT TCA GAG CCT TGG AGG AG-3'
GAPDH forward: 5'-CGG AGT CAA CGG ATT TGG TC-3'; reverse: 5'-AGC CTT CTC CAT GGT CGT GA-3'

MATERIALS AND METHODS

Preparation of D. candidum

D. candidum was purchased in Yunnan, China. It was stored at -80°C and freeze-dried to produce a powder. The powdered sample was extracted twice with a 20-fold volume of methanol overnight. The methanol extract was evaporated using a rotary evaporator (Eywla, N-1100, Tokyo, Japan), concentrated, and then dissolved in dimethyl sulfoxide (DMSO; Amresco, Solon, OH, USA) to adjust the stock concentration (20%, w/v); this extract was named DCME for future reference.

Cell culture

HCT-116 human colon carcinoma cells obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) were used for the experiments. The cells were cultured in RPMI-1640 medium (Gibco Co., Birmingham, MI, USA) supplemented with 10% fetal bovine serum (FBS; Gibco Co.) and 1% penicillin-streptomycin (Gibco Co.) at 37°C in a humidified atmosphere containing 5% CO2 (Forma, model 311 S/N29035; Waltham, MA, USA). The medium was changed two or three times each week.

Measurement of lung metastasis after D. candidum treatment in BALB/c mice bearing 26-M3.1 colon carcinoma cell tumors

26-M3.1 colon carcinoma cells were obtained from Prof. Yoon, Department of Food and Nutrition, Yuhan University, Bucheon, South Korea. These highly metastatic cells were maintained as monolayers in Eagle's minimal essential medium (EMEM) (Welgene Inc., Daegu, South Korea) supplemented with 7.5% FBS, a vitamin solution, sodium pyruvate, non-essential amino acids, and L-glutamine (Gibco Co.). The cultures were maintained in a humidified atmosphere of 5% CO2 at 37°C. Experimental lung metastasis was induced by injection of colon 26-M3.1 cells into the lateral tail vein of 6-wk-old female BALB/c mice (Hyochang Science, Daegu, South Korea) [8]. DCME solutions (50, 100, and 200 mg/kg) were administered by subcutaneous injection into the mice, and the animals then received intravenous inoculation with 26-M3.1 cells (2.5 × 10^4/mouse) after 2 d. After 2 wk the mice were sacrificed and their lungs were fixed in Bouin's solution (saturated picric acid:formalin:acetic acid; 15:5:1, v/v/v). The extent of metastasis was assessed by counting the lung tumor colonies using a digital camera (Canon D550, Tokyo, Japan). The protocol for these experiments was approved by the Animal Ethics Committee of Chongqing Medical University (SCXK(YU)2012-0001).
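As a worked illustration of the colony-count comparison that follows in the Results, percentage inhibition relative to untreated controls can be computed as below. All per-mouse counts here are hypothetical placeholders (only the control mean of 57 reported later is echoed), and the sketch uses a plain one-way ANOVA rather than the Duncan post hoc test applied in the paper.

```python
# Worked illustration: percentage inhibition of lung metastasis relative to
# untreated controls, with a plain one-way ANOVA across dose groups.
# All per-mouse colony counts are hypothetical placeholders.
import numpy as np
from scipy import stats

counts = {
    "control":   np.array([57, 63, 51, 58, 60]),
    "50 mg/kg":  np.array([44, 49, 40, 47, 45]),
    "100 mg/kg": np.array([33, 36, 30, 35, 31]),
    "200 mg/kg": np.array([18, 22, 15, 20, 17]),
}

f, p = stats.f_oneway(*counts.values())       # overall group effect
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.3g}")

ctrl_mean = counts["control"].mean()
for dose, c in counts.items():
    if dose != "control":
        inhibition = (1.0 - c.mean() / ctrl_mean) * 100.0
        print(f"{dose}: {inhibition:.1f}% inhibition of lung metastasis")
```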
MTT assay

The anti-cancer effects of DCME were assessed by MTT assay [9]. HCT-116 human colon cancer cells were seeded in a 96-well plate at a density of 2 × 10^4 cells/mL in a volume of 180 μL per well. DCME (20 μL) was added to give final concentrations of 0.25, 0.5, and 1.0 mg/mL, and the cells were then incubated at 37°C in 5% CO2 for 48 h. Next, MTT solution (200 μL, 5 mg/mL; Amresco, Solon, OH, USA) was added and the cells were cultured for another 4 h under the same conditions. After removing the supernatant, 150 μL of DMSO was added per well and mixed for 30 min. Finally, the absorbance of each well was measured using an ELISA reader (model 680; Bio-Rad, Hercules, CA, USA) at 540 nm.

DAPI staining

Untreated control cells and cells treated with DCME were harvested, washed with PBS, and fixed with 3.7% paraformaldehyde (Sigma, St. Louis, MO, USA) in PBS for 10 min at room temperature. The fixed cells were washed with PBS and stained with a 1 mg/mL DAPI (Sigma) solution for 10 min at room temperature. The cells were washed two more times with PBS and examined using a fluorescence microscope (BX50; Olympus, Tokyo, Japan).

Flow cytometry analysis

After treatment with DCME, the cells were trypsinized, collected, washed with cold PBS, and resuspended in 2 mL PBS. The DNA content of the cells was measured using a DNA staining kit (CycleTEST PLUS kit; Becton Dickinson, Franklin Lakes, NJ, USA). Nuclear fractions stained with propidium iodide were obtained by following the manufacturer's protocol. Fluorescence intensity was determined using a FACScan flow cytometer (EPICS XL-MCL, Beckman Coulter KK, Brea, CA, USA) and analyzed using CellQuest software (Becton Dickinson, Franklin Lakes, NJ, USA).

RT-PCR

HCT-116 cells were inoculated in a 10 cm culture dish. After treating them according to the method of the MTT experiment for 24 h, total RNA was extracted from the cells using Trizol according to the manufacturer's instructions. The total RNA concentration of each sample group was adjusted to the same level after testing its purity by ultraviolet spectrophotometry. The same amount of RNA (2 μg) was taken from each sample, followed by addition of 1 μL oligo-dT18, RNase inhibitor, dNTPs, 5× buffer (10 μL) and MLV reverse transcriptase. In a 20 μL reaction volume, cDNA was synthesized at 37°C for 120 min, 99°C for 4 min and 4°C for 3 min. The target genes were then reverse transcribed and amplified (Table 1). The reaction conditions were initial denaturation for 5 min at 95°C, followed by 40 cycles of annealing for 50 s at 58°C and extension for 90 s at 72°C, with a final extension for 10 min at 72°C. Finally, 2% agarose gel electrophoresis was performed to determine the expression of the final products [10].

Western blot analysis

After DCME treatment, the HCT-116 cancer cells were rinsed three times with pre-cooled PBS, lysed in protein lysis buffer at 4°C and centrifuged (10,000 rpm) for 15 min. Supernatant proteins were then extracted and mixed with SDS-PAGE loading buffer. After SDS-PAGE electrophoresis and transfer to a membrane, the proteins were incubated with primary antibodies overnight at 4°C. The membranes were then incubated with horseradish peroxidase-conjugated secondary antibodies at room temperature. Finally, immunoreactive proteins were detected using an enhanced chemiluminescence (ECL) assay kit and observed using a LAS3000 luminescent image analyzer, with β-actin as an internal reference [11].

Statistical analysis

Data are presented as the mean ± SD. Differences between the mean values of individual groups were assessed using one-way ANOVA and Duncan's multiple range tests. Differences were considered significant when P < 0.05. SAS version 9.1 (SAS Institute Inc., Cary, NC, USA) was used for statistical analysis.
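To make the MTT read-out explicit, the survival rate is the ratio of mean treated-well absorbance to mean control absorbance. The sketch below uses hypothetical A540 replicates chosen to mirror the survival rates reported in the Results (69%, 41% and 16%).

```python
# Sketch of the MTT read-out: survival rate = mean treated A540 / mean
# control A540 x 100. Absorbance replicates are hypothetical values.
import numpy as np

a540_control = np.array([0.81, 0.79, 0.83])
a540_dcme = {
    0.25: np.array([0.56, 0.55, 0.57]),   # mg/mL -> replicate A540 readings
    0.50: np.array([0.33, 0.34, 0.32]),
    1.00: np.array([0.13, 0.13, 0.12]),
}

for conc, a in a540_dcme.items():
    survival = a.mean() / a540_control.mean() * 100.0
    print(f"{conc:.2f} mg/mL: survival {survival:.0f}%, "
          f"growth inhibition {100.0 - survival:.0f}%")
```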
RESULTS

The in vitro anticancer effects of the D. candidum methanolic extract (DCME) were evaluated using the MTT assay, DAPI staining and flow cytometric analysis, gene expression analysis by RT-PCR, and Western blotting in HCT-116 cancer cells. The results showed that DCME had strong in vitro as well as in vivo anticancer activity, which increased with increasing concentration. The component analysis helped explain the mechanism of DCME action.

In vivo anti-metastatic effect of DCME

Prophylactic inhibition of tumor metastasis by DCME was evaluated using an experimental mouse metastasis model (Table 2). All DCME-treated mice had significantly fewer lung metastatic colonies than control mice (number of metastatic tumors in controls, 57 ± 6, n = 10; P < 0.05). DCME was most effective at inhibiting lung metastasis at a dose of 200 mg/kg. At this dose, tumor formation and lung metastasis were inhibited to a greater degree than with the 100 mg/kg and 50 mg/kg doses.

In vitro anti-cancer effect of DCME on HCT-116 cells

The anti-cancer effects of DCME on HCT-116 cells were evaluated using the MTT assay. The survival rates of HCT-116 human colon cancer cells treated with different concentrations of DCME are shown in Table 3. HCT-116 cells treated with different concentrations of D. candidum (0.25, 0.5, and 1.0 mg/mL) had cancer cell survival rates of 69%, 41%, and 16%, respectively (P < 0.05). These results demonstrated that DCME had significant anti-proliferative effects on HCT-116 cells, with 1.0 mg/mL DCME showing the strongest in vitro anticancer effect. Induction of apoptosis was monitored to determine a possible mechanism underlying the inhibitory activity of DCME on HCT-116 cancer cells. The extent of chromatin condensation was analyzed by fluorescence microscopy of cells stained with the DNA-binding fluorescent dye DAPI and by flow cytometric analysis. While untreated HCT-116 cells presented nuclei with homogeneous chromatin distribution, treatment with DCME induced chromatin condensation and nuclear fragmentation, suggesting the presence of apoptotic cells (Fig. 1A). Chromatin condensation and formation of apoptotic bodies, the two hallmarks of apoptosis, were observed in cells treated with 1.0 mg/mL DCME. In contrast, the level of chromatin condensation was low in cells treated with 0.25 or 0.5 mg/mL DCME. Flow cytometric analysis showed that treatment with 1.0 mg/mL D. candidum promoted apoptosis of HCT-116 cells more strongly than the lower concentrations of 0.25 and 0.5 mg/mL DCME (P < 0.05). This conclusion was based on the significant accumulation of cells with sub-G1 DNA content (Fig. 1B).

Apoptosis-related gene expression of Bax, Bcl-2, and caspases

To elucidate the mechanisms underlying the inhibition of cancer cell growth by DCME, expression of Bax, Bcl-2, and caspase-3 and -9 in HCT-116 human colon cancer cells was measured by RT-PCR and Western blot analyses after incubation with different concentrations of DCME for 48 h. As shown in Fig. 2, expression of pro-apoptotic Bax and anti-apoptotic Bcl-2 showed significant changes in the presence of 1.0 mg/mL DCME. These results suggest that DCME induced apoptosis in HCT-116 cells via a Bax- and Bcl-2-dependent pathway. The mRNA and protein expression levels of caspase-3 and -9 were very low in untreated control HCT-116 cells, but increased significantly after the cells were treated with 1.0 mg/mL of D. candidum. With D. candidum treatment, mRNA and protein expression of caspase-3 and -9 was gradually elevated with increasing concentration (Fig. 2). More specifically, induction of apoptosis by DCME was related to upregulation of Bax, caspase-3, and caspase-9, and downregulation of Bcl-2 at both the mRNA and protein level.
The effects of 1.0 mg/mL DCME were greater than those of the 0.25 and 0.5 mg/mL D. candidum solutions.

Inflammation-related gene expression of NF-κB, IκB-α, iNOS, and COX-2

We attempted to determine whether the anti-cancer actions of DCME were associated with NF-κB, IκB-α, iNOS, and COX-2 gene expression. As shown in Fig. 3, D. candidum significantly modulated the expression of genes associated with inflammation in HCT-116 cells treated with a 1.0 mg/mL solution: mRNA and protein expression of NF-κB was decreased, while IκB-α mRNA levels were increased. In addition, mRNA and protein expression of COX-2 and iNOS showed a gradual decrease in the presence of DCME in a concentration-dependent manner (Fig. 3). Our findings indicate that DCME may be helpful in the prevention of cancer in its early stages by increasing anti-inflammatory activities. Overall, the results of this experiment showed that 1.0 mg/mL DCME had a stronger anti-inflammatory effect on colon cancer cells than the 0.25 and 0.5 mg/mL concentrations.

Metastasis-related MMP and TIMP gene expression

RT-PCR and Western blot analyses were performed to determine whether the anti-metastatic effect of D. candidum was due to gene regulation of metastatic mediators, specifically MMPs (MMP-2 and MMP-9) and TIMPs (TIMP-1 and TIMP-2), in HCT-116 cells. As shown in Fig. 4, 1.0 mg/mL DCME induced a significant decrease in mRNA and protein expression of MMP-2 and MMP-9, and increased the expression of TIMP-1 and TIMP-2. These changes in TIMP and MMP expression resulting from DCME treatment could effectively lead to metastatic inhibition in vitro. Our results also showed that 1.0 mg/mL DCME had the strongest anti-metastatic activity.

DISCUSSION

Although D. candidum has been used as a medicine or functional food, little scientific data on its effects are available. D. candidum contains high concentrations of benzenes and their derivatives, phenolics, lignans, lactones, and flavonoids, and 18 types of novel pigments have also been found [12]. D. candidum has recently been reported to have various therapeutic effects on numerous pathologic conditions, including inflammation, immune dysfunction, hyperglycemia, and cancer [13]. A previous study using the MTT assay in HeLaS3 human cervix carcinoma cells and HepG2 liver cancer cells reported on the in vitro anticancer effects of D. candidum [7]. In this study, the inhibitory effects of DCME were likewise demonstrated by MTT assay, and its in vitro anticancer effects were characterized using DAPI staining, flow cytometry, RT-PCR, and Western blot. Apoptosis is programmed cell death, a process implemented by cells themselves in response to physiological and pathological factors [14]. In a healthy cell, the anti-apoptotic protein Bcl-2 is expressed on the outer mitochondrial membrane surface [15]. After treatment of HCT-116 cells with DCME, we observed that the number of apoptotic cancer cells increased with a high concentration of DCME, as seen in the DAPI and flow cytometry assays. Because the Bax and Bcl-2 genes are mainly expressed during apoptosis, we determined that these genes regulated the apoptotic activity. Apoptosis results from the activation of caspase family members that act as aspartate-specific proteases [16]. Caspases are proteases that usually exist in the form of inactive pro-caspases. Among these, caspase-3 and caspase-9 are the main proteases involved in the process of apoptosis.
Caspase-9, an upstream caspase, acts as an apoptosis-initiating caspase responsible for the activation of downstream caspases. Caspase-3 is a downstream caspase that induces apoptosis by hydrolyzing apoptosis-effector molecules. Hydrolysis activates caspase-3, and these active caspases induce apoptosis [17]. In this study, the gene and protein expression of Bax, caspase-3, and caspase-9 increased, while expression of Bcl-2 decreased after treatment with DCME. Based on previous studies of gene expression, DCME may be assumed to be a strong inducer of apoptosis in cancer cells. In addition, the anti-cancer mechanisms underlying the effect of DCME on human cancer cells involve the induction of apoptosis, by increasing the number of apoptotic bodies and regulating the mRNA and protein expression of Bax and Bcl-2, and the promotion of anti-inflammatory effects, by downregulating iNOS and COX-2 gene expression. COX-2 has been suggested to play an important role in colon carcinogenesis, and NOS, along with iNOS, may be a good target for chemoprevention of colon cancer [18]. NF-κB, one of the most ubiquitous transcription factors, regulates the expression of genes required for cellular proliferation, inflammatory responses, and cell adhesion [19]. NF-κB is present in the cytosol, where it is bound to the inhibitory protein IκB. Following its induction by a variety of agents, NF-κB is released from IκB and translocates to the nucleus, where it binds to κB binding sites in the promoter regions of target genes [20]. These mechanisms could be involved in the anti-cancer effects of DCME in colon cancer cells. Based on the results of the MTT assay and the expression patterns of pro-apoptotic genes observed in the current study, we concluded that cancer cells treated with DCME underwent apoptosis. Similar to our findings, the anti-cancer effects of DCME in HeLaS3 human cervix carcinoma cells and HepG2 liver cancer cells were evaluated in a previous study by MTT assay [10]. Metastasis is defined as the spread of cancer cells from one organ or area to another organ or location [21]. It is thought that malignant tumor cells have the capacity to metastasize. Cancer occurs after cells in a tissue are genetically damaged in a progressive manner, resulting in cancer stem cells possessing a malignant phenotype. Once tumor cells have come to rest at another site, they penetrate the vessel walls, continue to multiply, and eventually form another tumor. Colon 26-M3.1 carcinoma cells have been used in the evaluation of anti-metastasis effects in vivo [22]. Based on the in vitro test results and previous studies, the colon 26-M3.1 carcinoma mouse anti-metastasis model was used to examine DCME. These results further confirmed the activities of D. candidum and showed that the anticancer effect was directly related to concentration. MMPs, a family of zinc-dependent endopeptidases, play a very important role in tumorigenesis and metastasis. MMPs can cleave virtually all extracellular matrix (ECM) substrates. Degradation of the ECM is a key event in tumor progression, invasion, and metastasis. Among the MMP family members, MMP-2 and MMP-9 are important molecules for cancer invasion and are highly expressed in breast and colon cancer cells [23]. In fact, inhibition of MMP activity is useful for controlling tumorigenesis and metastasis. TIMPs are naturally occurring inhibitors of MMPs which prevent catalytic activity by binding to activated MMPs, thereby blocking ECM breakdown.
Disturbances in the ratio between MMPs and TIMPs have been observed during tumorigenesis. Maintaining the balance between MMPs and TIMPs, or increasing TIMP activity, is an effective way to control tumor metastasis [24]. Experimental evidence demonstrating the role of MMPs in metastasis has been obtained from in vitro invasion assays and in vivo xenograft metastasis experiments. MMP-2 and -9 are key factors in cancer cell invasion and metastasis both in vivo and in vitro. Spontaneous and experimental metastasis to the liver is decreased in mice overexpressing TIMP-1, and increased in mice expressing antisense TIMP-1 mRNA. Ectopic overexpression of TIMP-1 in the brain of transgenic mice also reduces experimental metastasis to the brain [25]. In particular, MMP-2 and -9 are important for tumor invasion and angiogenesis. Thus, tumor metastasis can be inhibited by blocking the synthesis and activity of MMPs [26]. Strong anti-metastasis effects of DCME appeared as the reduction of MMPs and the increase of TIMPs in HCT-116 cells. From these results, DCME showed a strong anti-metastasis effect and could be used as part of a functional food for cancer prevention. In China, D. candidum, a rare Chinese medicinal material which can nourish yin, contains various active substances. It can markedly increase many immune indexes, such as the transformation rate of lymphocytes, improve the syndrome of yin deficiency, and balance the human organism, thus protecting it from the invasion of cancer [27]. The main efficacious ingredients in D. candidum are dendrobium polysaccharides, dendrobine, etc. According to some researchers, many soluble polysaccharides from D. candidum are immunopotentiators with strong anti-cancer bioactivity [28]. Polysaccharides from D. candidum could markedly increase the number of peripheral white blood cells and stimulate lymphocytes to produce migration inhibitory factors, both of which could efficiently eliminate the immunosuppressive side effects caused by cyclophosphamide (a commonly used antineoplastic) [29]. In addition, they could also inhibit solid tumors and, to some extent, improve the transformation function of T-lymphocytes, NK activity, and levels of macrophages and hemolysin; although the amount of dendrobine in D. candidum is not very high, it is still very effective [30]. D. candidum could efficiently inhibit lung cancer cells, atrophic gastritis, and diabetes. Under experimental conditions, the inhibition rate could reach over 70% [31]. Dendrobine was effective in anti-oxidation and anti-aging, significantly increasing SOD levels and decreasing LPO levels. In addition, it could reduce the level of blood sugar, which is beneficial in the treatment of diabetes [32]. If oxidation occurs in an organism, it is easy for the organism to become cancerous; the anti-oxidative capacity of D. candidum could therefore effectively inhibit cancer at its early stages. In our other studies, we found that D. candidum contains 11 dominant ingredients (dihydroresveratrol, dendromoniliside E, denbinobin, aduncin, adenosine, uridine, guanosine, defuscin, n-triacontyl cis-p-coumarate, hexadecanoic acid, and hentriacontane), most of which are effective in anti-cancer and health care applications [33]. The synergistic action of these ingredients might be the reason why D. candidum is effective as an anti-cancer agent. Many studies have reported on the effects of D. candidum on lung cancer, liver cancer, gastric cancer, esophageal cancer, and nasopharyngeal carcinoma [34,35].
However, no study of its effects on colon cancer had been reported. In the current study, the effects of D. candidum on colon cancer were studied in vitro using HCT-116 cells and in vivo using 26-M3.1 cells, and beneficial effects were observed in both settings. Stimulation of apoptosis in cancer cells and inhibition of their metastasis are the most important ways to prevent tumor development. The results of the experiments showed that D. candidum could promote apoptosis in cancer cells and inhibit their metastasis in mice. This result may be achieved by efficacious ingredients in D. candidum improving the body's immunity and acting directly on cancer cells. In summary, we found that DCME has potent in vitro and in vivo anti-cancer activities, particularly against in vivo tumor metastasis. These data support the functional effects of D. candidum and provide a scientific basis for the development of DCME for further anticancer initiatives. The important active compounds of D. candidum and the combined actions of these compounds should be identified and evaluated in future studies, and investigation of the activities in humans is also needed.
On the use of domain-based material point methods for problems involving large distortion

Challenging solid mechanics problems exist in areas such as geotechnical and biomedical engineering which require numerical methods that can cope with very large deformations, both stretches and torsion. One candidate for these problems is the Material Point Method (MPM), and to deal with stability issues the standard form of the MPM has been developed into new domain-based techniques which change how information is mapped between the computational mesh and the material points. The latest of these developments are the Convected Particle Domain Interpolation (CPDI) approaches. When these are demonstrated, they are typically tested on problems involving large stretch but little torsion, and if these MPMs are to be useful for the challenging problems mentioned above, it is important that their capabilities and shortcomings are clear. Here we present a study of the behaviour of some of these MPMs for modelling problems involving large elasto-plastic deformation including distortion. The study is carried out in a unified implicit quasi-static computational framework and finds that domain distortion with the CPDI2 approaches affects some solutions, and that there is a particular issue with one approach. The older CPDI1 approach and the standard MPM, however, produce physically realistic results. The primary aim of this paper is to raise awareness of the capabilities, or otherwise, of these domain-based MPMs.

Introduction

The Material Point Method (MPM) [1][2][3], originally proposed as an extension of a similar method known as the Fluid-Implicit Particle or FLIP method [4], itself an extension of the Particle-in-Cell method [5], is a numerical method combining advantages of Eulerian and Lagrangian approaches to solve solid mechanics problems. In the MPM, a body is described by a number of material points which store physical field variables such as stiffness, stress and displacement. At the beginning of each load step, the information held at material points is mapped to the nodes of the background mesh, where the governing equations are solved. As shown in Fig. 1 for a simple shear problem, the total deformation and other state variables are stored at the material points, while the background mesh is reset and extended with the incremental displacement, thus avoiding the mesh distortion seen with the standard Finite Element Method (FEM) for large deformation problems. In fact, the background mesh can be any mesh at the beginning of each load step. Because of this attraction, the MPM has been applied to many large deformation problems, e.g. [6][7][8][9][10][11][12][13][14][15]. The method of mapping information between the material points and background mesh nodes can significantly influence the stability of the MPM. In the standard MPM (sMPM) [1], a material point only influences (and is only influenced by) its parent element (i.e. the background element in which it is currently located), and the conventional FEM shape functions are used to map information between the nodes and the material points. However, the sMPM has instability issues from various sources: (i) the transition of a material point between elements leads to a sudden change in stiffness and internal force; (ii) under deformation fields involving large stretch, material points can end up separated by more than one element, resulting in artificial fracturing of the physical domain [16]; and (iii) partially filled elements with material points close to an element boundary can lead to an ill-conditioned global stiffness matrix.
In order to reduce these problems, several extensions to the sMPM have been proposed. Each of these extensions replaces the discrete material point with a deformable material point domain, called a particle domain in some of the approaches [16][17][18]. The basis functions of these domain-based methods are based on an approximation of the integral of the background finite element shape functions over the material point's domain. The most notable of these approaches are the Generalised Interpolation Material Point (GIMP) method [19] and the Convected Particle Domain Interpolation (CPDI) approaches. The latter comprise CPDI1 [17], second-order CPDI with quadrilateral particle domains (CPDI2q) [16] and second-order CPDI with triangular particle domains (CPDI2t) [18]. Using a deformable particle domain for a material point enables the influence domain of a material point to overlap more than one element, thus reducing or eliminating the problems cited above. Today, there is considerable interest in the potential use of these MPMs on very challenging problems in biomechanical and geotechnical engineering which involve large deformation and material nonlinearity. An early mechanobiological application can be found in [20], where a cell scaffold consisting of growing microvascular fragments embedded in a collagen gel is modelled using a variant of the sMPM with over 13.6 million material points. Much more recently, a review also identifies the MPM as of substantial interest for biomechanical modelling [21], and in [22] the MPM is used to model an entire human head. The standard MPM and CPDI2t methods have also been used to simulate thin-walled tubes under lateral compression and validated against experiments in [23]. The motivation for our contribution here arises from a project in geotechnical engineering which aims to model the soil response to the installation of a screwed-in pile foundation, as the first step to providing a computer-aided design tool for engineers to optimise pile design for offshore wind turbine foundations [24,25]. During installation of a screw pile, it is pushed and rotated into the ground, deforming the soil in complex patterns, a problem for which the MPM seems ideal. The nature of the majority of these problems prompts the use of unstructured representations of the problem domain and of the background computational mesh, and of the domain-based MPMs only the CPDI approaches can support an unstructured background mesh. The CPDI2 approaches can have unstructured particle domains that can more accurately discretise a general complex body shape, but the remaining methods will, in general, result in gaps or overlaps between particle domains. Previously, each of the new developments of the sMPM has been demonstrated in the individual papers referenced above and verified on selected problems. The purpose of this contribution is to present an investigation of the behaviours of some of the domain-based MPMs on a selection of specific problems that test their predictive abilities when modelling problems involving large stretch, shear and torsion, thus hopefully providing useful guidance to those wishing to employ the methods on real-world applications of the types mentioned above. This paper also casts all methods within a common implicit quasi-static elasto-plastic computational framework so that any differences in the results are purely due to differences in their basis functions and domain updating methods under large deformation. The paper is organised as follows.
In Section 2, the formulations for the continuum problem and the basic MPM discretisation are introduced, including the finite strain theory used to characterise the elasto-plastic deformation, the basis functions for the different MPM approaches, a formulation for implicitly solving the global system of nonlinear equations and details of the moving mesh concept. In Section 3, the implicit computational framework is presented, with the essential differences between the methods explained. The framework is verified and used to investigate the methods' capabilities in various examples with representative deformation fields in Section 4, including a discussion of our findings in Section 4.7. Conclusions are drawn in Section 5. In this paper we use a combination of tensor and matrix notation in order to ensure that both the continuum formulation and the steps to numerical implementation are as clear as possible.

Material point continuum formulation

This section details the quasi-static implicit finite deformation elasto-plastic material point method formulation adopted in this paper. The approach is largely based on the elasto-plastic updated Lagrangian material point formulation of Charlton et al. [26] and [27], but with the extension to non-uniform background meshes and domain-based material point approaches.

Problem statement and kinematics

The weak statement of equilibrium for the adopted formulation can be expressed in the updated configuration as

$\int_{\varphi(\Omega)} \left( \sigma_{ij} \frac{\partial \eta_i}{\partial x_j} - b_i \eta_i \right) \mathrm{d}v - \int_{\varphi(\partial\Omega)} t_i \eta_i \, \mathrm{d}s = 0, \qquad (1)$

where $\varphi(\Omega)$ indicates the deformed domain of the material body, $\Omega$, which is subjected to tractions, $t_i$, on the boundary (with surface, $s$), $\partial\Omega$, and body forces, $b_i$, acting over the volume, $v$. These external loads result in a Cauchy stress field, $\sigma_{ij}$, through the body. The weak form is derived in the current frame assuming a field of admissible virtual displacements, $\eta_i$. The deformation gradient, $F_{ij}$, then provides the fundamental link between the original and deformed configurations,

$F_{ij} = \frac{\partial x_i}{\partial X_j}, \qquad (2)$

where $X_i$ are the original (or reference) coordinates and $x_i = \varphi(X_i, t)$ are the updated coordinates in the current (or deformed) body. Following the work of [28][29][30], the deformation gradient is multiplicatively decomposed into elastic and plastic components,

$F_{ij} = F^e_{ik} F^p_{kj}, \qquad (3)$

where the superscripts $e$ and $p$ denote elastic and plastic components, respectively. This multiplicative decomposition of the deformation gradient is combined with a logarithmic strain-Kirchhoff stress formulation. This is a powerful combination as it allows existing small strain constitutive formulations to be used directly, rather than reformulating them for the particular choice of stress and strain measures used in the large deformation formulation. The elastic logarithmic strain is defined as

$\varepsilon^e_{ij} = \tfrac{1}{2} \ln \left( b^e_{ij} \right), \qquad (4)$

where $b^e_{ij} = F^e_{ik} F^e_{jk}$ is the left elastic Cauchy-Green strain. The Kirchhoff stress, $\tau_{ij}$, is linked to the Cauchy stress, $\sigma_{ij}$, through

$\tau_{ij} = J \sigma_{ij}, \qquad (5)$

where $J = \det(F_{ij})$ is the volume ratio between the deformed and reference states. It is assumed that the Kirchhoff stress is linearly dependent on the logarithmic strain through

$\tau_{ij} = D^e_{ijkl} \, \varepsilon^e_{kl}, \qquad (6)$

where $D^e_{ijkl}$ is the isotropic linear elastic stiffness matrix. In this work, this linear relationship between Kirchhoff stress and logarithmic strain is combined with an implicit elastic predictor-plastic corrector constitutive algorithm for elasto-plastic material behaviour (see [31], amongst others, for details of these algorithms).
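To make the kinematics above concrete, the following minimal sketch (assuming plane-strain quantities stored as 3×3 numpy arrays, with placeholder elastic constants) evaluates the logarithmic strain of eq. (4) via a spectral decomposition, the Kirchhoff stress of eq. (6) for an isotropic material, and the Cauchy stress via eq. (5).

```python
# Minimal sketch of the kinematics above for a given elastic deformation
# gradient Fe (3x3 numpy array): logarithmic strain via the spectral
# decomposition of be (eq. (4)), Kirchhoff stress from isotropic elasticity
# (eq. (6)) and Cauchy stress via tau = J*sigma (eq. (5)).
import numpy as np

def log_strain_and_stress(Fe, E=1.0e6, nu=0.0):
    be = Fe @ Fe.T                       # left elastic Cauchy-Green strain
    w, V = np.linalg.eigh(be)            # eigenvalues/vectors of be
    eps = 0.5 * (V * np.log(w)) @ V.T    # eps_e = 0.5 * ln(be)

    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    tau = lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps
    sigma = tau / np.linalg.det(Fe)      # Cauchy stress (elastic case, J = det Fe)
    return eps, tau, sigma

Fe = np.array([[1.10, 0.05, 0.0],
               [0.00, 0.95, 0.0],
               [0.00, 0.00, 1.0]])
eps, tau, sigma = log_strain_and_stress(Fe)
```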
Discrete material point implementation

Starting from (1), the Galerkin form of the weak statement of equilibrium for a single element in the background mesh can be expressed as

$\{f^R_E\} = \int_E [G]^T \{\sigma\} \, \mathrm{d}v - \int_E [S]^T \{b\} \, \mathrm{d}v - \int_{\partial E} [S]^T \{t\} \, \mathrm{d}s, \qquad (7)$

where $E$ indicates an element in the background mesh, $[G]$ is the strain-displacement matrix containing derivatives of the basis functions with respect to the updated coordinates, and $[S]$ is the matrix form of the basis functions. The first term in (7) is the internal force within an element and the combination of the second and third terms is the external force vector. The residual for each element, $\{f^R_E\}$, can be assembled into a global residual, $\{f^R\}$, which is non-linear in terms of the unknown nodal displacements, $\{d\}$. One option for solving the assembled global system of equations is the standard Newton-Raphson (NR) procedure. The nodal displacements within a load step, $\{\Delta d\}$, can be obtained by iteratively updating the nodal displacements until (7) is satisfied within a given tolerance using

$\{\delta d\}^k = -[K]^{-1} \{f^R\}^{k-1}, \qquad (8)$

where $k$ is the current iteration within the loadstep, $[K]$ is the global stiffness matrix, $\{\delta d\}^k$ is the iterative increment in the displacements and $\{f^R\}^{k-1}$ is the global residual (out-of-balance force) associated with the previous displacement value. The current displacement in a loadstep can be obtained by summing the iterative increments within the loadstep, that is $\{\Delta d\} = \sum \{\delta d\}^k$. In material point methods it is more convenient to express the global equilibrium equation (7) in terms of material point, rather than element, contributions. In this context, the global residual vector can be assembled through

$\{f^R\} = \{f^{int}\} - \{f^{ext}\}, \qquad (9)$

where $\{f^{int}\}$ and $\{f^{ext}\}$ are the internal and external forces. The internal force vector is

$\{f^{int}\} = \mathop{\mathsf{A}}_{p=1}^{n_p} \left( [G]^T \{\sigma\} V_p \right), \qquad (10)$

where $V_p$ is the current volume associated with the material point and $\mathsf{A}$ is the standard assembly operator. The external force vector, which is constant over the load step, is given by the corresponding assembly of the body force and traction terms of (7). For the standard MPM, $V_p = J V^0_p$, where $V^0_p$ is the original volume associated with the material point. For domain-based MPMs, $V_p$ is computed as the volume of the updated domain associated with material point $p$. The global stiffness matrix, $[K]$, can be obtained by linearising (9) with respect to the unknown nodal displacements. This gives the stiffness associated with a single material point as

$[k_p] = [G]^T [A] [G] \, V_p, \qquad (12)$

where $[A]$ is the spatial consistent tangent modulus, which can be most conveniently (and compactly) expressed in tensor notation as

$a_{ijkl} = \frac{1}{2J} D^{alg}_{ijmn} L_{mnpq} B_{pqkl} - \sigma_{il} \delta_{jk}, \qquad (13)$

where

$L_{mnpq} = \frac{\partial \ln(b^e_{mn})}{\partial b^e_{pq}} \quad \text{and} \quad B_{pqkl} = \delta_{pk} b^e_{ql} + \delta_{qk} b^e_{pl}, \qquad (14)$

and $D^{alg}_{ijmn}$ is the small-strain algorithmic tangent obtained from the constitutive model. An algorithm for the derivative of the logarithm of the elastic Cauchy-Green strain tensor with respect to its argument is given in [33]. The map between tensor and matrix forms can be found in many textbooks, e.g. [34].

Pseudo-time discretisation

To advance the non-linear solution algorithm, the finite deformation equations are discretised in pseudo-time by imposing the deformation over a number of load (or pseudo-time) steps. This allows the current deformation gradient to be defined using

$[F] = [\Delta F][F_n], \qquad (15)$

where $[\Delta F]$ is the increment in the deformation gradient over the current loadstep and $[F_n]$ is the deformation gradient from the previously converged (or initial) state.
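The iterative solve of eq. (8) can be summarised by the following skeleton; it is a sketch only, with residual() and assemble_stiffness() standing in as hypothetical user-supplied routines that build {f^R} and [K] from the material point contributions of eqs. (9)-(12).

```python
# Skeleton of the Newton-Raphson iteration of eq. (8). residual() and
# assemble_stiffness() are hypothetical user-supplied routines building
# {fR} and [K] from material point contributions as in eqs. (9)-(12).
import numpy as np

def newton_raphson(residual, assemble_stiffness, ndof, tol=1.0e-9, max_it=20):
    dd = np.zeros(ndof)                  # accumulated displacement increment
    for k in range(max_it):
        fR = residual(dd)                # out-of-balance force {fR}
        if np.linalg.norm(fR) < tol:
            return dd, k                 # converged within the load step
        K = assemble_stiffness(dd)       # global tangent stiffness [K]
        dd += np.linalg.solve(K, -fR)    # iterative update, eq. (8)
    raise RuntimeError("Newton-Raphson failed to converge")
```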
Following the work of Charlton et al. [26], the increment in the deformation gradient is obtained from

$[\Delta F] = [I] + \sum_{v=1}^{n_{in}} \{\Delta u_v\} \{\nabla_{x_n} S_{vp}\}^T, \qquad (16)$

where $[I]$ is a three-by-three identity matrix, $\{\Delta u_v\}$ is the displacement increment of a background grid node (or vertex) within the current loadstep, the basis function gradients are taken with respect to $\{x_n\}$, the coordinates at the start of the load step, and $n_{in}$ is the number of nodes that influence the material point. In order to obtain the updated stress state for the current deformation gradient, the adopted constitutive algorithm requires an initial estimate (or trial) of the elastic strain state. In this approach the trial elastic Cauchy-Green strain and logarithmic strain tensors are given by

$[b^e_t] = [\Delta F][b^e_n][\Delta F]^T \quad \text{and} \quad [\varepsilon^e_t] = \tfrac{1}{2} \ln [b^e_t], \qquad (17)$

where the subscript $t$ denotes a quantity defined in the trial state. The constitutive model should then return the updated Kirchhoff stress, $[\tau]$, and the associated algorithmic consistent tangent, $[D^{alg}]$. Once equilibrium has been obtained over the current pseudo-time step, the material point positions are updated through

$\{x_p\} = \{x^n_p\} + \sum_{v=1}^{n_{in}} S_{vp} \{\Delta u_v\}, \qquad (18)$

where $\{x_p\}$ and $\{x^n_p\}$ are the coordinates of the material point, $p$, after and before updating, respectively.

Basis functions

The major difference between the material point methods used in this paper (sMPM, CPDI1, CPDI2q and CPDI2t) is the basis functions, $S_{vp}$, and their spatial gradients, $\nabla S_{vp}$. In the following, the basis functions for the four methods are provided in a common format:

• The basis functions and their spatial derivatives for the standard MPM (sMPM) are

$S_{vp} = S_v(\{x_p\}) \qquad (19) \quad \text{and} \quad \nabla S_{vp} = \nabla S_v(\{x_p\}), \qquad (20)$

where $S_v$ is the standard FEM basis function of node $v$. In the case of regular grids, it is straightforward to express the basis functions in terms of the global coordinates of the background element nodes and the material points. In general, $S_{vp}$ and $\nabla S_{vp}$ can be computed after determining the material point's local coordinates within its parent element.

• For the CPDI1 method [17], the background basis functions are approximated as linear over each parallelogram particle domain, so that the material point basis function is the average of the nodal basis function over the domain corners,

$S_{vp} = \tfrac{1}{4} \sum_{c=1}^{4} S_v(\{x_c\}), \qquad (21)$

with the spatial derivatives following from the same linear approximation (see [17] for the full expressions), where $\{x_c\}$, $c = 1, 2, 3, 4$ are the coordinates of the particle domain corners and $\{s\}$ and $\{t\}$ are the two vectors forming the parallelogram particle domain, as shown in Fig. 2. The coordinates of the corners of the domain can be expressed in terms of the material point position, $\{x_p\}$, and the parallelogram vectors as

$\{x_c\} = \{x_p\} \pm \tfrac{1}{2}\{s\} \pm \tfrac{1}{2}\{t\}. \qquad (22)$

• For the CPDI2q method [16], the basis functions and their spatial derivatives are weighted sums of the nodal basis function values at the four corners of the quadrilateral particle domain, with weighting coefficients $a$, $b$ and $c$ that are functions of the corner coordinates $x_c$, $y_c$ (the full expressions are given in [16]). Both $S_{vp}$ and $\nabla S_{vp}$ therefore depend only on the coordinates of the corners, rather than the coordinates of the material point, so in this method the positions of material points do not need to be stored.

• For the CPDI2t method [18], the basis functions and their spatial derivatives take the analogous corner-weighted form for triangular particle domains (see [18] for the full expressions). As with the CPDI2q method, $S_{vp}$ and $\nabla S_{vp}$ only depend on the coordinates of the corners so, once again, it is not necessary to store the locations of the material points.
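As an illustration of the domain-averaged basis idea, a minimal CPDI1-style evaluation is sketched below; grid_shape_function is a hypothetical routine returning the FE basis value S_v of a given node at a global coordinate, and the corner layout follows eq. (22).

```python
# Sketch of a CPDI1-style basis evaluation: the material point basis value
# is the average of the background FE shape function over the four corners
# of the parallelogram domain (eqs. (21)-(22)). grid_shape_function is a
# hypothetical routine returning S_v of a node at a global coordinate.
import numpy as np

def cpdi1_basis(xp, s, t, node, grid_shape_function):
    corners = [xp - 0.5 * s - 0.5 * t,   # corner layout follows eq. (22)
               xp + 0.5 * s - 0.5 * t,
               xp + 0.5 * s + 0.5 * t,
               xp - 0.5 * s + 0.5 * t]
    return 0.25 * sum(grid_shape_function(node, xc) for xc in corners)
```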
Computational framework

Algorithm 1: Computational framework
1   Set up problem: generate computational mesh, material points and particle domains, and initial material point volumes, V^0_p;
2   for each load step do
3       for each material point p do
4           Find influence elements of p in the computational mesh;
        ...
10      Initialise the out-of-balance force, ‖f^R‖ ← 2 tol, to ensure that the while loop is entered;
11      for each material point p do
12          assemble the material point stiffness k_p(∆u) into the global stiffness matrix [K];
13      ...
        Update positions of material points or particle domains;
19      Update volumes of material points, V_p;
20      Update the computational mesh with the moving mesh strategy;
21  end

The previous section detailed the continuum formulation, non-linear solution procedure and basis functions for the material point approaches analysed in this paper. This section provides a unified computational procedure for the implicit solution of quasi-static problems with the four MPMs. This framework guarantees that the simulation conditions, apart from the basis functions of the different methods, are exactly the same. Therefore, any differences in the simulation results can only be attributed to the different methods used. The computational procedure is summarised in Algorithm 1. The following specific steps are needed for particular methods:

• Line 1: For each material point, p, the sMPM only needs its coordinate, {x_p}, while the CPDI1 additionally needs the two vectors, ({s_0}, {t_0}), of the initial parallelogram particle domain. The CPDI2 approaches, however, need the corners of the particle domain and its connectivity.
• Line 4: For the sMPM, the influence element is the one which encloses the material point. For the CPDI approaches, however, there are multiple influence elements, namely those which enclose the corners of the particle domain.

Numerical examples

A unified computational framework for the four MPMs was introduced in the previous section. In order to verify this computational framework, this section starts by solving three large deformation problems and comparing the numerical predictions from the four MPMs with analytical or FEM solutions. The capabilities of these methods are then investigated via the solution of several problems involving large stretch, shear and torsion. We show that the CPDI2 approaches are less accurate than the standard MPM under certain deformation fields, due to particle domain distortion. As the focus of this paper is a comparison of different MPMs, the problems have been carefully selected to avoid issues such as volumetric locking in elasto-plastic analysis. For detailed treatment of volumetric locking with different MPMs, see [27]. The presented simulations are conducted in two dimensions with a plane strain assumption, and all variables have compatible units. All of the analyses use a linear elastic, perfectly plastic constitutive model with the von Mises yield surface and associated plastic flow. The von Mises yield function has the following form:

$f = \sqrt{\tfrac{3}{2} \, s_{ij} s_{ij}} - \varrho_y, \qquad (26)$

where $s_{ij}$ is the deviatoric part of the stress and $\varrho_y$ is the yield strength.
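A direct transcription of the yield check of eq. (26) is sketched below, as it would be used inside the elastic predictor-plastic corrector algorithm: f ≤ 0 means the trial state is elastic, while f > 0 triggers the plastic corrector. The trial stress values are placeholders.

```python
# Transcription of the von Mises yield check of eq. (26); f <= 0 means the
# trial state is elastic, f > 0 triggers the plastic corrector. The trial
# stress below is a placeholder.
import numpy as np

def von_mises_yield(tau, rho_y):
    s = tau - np.trace(tau) / 3.0 * np.eye(3)        # deviatoric stress
    return np.sqrt(1.5 * np.tensordot(s, s)) - rho_y

tau_trial = np.diag([4.0e5, 1.0e5, 1.0e5])
f = von_mises_yield(tau_trial, rho_y=2.0e5)          # f = 1.0e5 > 0: plastic
```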
Numerical examples
A unified computational framework for the four MPMs was introduced in the previous section. In order to verify this computational framework, this section starts by solving three large deformation problems and comparing the numerical predictions from the four MPMs with analytical or FEM solutions. The capabilities of these methods are then investigated via the solution of several problems involving large stretch, shear and torsion. We show that the CPDI2 approaches are less accurate than the standard MPM under certain deformation fields, due to particle domain distortion. As the focus of this paper is a comparison of different MPMs, the problems have been carefully selected to avoid issues such as volumetric locking in elasto-plastic analysis; for detailed treatment of volumetric locking with different MPMs, see [27]. The presented simulations are conducted in two dimensions with a plane strain assumption, and all variables have compatible units. All of the analyses use a linear elastic, perfectly plastic constitutive model with the von Mises yield surface and associated plastic flow. The von Mises yield function takes the standard form

f = sqrt(3 J_2) − ϱ_y,

where J_2 is the second invariant of the deviatoric stress and ϱ_y is the yield strength.

Confined column
The first example is compression of a column subject to body force. An analytical solution is available for this problem (see [26] for details), which can be used to verify and investigate the convergence behaviour of each method. Roller boundary conditions were applied to the base and sides of the column and an incremental body force Δb = 10^4 was applied per load step. The material parameters were: a Young's modulus of E = 10^6, a Poisson's ratio of ν = 0, and a yield strength of ϱ_y = 2 × 10^5. The initial height of the column was 20, and the width was set to 1. The mesh and material points at the start of the simulation and at the end of the first, second and third load steps are shown in Fig. 3, with 5 elements and 4 material points per initially populated element. The Cauchy stress components through the column at the end of the third load step are also shown in this figure. In the analytical solution, only the vertical stress component σ_yy is non-zero (σ_xx = σ_zz = 0, consistent with ν = 0) and all shear stress components are zero. For the CPDI approaches, the Cauchy stress components agree with the analytical values, as shown in Fig. 3. However, the sMPM simulation suffers from cell-crossing instabilities, resulting in oscillations in the stress field in both the σ_zz and σ_xx components. In this problem, the particle domains in the CPDI1 method remain rectangles during the simulation, so the results from the CPDI2q method are identical to those from the CPDI1 method. To study the effect of mesh size and material point density on convergence, a relative stress error is defined as

error = ||{σ^p} − {σ^a}|| / ||{σ^a}||,

where superscript a indicates the analytical values, p indicates the material point values from the sMPM and CPDI approaches, and ||·|| is the Euclidean L2 norm. The problem domain was discretised vertically with an increasing number of elements and varying numbers of material points per element; the resulting errors are shown in Fig. 4. The error in the results from the sMPM is higher than that from the CPDI approaches because of the cell-crossing instability described above. With the sMPM, the error is reduced by increasing the number of material points per element, while with the CPDI approaches it is reduced by increasing the number of elements. We observe that increasing the number of elements cannot reduce the error in the sMPM; this is because of the increase in the number of instances in which material points cross element edges. However, increasing the number of material points per element reduces the volume associated with each material point, and therefore the size of the discrete transfer of stiffness due to grid-crossing is reduced. In the CPDI approaches, the alternative basis functions reduce the stiffness jump when material points cross element edges, so increasing the material point density has only a minor impact on the maximum error. The relative error is also plotted against either the number of material points per element or the number of elements in Fig. 5. With an increase in the number of material points per element, the error with the sMPM decreases, but increases slightly with the CPDI approaches. With an increase in the number of elements, the error from the sMPM decreases over the initial refinement and then stays almost constant. By contrast, convergence (i.e. the rate of error reduction) with the CPDI approaches has an approximately linear relationship with the number of elements.
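A sketch of the relative stress error measure, as reconstructed above (the exact normalisation used in the paper may differ):

```python
import numpy as np

def relative_stress_error(sig_num, sig_exact):
    """Relative stress error: ||sig_num - sig_exact|| / ||sig_exact||,
    with the Euclidean L2 norm taken over all material point values.
    Both arrays have shape (n_points, n_components)."""
    return (np.linalg.norm(sig_num - sig_exact)
            / np.linalg.norm(sig_exact))

# Toy check: a 1% uniform perturbation gives a 1% relative error.
sig = np.random.default_rng(0).normal(size=(100, 3))
print(relative_stress_error(1.01 * sig, sig))   # 0.01
```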
Simple stretch
Simple stretching of a square domain is the next simulation used to investigate the performance of these methods. A 2 × 2 square domain is used, comprised of a material with the following properties: Young's modulus E = 1000, Poisson's ratio ν = 0 and yield strength ϱ_y = 400. In this case a finite element analysis (with four bi-linear quadrilateral elements integrated using 2 × 2 Gauss quadrature) is taken as the reference solution, as the problem does not involve any mesh distortion. For the material point analyses, the same mesh as used in the finite element simulation is adopted and the physical material is discretised by four material points per element. Fig. 6 shows the geometry, mesh and boundary conditions for the CPDI2q method in both the initial and deformed configurations. A roller boundary condition was applied on the bottom and left sides of the domain, a horizontal displacement of 0.2 per load step was applied on the right, and the simulation was run for 20 steps. The moving mesh strategy [24,35] was utilised by horizontally stretching the mesh so that the edge of the physical domain was always aligned with the right end, where the displacement boundary condition is applied. The convergence of the CPDI2q method is shown in Fig. 7(a) by plotting the residual, i.e. the L2 norm of (9), against the cumulative NR iteration steps. Each of the discrete lines is a single load step and each marker an iteration within that load step. The first six load steps are elastic and converge in two iterations, whereas the remaining load steps include elasto-plastic deformation and take four iterations to find equilibrium. Examination of the norm of the residual in an iterative step against that in the previous step, in Fig. 7(b), indicates that the average convergence rate is 1.996, which is very close to the theoretical asymptotic quadratic convergence rate of the NR method. The sMPM and the other CPDI approaches also exhibit this correct quadratic convergence rate. This correct convergence rate indicates that the tangent stiffness matrix for large deformation elasto-plasticity is consistent with the out-of-balance force computation, verifying the correct implementation of the methods. The reaction forces on the right end of the domain from the sMPM and FEM models are plotted in Fig. 8, where the markers indicate the load steps. The force increases nonlinearly in the first six steps due to the large deformation mechanics, i.e. the geometrically nonlinear finite strain measure used in the computational framework. In the seventh step, the material yields, and this reaction force starts to decrease gradually. The results from the sMPM and CPDI approaches are the same as those of the FEM, as this simulation is not affected by mesh distortion. Also, as this is a constant stress problem with zero normal stress in the vertical direction, the sMPM does not suffer from cell-crossing instabilities.
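The reported average convergence rate (1.996) can be extracted from a residual history by fitting log r_{k+1} against log r_k. A minimal sketch, assuming the residual norms are available as a list:

```python
import numpy as np

def convergence_rate(residuals):
    """Estimate the asymptotic convergence rate q from a sequence of
    Newton-Raphson residual norms, assuming r_{k+1} ~ C * r_k**q, via a
    least-squares fit of log(r_{k+1}) against log(r_k)."""
    r = np.asarray(residuals, dtype=float)
    q, _ = np.polyfit(np.log(r[:-1]), np.log(r[1:]), 1)
    return q

# Residual norms decaying quadratically (doubling exponents) give q = 2.
print(convergence_rate([1e-1, 1e-2, 1e-4, 1e-8, 1e-16]))
```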
Manufactured solution of torsional deformation
This section presents an example using the Method of Manufactured Solutions (MMS) to verify the implemented numerical methods on a problem involving large torsional deformation, similar to that presented in [36]. In the MMS, the first step is to postulate a deformation field and then, based on the governing equations, analytically calculate the body forces required to enforce this deformation field. In the numerical analysis, these body forces are imposed at the material points along with appropriate Dirichlet or Neumann conditions on the boundary of the domain. The numerical method can be verified by comparing the numerically computed deformation field to the postulated one. In this example a circular annulus, as shown in Fig. 9, with inner (R_i = 0.75) and outer (R_o = 1.25) radii is subjected to a smooth rotational deformation field applied over 10 equal load steps. The material parameters are E = 10^6 and ν = 0, with a large yield stress to ensure that the material remains in the elastic regime. The inner and outer radial surfaces of the annulus are fully fixed and the material points are subjected to the appropriate body forces. The mathematical expression of the deformation field and associated stresses is detailed in the Appendix. With this manufactured deformation field and the associated stress, the body force for a quasi-static problem can be computed via (A.2). Comparing the numerical results from the four MPMs with the manufactured exact solution allows the convergence of each method to be analysed and any difference in the accuracy of the methods to be quantified. The relative error in the displacement field is computed as

ERR_u = ||{u} − {u^MMS}|| / ||{u^MMS}||,

where {u} is the computed displacement field and {u^MMS} the manufactured solution. This error is computed for several background meshes with different levels of refinement for each MPM. As seen in Fig. 9(b), the annular domain is discretised by 5 × 16 elements in the radial and circumferential directions, respectively, and each element is initially populated with 4 × 4 material points. Notably, there is one extra layer of empty elements along both the inner and outer radial surfaces, to deal with corners of particle domains which may fall outside of the mesh due to the curved boundary. The background mesh shown in Fig. 9(b) is the coarsest mesh, indicated by a refinement factor r_f = 1; the finer meshes comprise (5 × r_f) × (16 × r_f) elements in the radial and circumferential directions, where r_f is set to 1, 2, 4 and 8. The relative error with these meshes across all simulation steps is shown in Fig. 10 (markers: sMPM, circle; CPDI1, diamond; CPDI2q, square; CPDI2t, triangle): all of the methods have an error of less than 3% when r_f ⩾ 2. The variation between the curves with r_f = 8 cannot be seen at the scale of the vertical axis; see Fig. 12(d) instead. A comparison of the convergence rate of the mean error across the 10 load steps of each method with mesh refinement is shown in Fig. 11. All of the methods initially converge quadratically; however, the figure shows that the CPDI1 method is starting to plateau, whereas the other results continue to converge. The comparison of the accuracy of these methods for this problem across all load steps is shown in Fig. 12. With a coarse mesh, r_f = 1, the CPDI1 produces the most accurate solution, followed by the CPDI2q, while the sMPM produces the solution with the maximum error. It is interesting to observe that the error with the CPDI2t method initially decreases and then increases with increasing deformation. The comparison is similar for r_f = 2 and r_f = 4. With a fine mesh, r_f = 8, the error with the CPDI1 becomes the largest, while that of the CPDI2q is the smallest, for α > 4°. Examination of these four figures leads to the conclusion that there is no clear evidence that one method is better than the others for this manufactured deformation field. The error values in Fig. 13 are similar to those shown in Fig. 12; however, the oscillations in the error for the sMPM are shown more clearly in Fig. 13. The cause of these oscillations can be attributed to cell-crossing, as the other (domain-based) methods show a smooth variation in error with progressive rotation. It should be noted that the errors for the CPDI2 methods show a general trend of increasing error with rotation, which could be due to distortion of the particle domains, whereas the error associated with the CPDI1 method remains approximately constant.
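The structure of the manufactured field, α(R, t) = h(R) g(t) (see the Appendix), can be exercised numerically. In the sketch below, h and g are placeholder smooth functions that vanish at R_i = 0.75 and R_o = 1.25, not the paper's exact choices; the deformation gradient is computed by finite differences and should be volume preserving (det F = 1).

```python
import numpy as np

def deformation_gradient(mapping, X, dX=1e-7):
    """Deformation gradient F_ij = dx_i/dX_j of a 2D mapping, approximated
    by central finite differences (sufficient for checking an MMS driver)."""
    F = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = dX
        F[:, j] = (mapping(X + e) - mapping(X - e)) / (2 * dX)
    return F

def rotate(X, h=lambda R: np.sin(np.pi * (R - 0.75) / 0.5), g=lambda t: t):
    """Torsional mapping x = Q(alpha) X with alpha(R, t) = h(R) * g(t).
    h vanishes at both radial surfaces, matching the fixed boundaries."""
    R = np.linalg.norm(X)
    a = h(R) * g(1.0)       # evaluate at pseudo-time t = 1
    Q = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return Q @ X

F = deformation_gradient(rotate, np.array([1.0, 0.0]))
print(F, np.linalg.det(F))   # det(F) = 1: the rotation field is isochoric
```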
Corner stretch
In order to test the performance of these methods on problems involving large shear, the next example is the stretching of a square domain along its diagonal. The material behaviour is purely elastic, with Young's modulus E = 10^3, Poisson's ratio ν = 0.4 and a very large yield strength ϱ_y. Fig. 14 shows the initial configuration and loading of the problem domain, which is discretised by a mesh of four quadrilateral finite elements with four material points per element; see Fig. 15. Roller boundary conditions are applied on both the left and bottom edges, while the top-right corner is stretched by 0.2 per load step in both directions. The moving mesh strategy was adopted such that the top and right edges moved with the top-right corner. The deformed configurations for a displacement of u = 4 in both directions are shown in Fig. 15. The movement of the top-right material point is clearly the smallest with the sMPM, followed by the CPDI1 method, while the CPDI2 approaches show the largest deformations. This is because the deformation is tracked by the continuous particle domains in the CPDI2 approaches, while the sMPM and CPDI1 use discrete material points to represent the physical domain; the loss of information during the mapping of displacement between material points and mesh nodes is less with the continuous particle domains than with the discrete material points. Plots of the nodal reaction force at the top-right corner against displacement for different densities of material points are shown in Fig. 16. The reaction force reaches a peak and then decreases in all simulations. With 4 material points per element, the CPDI2t approach reaches the peak with the greatest gradient and predicts the largest reaction force. The other three methods lead to similar pre-peak results, but the CPDI2q predicts a higher value than the others. This appears to be because the particle domains in the CPDI2q can track the deformation more accurately, as shown in Fig. 15(c). With 64 material points per element, the results from the four methods are similar, although the reaction force from the CPDI2t is slightly larger than the others. It should also be noted that the deformed shape of the material point domains is significantly different for the CPDI2t method compared to the other methods. This is because the derivatives of the basis functions, ∇S_vp, in the CPDI2t take some non-physical values, which is explained as follows. Consider the standard reference element shown in Fig. 17, where nodes are indexed by v and material points by p, and consider node 4 and material point 1. From (23), since the shape function S_4 is equal to zero at the first two corners of the triangular domain, with coordinates (−1, −1) and (1, −1), the spatial derivatives of the basis function for this node-point pair (with V_p = 1 and S_4(0, 0) = 1/4) satisfy ∂S_41/∂x ≡ 0, wherever the third corner is located. That means that the evaluation of the basis function is independent of the first variable, x. However, this contradicts the definition of basis functions for both the underlying linear finite element grid and the method itself, as from (23) we can see that S_41 = (1/3) S_4(x_1, y_1). In addition, this means that the material point has zero contribution to the stiffness in the horizontal direction, as the internal force in the x direction does not depend on the normal stress in the horizontal direction, σ_xx. This is a fundamental deficiency in the CPDI2t method and explains why its results are so different from those of the other MPMs.
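The zero-gradient deficiency can be reproduced numerically. The sketch below uses the standard divergence-theorem, corner-based construction of the domain-averaged gradient for a triangular particle domain, which is our assumption for the form of (23) (not reproduced above); it confirms that the x-component of ∇S_41 vanishes wherever the third corner is placed.

```python
import numpy as np

def S4(x, y):
    # Bilinear basis of node 4 at (-1, 1) of the reference element; it
    # vanishes at the corners (-1, -1) and (1, -1).
    return 0.25 * (1.0 - x) * (1.0 + y)

def corner_gradient(corners):
    """Corner-based gradient of S4 over a triangular particle domain:
    (1/A) * boundary integral of S4 * n, with S4 taken linear on each edge
    (trapezoidal rule over the corner values)."""
    c = np.asarray(corners, dtype=float)
    s = np.array([S4(x, y) for x, y in c])
    A = 0.5 * abs((c[1, 0] - c[0, 0]) * (c[2, 1] - c[0, 1])
                  - (c[2, 0] - c[0, 0]) * (c[1, 1] - c[0, 1]))  # area
    grad = np.zeros(2)
    for i in range(3):
        a, b = c[i], c[(i + 1) % 3]
        edge = b - a
        n = np.array([edge[1], -edge[0]])   # outward normal * edge length
        grad += 0.5 * (s[i] + s[(i + 1) % 3]) * n
    return grad / A

# Two fixed corners at (-1, -1) and (1, -1); vary the third corner.
for third in [(0.0, 0.0), (0.3, 0.5), (-0.8, 0.9)]:
    g = corner_gradient([(-1.0, -1.0), (1.0, -1.0), third])
    print(third, g)   # the x-component is always (numerically) zero
```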
Column collapse
The collapse of a column under self weight is the next example presented in this paper. The initial height of the column was 20 and the width 4, and it comprised an elasto-plastic material with the following parameters: Young's modulus E = 10^6, Poisson's ratio ν = 0 and yield strength ϱ_y = 2 × 10^4. The density was set to ρ = 80 and gravity loading was applied in increments of Δg = 20 per step over 15 steps, giving a total gravitational load of g = 300. Due to symmetry, only half of the column was analysed, with a roller boundary condition imposed on the symmetry line; a roller boundary condition was also applied at the base. The deformed profiles from the different methods are shown in Fig. 18, where the material points are shown as blue dots (for the CPDI methods the centre of the material point domain is plotted). The response of the sMPM, CPDI1 and CPDI2q methods is similar, but the CPDI2t method shows very different material behaviour, with a deformed profile reminiscent of the collapse of a purely frictional material. This response is at odds with the plasticity model used in the simulation and again highlights a potentially serious deficiency of the CPDI2t method. The horizontal displacement of the lower-right corner of the column during the collapse is also plotted in Fig. 18. In the first step, the material response is elastic and all methods predict a very similar response. In the subsequent plastic regime, the displacement-gravity response is more linear with the CPDI2t than with the other methods.

Azimuthal shear of an annulus
The final simulation presented in this paper is an annulus of elasto-plastic material subjected to an internal twist with a fixed outer boundary. This example was included to test the methods' performance on problems with large torsional deformation, which is relevant to our project on modelling the soil response to the installation of screwed-in pile foundations. Two different configurations are examined: circular internal and external boundaries, and an elliptical internal boundary with a circular external boundary. The problem modelled here is closely related to that studied in [37].

Circular annulus
The geometry, boundary conditions and finite element background mesh for this problem are shown in Fig. 19. The inner radius was R_i = 5 and the outer radius R_o = 10. In these simulations, incremental rotation was applied directly via a displacement boundary condition on the inner surface, and the moving mesh strategy for rotation [24] was adopted to fix the background mesh to the inner circle. To verify the setup of this model, a small rotation of α = 10° is first considered, and the numerical results are compared to FE results. In this case the deformed finite element mesh suffers only minor distortion; it is therefore assumed that the finite element results provide a reasonable reference solution. The FEM [38] was used to analyse the problem with both fine (160 × 1152 elements) and coarse (10 × 72 elements) discretisations. The agreement between the fine and coarse mesh results (shown in Fig. 20) suggests that the coarse mesh is adequate for this problem.
Therefore, this coarse mesh was used as the computational mesh in the simulations with the sMPM and CPDI approaches. In all simulations, four material points per element were used. All simulations were run with a rotation increment of Δα = 5° in the clockwise direction, up to α = 10° in two load steps. As the deformation field is axisymmetric, the magnitude of the displacements along a radial line through the problem domain is compared in Fig. 20. The mesh nodes along the radial line with θ = 0° were selected as sampling points, indicated by the markers in Fig. 19. In the FEM simulation, these nodal displacements were obtained directly. However, in the sMPM and CPDI1 methods, sampling points had to be added as infinitesimal-volume material points in order to obtain comparable total displacements. In contrast, the CPDI2 approaches do not need these extra material points; instead, the displacements at the corners of the particle domains were used. The comparison in Fig. 20 shows that all of the MPM variants produce an accurate solution under small levels of rotational deformation. All of the methods are now applied to simulate the annulus problem up to α = 80°. The magnitude of the reaction force along the inner circle is plotted against the rotation α in Fig. 21 for each method. The reaction forces are almost the same for all of the methods in the elastic region (α < 25°), but there is significant variation in the methods' elasto-plastic responses. When the material yields, the reaction force is expected to decrease, due to elastic unloading accompanying continued plastic deformation of the yielded region close to the inner bore. This is observed in the results of the sMPM and CPDI approaches (at least initially). The reaction force in the FEM erroneously increases due to mesh distortion errors. The reaction force from the CPDI2t approach increases immediately after the post-yield drop, showing that it again faces modelling issues; we have found that the CPDI2t approach performs differently to the other CPDI methods because of the issue with the gradients of its basis functions, as explained earlier (see Section 4.4). At around α = 35°, a larger peak is observed in the sMPM than in the CPDI approaches. The drop following the peak appears to be due to stress relaxation when the inner material yields while the outer material is still elastic, as shown in Fig. 22, where the incremental deformation is shown on the deformed mesh for α = 30°, 45° and 60°. Observe that at the α = 45° step the first layer of mesh nodes inside the physical domain rotates counter-clockwise, opposite to the boundary condition applied clockwise on the inner circle, showing the stress relaxation. Both the sMPM and CPDI1 approaches predict the same constant reaction force for α > 50°, but the CPDI2q method shows a slight increase relative to this response, and the CPDI2t method is actually closer to the finite element response than the other MPMs. The erroneous increase in the CPDI2q is caused by the distortion of the particle domains. Recall that in the CPDI2 approaches the particle domains follow the deformation exactly, as in a finite element mesh; the distortion, however, causes more serious errors in the FEM than in the CPDI2 approaches.
Elliptical annulus
The performance of the material point methods was also investigated through the simulation of azimuthal shear with an elliptical inner boundary. In these simulations, the mesh at the ends of the long axis was locally refined because of the higher gradients in the deformation and stress fields expected at these positions. The deformed mesh, with incremental displacements and material points, is shown in Fig. 23 for load steps α = 20°, 30° and 60°. Due to the stress concentration at the ends of the long axis of the ellipse, the material yields earlier than for the circular annulus. As the ellipse rotates within the plastic material, two regions with fewer material points are created in the wake of the ends of the long axis, as shown in Fig. 23 for α = 60°. We also observed that the distortion of the particle domains is significant in this simulation, as seen in Fig. 24. The magnitude of the torque against α, plotted in Fig. 25, shows that the CPDI2 approaches predict an erroneous increase in the torque due to particle domain distortion, and that the degradation in the CPDI2t method is more severe than in the CPDI2q. Both the CPDI1 method and the sMPM predict a more physically realistic response in the plastic region.

Summary of findings
This section has presented a number of numerical examples to verify the implementation of the four different material point approaches and to investigate their performance. In these examples, we have considered various deformation modes, including large stretch, shear and torsion, and different loading conditions, including body forces and displacement boundary conditions. Our implementations of these MPM and CPDI approaches were first verified in the column compression and simple stretch examples, by comparing the numerical solutions to analytical or FEM solutions. In addition, we have shown the convergence behaviour of these approaches, to our knowledge for the first time. It was found that the standard MPM only converges with increasing numbers of material points per element (which reduces cell-crossing instabilities), while the CPDI approaches converge with increasing numbers of background elements for the same number of material points per element. The expected quadratic convergence rate of the NR iterative solver was also demonstrated in the simple stretch example. The MMS was also used to verify these methods and our implementation: the internal torsion of an elastic ring was simulated by applying a body force computed analytically from the manufactured deformation. The comparison of the methods for this problem, however, shows no clear evidence that one method is more accurate than the others for this deformation field. Notably, the elastic material response involves a gradually varying deformation and cannot reproduce the very large distortion seen in the elasto-plastic simulations, e.g. Figs. 22 and 24. There is an indication that the CPDI1 method starts to plateau with continued mesh refinement, whereas the others continue to converge at a quadratic rate. The other examples serve to illustrate some important features of the methods which affect their ability to model certain deformation fields. The corner stretch example clearly demonstrates both the benefits and drawbacks of the CPDI2 approaches. They can clearly track the deformation of the physical domain better than the standard MPM and CPDI1 approach; in particular, the particle domains in the CPDI2 approaches can exactly represent the deformed domain. However, this characteristic is also a source of erroneous predictions when using the CPDI2 methods.
This was shown in the annulus example, where we observed an erroneous increase in the torque predicted by the CPDI2 approaches after the material has yielded. In contrast, the standard MPM and CPDI1 approaches predict physically reasonable responses: almost constant reaction force and torque after the material yields. This error is caused by the distortion of the particle domains, as shown in Fig. 24, and may be surprising, in that the later-developed method (CPDI2) fails where earlier methods succeed. Raising this awareness is important, because this pattern of very large distortion in a region of material yielding is very common in geotechnical engineering, e.g. the installation of a screw pile, a shear vane test, etc. We have also found that the results from the CPDI2 method with triangular particle domains differ considerably from those of the other methods in some problems. In the corner stretch example, we showed that there is an inherent error in the calculation of the gradients of the basis functions, such that some components of the basis function gradients are identically zero, which leads to an unrealistic response compared to the other approaches. It appears that further work is required to remedy the CPDI2t approach.

Conclusion
This paper has presented a unified computational framework for the standard MPM and its latest particle-domain-based extensions, the CPDI approaches. The framework has been verified and then applied to problems involving large elasto-plastic deformation with different dominant deformation modes, namely stretch, shear and torsion. CPDI2 approaches can increase the stability of material point analyses, as they reduce cell-crossing problems and provide an accurate representation of the physical domain. However, they effectively include an unstructured mesh of particle domains that, like finite element meshes, can suffer from erroneous results due to domain distortion, especially under torsional deformation. In addition, the spatial gradients of the CPDI2t method's basis functions degenerate under certain conditions, leading to physically unrealistic results. The sMPM and CPDI1 are therefore preferred to the CPDI2 approaches for simulating large elasto-plastic deformation problems involving torsional deformation modes.
A.1. Manufactured deformation field and body force
Quasi-static equilibrium of the undeformed body requires

Div[P] + ρ_0 {b} = {0},        (A.2)

where Div is the divergence with respect to the Cartesian basis ({E_1}, {E_2}, {E_3}), [P] is the 1st Piola-Kirchhoff (PK1) stress, ρ_0 is the initial density of the material, and {b} is the body force. For the problem of axisymmetric torsional deformation in Fig. 9, it is more convenient to compute the body force via the polar coordinate system than by direct computation in the Cartesian system, as shown in [36]. The polar coordinates of a point are denoted by (R, Θ) in the reference configuration and (r, θ) in the deformed configuration. Following equations (54)-(55) in [36] and neglecting the inertia term, given our focus on quasi-static analysis, the body force can be computed through the polar-form expressions (A.4)-(A.5), where [P_F] is the PK1 stress associated with the deformation F, and ϵ is the shear strain, defined as

ϵ = R ∂α/∂R,

with F an angle-independent 'baseline' deformation representing simple shear without superimposed rotation. t is a pseudo-time indicating the particular loading in this quasi-static analysis; h is a function of R, and g is a function of t. α is the rotation angle, which depends on the radial coordinate of the material point, R, and the imposed degree of deformation, through

α(R, t) = h(R) g(t),        (A.8)

where h(R) controls the radial deformation field and g(t) the deformation magnitude. In the definitions of h and g, R_i and R_o are the inner and outer radii and α_0 is the maximum imposed rotation; h vanishes at both radial surfaces, consistent with the fully fixed boundaries. The deformation gradient expressed in the Cartesian frame follows from differentiating the mapping, where Θ is the circumferential coordinate of a point in the reference configuration.

A.2. Stress and its derivatives
The formulation used in this paper adopts a linear relationship between the Kirchhoff stress and the logarithmic strain, which is a function of the shear strain, ϵ. Finally, we use a finite difference technique to approximate the derivative of [P_F] with respect to the imposed shear strain, that is,

∂[P_F]/∂ϵ ≈ ([P_F(ϵ + δϵ)] − [P_F(ϵ)]) / δϵ,

where δϵ is an infinitesimal increment in the shear strain, taken to be 1 × 10^-6. These derivatives are used in (A.4)-(A.5) to determine the body force.
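A minimal sketch of the finite-difference derivative in A.2, for a generic stress function:

```python
import numpy as np

def dP_deps(P_of_eps, eps, d_eps=1e-6):
    """Forward finite difference approximation of the derivative of the
    PK1 stress with respect to the imposed shear strain (cf. A.2)."""
    return (P_of_eps(eps + d_eps) - P_of_eps(eps)) / d_eps

# Check against a stress with a known derivative: P(eps) = eps**2 * I.
P = lambda e: e**2 * np.eye(2)
print(dP_deps(P, 0.3))     # approx 2 * 0.3 * I = 0.6 * I
```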
Thermoelectric effect in high mobility single layer epitaxial graphene

The thermoelectric response of high mobility single layer epitaxial graphene on silicon carbide substrates has been investigated as a function of temperature and magnetic field. For the temperature dependence of the thermopower, a strong deviation from the Mott relation is observed even when the carrier density is high, which reflects the importance of the screening effect. In the quantum Hall regime, the amplitude of the thermopower peaks is lower than the quantum value predicted by theories, despite the high mobility of the sample. A systematic reduction of the amplitude with decreasing temperature suggests that the suppression of the thermopower is intrinsic to Dirac electrons in graphene.

Graphene, a single layer of graphite, has a unique band structure, in which electrons are described by the relativistic Dirac equation. Extensive electrical transport studies have been performed to understand Dirac electrons in the material. Compared with electrical transport, thermoelectric properties provide complementary information on the electronic structure and the details of electron scattering, but investigations have started only recently [1-12]. The newly discovered topological insulators, whose surface-state electrons are also Dirac electrons, are extraordinary thermoelectric materials. The possibility of further improving the performance of their nanostructures by exploiting the Dirac nature of electrons also calls for studies on the thermoelectric response of Dirac electrons [13]. The thermoelectric effect of Dirac electrons has been experimentally investigated in exfoliated graphene on SiO2 [6-8]. It was found that the Mott relation, which is used to describe the thermoelectric effect in conventional 2-dimensional electron gases, is basically obeyed, but not in the vicinity of the charge neutrality point. In the quantum Hall regime, although theories predict a quantized value for the thermopower [10,11], experiments saw a smaller value [6,8]. In those studies, the mobilities of the samples were low, of the order of a few thousand cm2/V·s. Thus, questions have been raised about whether the quantization of the thermopower is intrinsic to graphene and can be realized in high mobility samples [6,8]. Generally, high mobility is crucial for studying the intrinsic properties of Dirac electrons. Therefore, achieving high mobility in graphene samples has been a main effort in many experiments. A few milestone experiments are indeed consequences of improved or new techniques for obtaining high mobilities; e.g., the recent success in greatly improving the mobility of graphene by suspending and in situ annealing it directly led to the observation of the fractional quantum Hall effect [14,15].
In this study, we grow high mobility single layer epitaxial graphene on silicon carbide substrates and study its thermoelectric response as a function of temperature and magnetic field. The temperature dependence deviates from the Mott relation, revealing the importance of the screening effect. In a quantizing magnetic field, the thermopower is suppressed and shows an unexpected temperature dependence, inconsistent with theories of the thermoelectric effect for Dirac electrons. High mobility single layer epitaxial graphene samples are grown on the carbon face of SiC (0001) in a high vacuum furnace [16-19]. Thermoelectric measurements are carried out following a technique developed by Kim et al. [20], also used in previous thermoelectric studies on exfoliated graphene. In this technique, a local heater made of a metal line produces a temperature difference ΔT between the two ends of a sample, which gives rise to a thermoelectric voltage ΔV. The temperatures of the two ends are measured by two local thermometers which are also made of metal lines, as seen in the bottom right inset of Fig. 1. The thermopower is S_xx = −ΔV/ΔT. When a magnetic field is applied perpendicular to the graphene plane, a transverse thermoelectric voltage is generated, the so-called Nernst effect, defined as S_yx = E_y/∇T. These two components determine the thermopower tensor S. The thermoelectrical conductivity tensor can then be computed by α = σ·S, where σ is the electrical conductivity tensor. In our experiment, a low frequency ω ac current is applied to the heater. The voltage across the sample is then measured at frequency 2ω. The thermopower is compared with that measured by a DC method to rule out spurious signals. In addition, the voltage across the sample is found to be linearly proportional to the temperature difference, which confirms its thermoelectric origin. Throughout the measurement, the temperature difference is always maintained at a level much less than the substrate temperature. The sample shown here is hole doped at a level of 1 × 10^12 cm^-2. The mobility is about 20,000 cm2/V·s. The Mott relation,

S = −(π² k_B² T / 3e) (d ln σ / dE)|_{E=E_F},

has been used to describe the diffusion thermopower of conventional electron gases. It is interesting to see whether it holds for Dirac electrons. The temperature dependence of the thermopower S_xx of the sample is plotted in Fig. 1, and displays a strong nonlinearity. It can be well fit by a linear dependence plus a quadratic correction, AT + BT². A dominant linear dependence was observed in exfoliated graphene, with only a slight deviation appearing at higher temperatures [6,7]. The linearity can be explained by the Mott relation; the question is what causes the deviation. A phonon drag effect is unlikely, not only because the electron-phonon coupling is weak in graphene, but because it gives rise to a large power index, usually no less than 4 [21]. Hwang et al. studied the temperature dependence of the thermopower of graphene [4]. They found that when the screening effect and its temperature dependence are taken into account, a quadratic correction to the thermopower appears. Note that the dielectric constant of SiC is about 10, over a factor of 2 higher than that of SiO2. The screening is consequently stronger in epitaxial graphene than in exfoliated graphene on SiO2, which accounts for the stronger nonlinearity observed here.
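For scale, the Mott diffusion thermopower can be evaluated for graphene under the simplifying assumption σ ∝ E² (linear density of states with an energy-independent mean free path), so that d ln σ/dE = 2/E_F. This assumption and the nominal Fermi velocity are illustrative, not the paper's analysis:

```python
import numpy as np

kB = 1.380649e-23       # J/K
e = 1.602176634e-19     # C
hbar = 1.054571817e-34  # J*s
vF = 1.0e6              # graphene Fermi velocity, m/s (nominal)

def mott_thermopower(T, n_cm2):
    """Magnitude of the Mott diffusion thermopower (V/K) for graphene,
    assuming sigma ~ E^2 so that dln(sigma)/dE = 2/E_F, with
    E_F = hbar * vF * sqrt(pi * n)."""
    n = n_cm2 * 1e4                      # carriers per m^2
    EF = hbar * vF * np.sqrt(np.pi * n)  # Fermi energy in J
    return (np.pi**2 * kB**2 * T / (3 * e)) * (2 / EF)

# Hole density from the text, 1e12 cm^-2, at T = 100 K:
print(mott_thermopower(100.0, 1e12) * 1e6, "uV/K")   # tens of uV/K
```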
The magnetic field dependence of the thermopower S_xx at low temperature displays reproducible fluctuations in low field, as shown in Fig. 2. The fluctuations are a manifestation of phase coherent transport, of the same origin as universal conductance fluctuations (UCFs) [22]. The large amplitude of the oscillations indicates a long coherence length, which is consistent with the high mobility of the sample. Girvin and Jonson have calculated the thermoelectric tensor for a conventional 2-dimensional electron gas subject to a quantizing magnetic field using a generalized Mott formula [23,24]. They found that the thermopower exhibits a large peak when a Landau level is half filled. The peak value is quantized at (k_B/e) ln 2/(n + 1/2) and is independent of temperature and magnetic field. Here k_B, e and n denote the Boltzmann constant, the electron charge and the Landau index, respectively. Recent theoretical work confirmed the same quantization for Dirac electrons in graphene, except that n + 1/2 is replaced by n because of the anomalous Berry's phase of π [10,11]. We have measured S_xx and S_yx in high magnetic field. Both show strong quantum oscillations periodic in 1/B, as seen in Fig. 3a and b. The oscillations are the consequence of the formation of Landau levels and can be understood through the change of the density of states at the Fermi level. That is, when the Fermi level lies in the localized states, S_xx becomes zero because of the absence of diffusion. The oscillations of S_xx slightly overshoot y = 0, resulting in small negative minima of about -2 μV/K. The amplitude of the S_xx oscillations at each Landau level n is obtained by subtracting the minima contour from the corresponding peak height. In panel c of Fig. 3, the amplitude A_S multiplied by its Landau index n is plotted against temperature. At high temperature, A_S·n for all Landau levels increases with decreasing temperature, which can be explained by the reduction of the thermal broadening of the Landau levels. At low temperature in the quantum Hall regime, according to the theoretical result, A_S·n should collapse to a temperature independent value of (k_B/e) ln 2 ≈ 59.6 μV/K. Interestingly, in our experiment, A_S·n reaches 52 μV/K and then starts to decrease, showing a turning point. The lower the Landau index n, the higher the temperature at which the turning point occurs and the stronger the suppression at 10 K. This systematic trend suggests that deep in the quantum Hall regime, S_xx is suppressed, contradicting the temperature independent quantum value predicted by theories. On the other hand, S_yx oscillates around zero. The phase of the S_yx oscillations is shifted by about π/4 with respect to S_xx, consistent with the generalized Mott formula. Note that, on top of the oscillations, S_yx also exhibits fluctuations, which are stronger than those in S_xx. This is due to the shorter distance between the two Hall probes that are used to measure S_yx. The slope of S_yx in low field is linear in temperature. The Nernst effect under weak magnetic field in graphene has been studied theoretically [12]; it was found that the linearity depends on the details of electron scattering. It would be interesting to extract such information from our experiment. Unfortunately, the theoretical result is numerical, and a direct comparison between the theory and our experiment cannot easily be made. Another feature is that S_yx changes sign at 2.8 T even when the temperature is so high that the Landau levels are thermally smeared out. The question may be raised whether two types of carriers coexist in the sample. However, a perfectly straight Hall resistance as a function of magnetic field, seen in the inset of Fig. 1, rules out this possibility.
Moreover, no clear indication of two frequencies can be seen in the SdH oscillations. Unlike the resistivity, which is determined by the scattering time, the thermoelectric effect is sensitive to the energy dependence of the scattering time. Therefore, the intriguing features seen in the low field S_yx, which are in sharp contrast to the trivial behavior of the Hall resistance, are most likely a manifestation of the details of the scattering mechanism. Further investigation may provide insight into the scattering processes in graphene. In some cases, it is intuitive to look at the thermoelectrical conductivity α; for instance, α_xy can be linked to the electron entropy [10]. Having measured the conductivity tensor σ and the thermopower tensor S, the calculation of α is straightforward, α = σ·S. In particular:

α_xx = σ_xx S_xx + σ_xy S_yx,  (1)
α_xy = σ_xy S_xx − σ_xx S_yx.  (2)

In Fig. 4, we plot the two components of the tensor, α_xx and α_xy, as a function of magnetic field at different temperatures. Similar to S_yx, both components display quantum oscillations at low temperatures and a change of sign in the intermediate field regime at high temperatures. Like S_xx, α_xy has also been predicted to quantize, at (g k_B e/h) ln 2 ≈ 9.2 nA/K, when a Landau level is half filled. Here g is the total degeneracy. Attempts to experimentally test both quantizations have been made, while no link between them has been provided [6,8]. The formulae for the two quantum values are very similar; in fact, they are closely related, as explained in the following manner. Note that the oscillations of S_xx are in phase with σ_xx and ahead of S_xy by π/4. When a Landau level is half filled, S_yx becomes zero, hence so does the second term on the right side of Eq. (2). At the same time, σ_xy is at the middle of two quantum Hall plateaus. Therefore, according to Eq. (2), α_xy = σ_xy S_xx at the half-filled Landau level. It is clear that a suppression of the S_xx peak will lead to a suppression of α_xy. However, in our experiment, the amplitude of the α_xy oscillations reaches 15 nA/K, over 50% larger than the quantum value of 9.2 nA/K, inconsistent with the smaller value of S_xx. We want to point out that S_yx is subject to a large uncertainty because the temperature gradient cannot be measured directly and is instead extrapolated, not to mention that α_xy is calculated from four experimentally measured quantities. Consequently, we believe that it is much more reliable to test the quantization of S_xx than that of α_xy. Nevertheless, a temperature dependence of α_xy similar to that of S_xx is observed at low temperature. Most of the theories on the thermoelectric effect in graphene have found that the Mott relation holds well except at very low carrier density, and that S_xx in the quantum Hall regime has maxima of (k_B/e) ln 2/n. Previous experiments all seem more or less consistent with these predictions. However, it is worth noting that an S_xx maximum smaller than the theoretical prediction was always observed in experiments and ascribed to the low mobilities of the samples [6,8]. In our sample, which has a significantly higher mobility, a suppression of S_xx is still seen. This suggests that the suppression of the S_xx peak may be intrinsic to graphene. The systematic reduction of S_xx with decreasing temperature for different Landau levels further strongly supports this suggestion. Although most theories predict a temperature independent S_xx peak in the QHE regime, there is one exception, to the best of our knowledge: Zhu et al. have studied the temperature dependence of the amplitudes of both the S_xx and α_xy peaks [11]. A linear dependence was found at low temperature, while the amplitude saturates at the quantum value at high temperature. However, the numerical computation was performed for extremely high magnetic fields, of the order of hundreds of tesla. It is not clear if the suppression we saw in a much lower field (< 10 T) is indeed the linear dependence predicted by the theory. Bergman et al. have suggested that further theoretical work, taking into account inelastic processes, might provide a clue [10].
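The two quoted quantum values follow directly from fundamental constants (with total degeneracy g = 4 for graphene); a quick numerical check:

```python
import numpy as np

kB = 1.380649e-23      # J/K
e = 1.602176634e-19    # C
h = 6.62607015e-34     # J*s

# Quantized thermopower peak for graphene: (kB/e) * ln2 / n per Landau level n.
for n in (1, 2, 3):
    print(n, kB / e * np.log(2) / n * 1e6, "uV/K")   # 59.6, 29.8, 19.9

# Quantized alpha_xy: g * kB * e / h * ln2, with total degeneracy g = 4.
print(4 * kB * e / h * np.log(2) * 1e9, "nA/K")      # approx 9.2
```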
In conclusion, we have studied the thermoelectric response of high mobility single layer epitaxial graphene. We observe a strong deviation from the Mott relation, even when the system is not in the vicinity of the Dirac point. In a magnetic field, while the Hall resistivity displays a trivial linear dependence, the Nernst signal shows a nonmonotonic behavior, which we believe is related to different scattering mechanisms. In the quantum Hall regime, contrary to theories, the amplitude of the thermopower peaks is lower than the quantum value, even though the mobility of the sample is high. The suppression of the thermopower is further confirmed by its systematic reduction with decreasing temperature. To understand these behaviors, further theoretical work that takes into account inelastic processes of Dirac electrons is needed. This work was supported by NSF grant DMR-0820382 and the W. M. Keck Foundation. X.W. thanks Xin-Zhong Yan for helpful discussions.
Dendrite Suppression by Shock Electrodeposition in Charged Porous Media

It is shown that surface conduction can stabilize electrodeposition in random, charged porous media at high rates, above the diffusion-limited current. After linear sweep voltammetry and impedance spectroscopy, copper electrodeposits are visualized by scanning electron microscopy and energy dispersive spectroscopy in two different porous separators (cellulose nitrate, polyethylene), whose surfaces are modified by layer-by-layer deposition of positively or negatively charged polyelectrolytes. Above the limiting current, surface conduction inhibits growth in the positive separators and produces irregular dendrites, while it enhances growth and suppresses dendrites behind a deionization shock in the negative separators, also leading to improved cycle life. The discovery of stable uniform growth in the random media differs from the non-uniform growth observed in parallel nanopores and cannot be explained by classic quasi-steady "leaky membrane" models, which always predict instability and dendritic growth. Instead, the experimental results suggest that transient electro-diffusion in random porous media imparts the stability of a deionization shock to the growing metal interface behind it. Shock electrodeposition could be exploited to enhance the cycle life and recharging rate of metal batteries or to accelerate the fabrication of metal matrix composite coatings.

In this work, we show that charged porous media ("leaky membranes" [41]) can either stabilize or destabilize metal electrodeposition at high rates, depending on the sign of their surface charge. Our initial model system is a symmetric copper cell consisting of a porous cellulose nitrate (CN) or polyethylene (PE) separator with positive or negative polyelectrolyte coatings, which is compressed between two flat copper electrodes in copper sulfate solutions. The current-voltage relations in both cases (Fig. 1a,b) show common plateaus around the diffusion-limited current, because surface conduction is negligible compared to bulk electro-diffusion. At higher voltages, however, strong salt depletion occurs at the cathode, and dramatic effects of the surface charge are observed (Fig. 1c). The positive separator exhibits reduced cation flux, opposed by surface conduction [42], while the negative separator exhibits over-limiting current (OLC) sustained by surface conduction [7,41,42], which also leads to a transient deionization shock [30,40,43,44] ahead of the growth. We have discovered that the interaction between these nonlinear transport phenomena and the growing deposit is strongly dependent on the porous microstructure, as shown in Fig. 2. In a recent publication [42], we showed that surface conduction can profoundly influence the pore-scale morphology of copper growth in ordered anodic aluminum oxide (AAO) membranes. In such materials, with non-intersecting parallel nanopores, diffusion-limited metal growth is inherently non-uniform and leads to a "race of nanowires" [45]. Above the limiting current, there is a transition to new non-uniform growth modes: either nanotubes following separate deionization shock waves in each pore of the negatively-charged membrane (Fig. 2b), or slowly penetrating, pore-center dendrites in the positively-charged membrane (Fig. 2a). Here, we demonstrate nearly opposite effects of surface conduction on the electrode-scale morphology in random CN membranes with well-connected pore networks.
Above the limiting current, some low-density dendritic structures penetrate into the positive membrane (Fig. 2c), but, remarkably, the growth is uniform, dense, and reversible in the negative membrane, which we attribute to the propagation of a single flat, stable deionization shock ahead of the deposit (Fig. 2d).

Theory
In porous media, the physical mechanisms for OLC are very different from those in free solutions and are just beginning to be explored. According to theory [7], supported by recent microfluidic experiments [44], if the counterions (opposite to the pore surface charge) are the ones being removed, then extended space charge is suppressed, and electro-osmotic instability is replaced by two new mechanisms for OLC: (1) surface conduction by electromigration, which dominates in submicron pores [41,42], and (2) surface convection by electro-osmotic flow, which dominates in micron-scale pores [29,38,41,46]. Regardless of whether OLC is sustained by constant current [40,43] or constant voltage [47], the ion concentration profile develops an approximate discontinuity that propagates into the porous medium, leaving highly deionized fluid in its wake, until it relaxes to a steady linear profile in a finite porous slab [41,48]. This "deionization shock wave" [40] is analogous to concentration shocks in chromatography, pressure shock waves in gases, stop-and-go traffic, glaciers, and other nonlinear kinematic waves [49]. The influence of surface conduction on electrodeposition was recently discovered in our investigations of copper electrodeposition in AAO membranes with modified surface charge [42]. Below the limiting current, surface conduction is negligible if the double layers are thin (small Dukhin number), but surface conduction profoundly affects the growth at high currents. With positive surface charge, growth is blocked at the limiting current by oppositely-directed surface conduction (electro-migration) and surface convection (electro-osmotic flow); above a critical voltage, some dendrites are observed avoiding the pore walls, likely fed by vortices of reverse electro-osmotic flow returning along the pore centers. With negative surface charge, the growth is enhanced by surface conduction until the same critical voltage, when surface dendrites and ultimately smooth surface films grow rapidly along the walls. These phenomena are consistent with the theory of OLC in a single microchannel [7,44], but we expect different behavior in random media with interconnected pores. The motivation for our experiments is the theoretical prediction that a flat deionization shockwave is nonlinearly stable to shape perturbations [40]; we hypothesize that this stability could be imparted to an electrodeposit growing behind a propagating shock. In free solution, dendritic growth occurs soon after salt depletion, owing to the simple fact that a surface protrusion receives more flux, thereby causing it to protrude further [1] (Fig. 3a). This is the fundamental instability mechanism of Laplacian growth, which leads to fractal patterns by continuous and deterministic viscous fingering [50] or discrete and stochastic diffusion-limited aggregation (DLA) [2].
In contrast, the propagation of deionization shockwaves is controlled "from behind" by the high resistance of the ion depletion zone. As shown in Fig. 3b, a lagging region of the shock will have more flux leaving by surface conduction, causing it to advance back to the stable flat shape. (Fig. 2: In parallel nanopores, (a) positive surface charge suppresses metal penetration or allows thin dendrites avoiding the pore walls, while (b) negative charge promotes non-uniform surface coverage leading to metal nanotubes of different lengths growing behind deionization shock waves (dashed lines). In well-connected, random nanopores, (c) positive surface charge blocks penetration or allows low-density porous dendrites, while (d) negative charge leads to a flat metal-matrix composite film, stabilized by a macroscopic shock wave propagating ahead of the growth.) The dynamics of a thin shock is thus equivalent to Laplacian dissolution [40], the stable time reversal of Laplacian growth [51], and this suggests that transport-limited electrochemical processes occurring behind the shock might proceed more uniformly as well. What would happen if a stable deionization shock precedes an unstable growing electrodeposit in a charged porous medium? According to the simplest theoretical description, the classical [52] "leaky membrane" model [40,41,48] (LMM), the answer depends on the importance of transient diffusion ahead of the shock. The LMM consists of mass conservation for the ion concentrations,

∂c_i/∂t + ∇·(−D_i ∇c_i − M_i z_i e c_i ∇φ + u c_i) = 0,   (1)

macroscopic electroneutrality, including the surface charge density per volume, ρ_s,

Σ_i z_i e c_i + ρ_s = 0,   (2)

and an incompressible mean flow, driven by gradients in dynamical pressure, electrostatic potential, and chemical potential, respectively,

u = −k_D ∇p − k_EO ∇φ − k_DO ∇μ,  ∇·u = 0.   (3)

The macroscopic ionic diffusivities, D_i, and mobilities, M_i, Darcy permeability, k_D, electro-osmotic mobility, k_EO, and diffusio-osmotic mobility, k_DO, depend on c_i and φ, but not on their gradients or (explicitly) on position. This approximation is reasonable for surface conduction in nanopores, but neglects hydrodynamic dispersion due to electro-osmotic flow in micron-sized pores [53] or pore network loops [38], for which no simple model is available [29,46]. Assuming a transport-limited growth process, the moving electrode surface has Dirichlet (c_i = φ = 0) and Neumann (n·u = 0) boundary conditions. With these general assumptions, we observe that the steady-state LMM, Eqs (1-3), falls into Bazant's class of conformally invariant nonlinear partial differential equations [54]. The profound implication is that quasi-steady transport-limited growth in a leaky membrane (with growth velocity opposite to the active-ion flux, v ∝ −F_1) is in the same universality class [55] as Laplacian growth [56,57] and is thus always unstable. This explains the recent theoretical prediction that negative charge in a leaky membrane cannot stabilize quasi-steady electrodeposition, although it can reduce the growth rate of the instability [58], consistent with the improved cycle life of lithium batteries with tethered anions in the separator [21,27]. In contrast, copper electrodeposition experiments in free solution have shown that the salt concentration profile is unsteady prior to interfacial instability [33] and forms a "diffusive wave" ahead of growing dendrites [4-7] with the same asymptotic profile as a deionization shock [40]. In a negatively charged medium, before the salt concentration vanishes at Sand's time, the diffusion layer sharpens and propagates away from the electrode as a deionization shock [41,48], which could perhaps lead to stable, uniform "shock electrodeposition" in its wake, as outlined in Fig. 3c.
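The shock dynamics invoked here shares its mathematical structure with generic nonlinear kinematic waves. The sketch below is a minimal illustration (a scalar conservation law with a convex flux, solved by first-order upwinding), not a solution of the LMM (1)-(3): a smooth concentration profile steepens into a propagating shock.

```python
import numpy as np

# Generic kinematic wave: dc/dt + d f(c)/dx = 0 with convex flux f(c) = c^2/2.
# A smooth initial profile steepens into a propagating shock, the same
# mathematical structure as the deionization shocks discussed above.
nx = 400
dx = 1.0 / nx
c = 0.5 + 0.4 * np.tanh((0.3 - np.linspace(0, 1, nx)) / 0.1)  # smooth step
dt = 0.5 * dx / c.max()                                       # CFL condition

flux = lambda c: 0.5 * c**2
for step in range(600):
    F = flux(c)
    c[1:] -= dt / dx * (F[1:] - F[:-1])   # upwind: information moves right

# The front has sharpened: measure the maximum spatial gradient.
print("max |dc/dx| =", np.abs(np.diff(c) / dx).max())
```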
Since the LMM neglects many important processes, however, such as surface diffusion [59], surface convection [29,53,59], pore-scale heterogeneity [60], and electro-hydrodynamic dispersion [38,46,53], we turn to experiments to answer this question.

Experimental Results
In order to isolate the effects of charged porous media, we use the same copper system (Cu|CuSO4|Cu) studied over the past three decades by physicists as a canonical example of diffusion-limited pattern formation [1,3]. Compared to lithium electrodeposition and electrodissolution, which involve complex side reactions related to the formation and evolution of the solid-electrolyte interphase (SEI), this system is simple enough to allow quantitative interpretation of voltammetry in nanopores [42] and microchannels [3,7,33]. A unique feature of our experiments is that we control the surface conductivity by modifying the separator surface charge by layer-by-layer (LBL) deposition of charged polymers. We also demonstrate the role of pore connectivity for the first time by choosing random porous media, such as cellulose nitrate (CN), with similar pore size (200-300 nm) to the parallel nanopores of AAO from our recent study that introduced this method [42]. We denote the charge-modified positive and negative membranes as CN(+) and CN(-), in which excess sulfate ions (SO4 2-) and cupric ions (Cu 2+), respectively, are the dominant counter-ions involved in surface conduction (Fig. 1c). As noted above, voltammetry clearly shows the nonlinear effect of surface conduction. Figure 1a shows current-voltage curves of CN(+) and CN(-) in 10 mM CuSO4 at a scan rate of 1 mV/s, close to steady state. In the low-voltage regime of slow reactions [42] (below -0.07 V), the two curves overlap, since the double layers are thin and surface conduction can be neglected compared to bulk diffusion (small Dukhin number) [38,41]. At the diffusion-limited current, huge differences between CN(+) and CN(-) are suddenly observed. While the current in the CN(+) reaches -1.5 mA around -0.1 V and maintains a limiting current of -1.3 mA, the CN(-) shows a strong linear increase in current, i.e. constant over-limiting conductance. The data are consistent with the surface conduction (SC) mechanism (Fig. 1c), which is sensitive to the sign of the surface charge [42,53]. With negative charge, Cu 2+ counter-ions provide surface conduction to "short circuit" the depletion region and maintain electrodeposition. With positive charge, the SO4 2- counter-ions migrate away from the cathode, further blocking Cu 2+ ions outside the depletion region in order to maintain neutrality. At higher salt concentration, 100 mM CuSO4, sweeping at 10 mV/s, the results are similar (Fig. 1b), with no effect of SC below -0.15 V, a limiting current of -19 mA for CN(+), and overlimiting conductance for CN(-), although the effect of SC is weaker (smaller Dukhin number) and transient current overshoot and oscillations are observed [42,61]. Striking effects of surface charge are also revealed by chronopotentiometry (Fig. 4). When constant OLC (-5 mA) is applied in 10 mM CuSO4 solution, CN(+) exhibits large, random voltage fluctuations, which we attribute to the blocking of cation transport by the reverse SC of SO4 2- counter-ions near the cathode. Large electric fields drive unstable electro-osmotic flows, some dendritic growth, and water electrolysis, consistent with the observed gas bubbles. Metal growth is mostly prevented from entering the CN(+) membrane, so it is easily separated from the cathode after the experiment. In stark contrast, CN(-) maintains a low voltage, around -100 mV, as expected, since the SC of Cu 2+ counter-ions sustains electrodeposition in the OLC regime. More importantly, the steady voltage trace in Fig. 4(b) is consistent with the theoretical motivation above, based on the stability of deionization shock propagation ahead of the growth.
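For reference, the depletion time scale under constant current (Sand's time, discussed above) can be estimated from Sand's equation. The current, concentration and nominal electrode area below are taken from the text; the ambipolar diffusivity and transference number are assumed illustrative values:

```python
import numpy as np

F = 96485.0            # C/mol, Faraday constant

def sand_time(j, c0, D, z=2, t_plus=0.4):
    """Sand's time (s): tau = pi*D*(z*F*c0)^2 / (4*j^2*(1-t_plus)^2),
    for current density j (A/m^2) and bulk concentration c0 (mol/m^3)."""
    return np.pi * D * (z * F * c0)**2 / (4 * j**2 * (1 - t_plus)**2)

# Illustrative numbers: 5 mA over a nominal 1.0 x 1.5 cm electrode in
# 10 mM CuSO4, with assumed D ~ 8.5e-10 m^2/s and t_plus ~ 0.4 for Cu2+.
j = 5e-3 / 1.5e-4      # A/m^2
print(sand_time(j, c0=10.0, D=8.5e-10), "s")   # depletion within seconds
```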
Metal growth is mostly prevented from entering the CN(+ ) membrane, so it is easily separated from the cathode after the experiment. In stark contrast, CN(− ) maintains low voltage around − 100 mV, as expected since the SC of Cu 2+ counter-ions sustains electrodeposition under OLC regime. More importantly, the Fig. 4(b), consistent with the theoretical motivation above, based on the stability of deionization shock propagation ahead of the growth. Figure 5 clearly shows the suppression of dendritic instability. When OLC (− 20 mA) is applied in 100 mM CuSO 4 for 2000 s, irregular electrodeposits are generated in CN(+ ) (Fig. 5a). This imposed current exceeds the limiting current (− 17 mA) measured by voltammetry (Fig. 1b), so the observed low-density stochastic growth, which is opposed by surface conduction, may result from vortices of surface electroconvection, driven in the reverse direction by huge electric field in the depletion region. Once again, under the same experimental conditions, we obtain a highly uniform Cu film in CN(− ) (Fig. 5b) by shock electrodeposition. The difference in morphology of Cu electrodeposits between CN(+ ) and CN(− ) can also be precisely confirmed by EDS mapping analysis of Cu element (Fig. 5c,d). The Cu film in CN(− ) shows more compact and flat morphology, consistent with simple estimates of the metal density. Based on the applied current (− 20 mA), nominal electrode area (1.0 cm × 1.5 cm) and time (2000 s), pure copper would reach a thickness of 19.6 μ m, which would be increased by porosity, but also lowered by fringe currents, side reactions, and metal growth underneath the membrane. The penetration of copper dendrites in CN(+ ) to a mean distance of 45 μ m, supports the direct observation of low density ramified deposits, while the smaller penetration, 12.8 μ m, into CN(− ) suggests that shock electrodeposits densely fill the pores. The variation of morphology with applied current is demonstrated in Fig. 6. For under-limiting current (− 15 mA), both cases exhibit a uniform Cu film (Fig. 6a,b), independent of surface charge, as expected when surface conduction is weak compared to bulk electro-diffusion within the pores (small Dukhin number). As the applied current is increased, highly irregular, dendritic electrodeposits are generated in CN(+ ). When extreme OLC (− 25 mA) is applied, CN(+ ) shows much less dense dendritic growth, and weak adhesion of the membrane to the cathode leading to its peeling off (Fig. 6e). On the other hand, shock electrodeposition in CN(− ) suppresses dendritic growth and produces uniform, dense Cu films, which show signs of instability only at very high currents (Fig. 6f). The observed morphologies shed light on the different cycling behavior for positive and negative membranes under extreme currents (± 25 mA), as shown in Fig. 6g. The unstable dendritic growth of CN(+ ) results in short-circuit paths that cause the voltage to drop quickly to 5 mV in the first cycle. Although further cycles are possible, the voltage never recovers. In contrast, the more uniform growth observed in CN(− ) is associated with stable cycling around ± 100 mV, in spite of the large nominal current density (± 18.8 mA/cm 2 ), well above the limiting current during 10 hours. After eleven cycles, short circuit occurred. Improved cycling life has also recently been reported for lithium metal anodes with separators having tethered anions 20 , albeit at much lower currents (0.5 mA/cm 2 ) without observing the deposits. 
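The thickness estimate above follows from Faraday's law. The sketch below is a minimal check using the experimental values quoted in the text and standard constants for copper; note that with the two-electron reduction of Cu 2+ the dense-film equivalent is about 9.8 μm (the 19.6 μm figure is recovered if one electron per ion is assumed), and dividing by an assumed mid-range membrane porosity gives a pore-filling depth close to the measured 12.8 μm penetration in CN(−).

```python
# Faraday's-law estimate of the dense-copper film thickness after galvanostatic
# deposition. Current, time, and area are from the experiment described above;
# n, M, and rho are standard constants; eps is an assumed mid-range porosity.
F = 96485.0        # Faraday constant [C/mol]
I = 20e-3          # applied current [A]
t = 2000.0         # deposition time [s]
n = 2              # electrons per Cu2+ reduced
M = 63.55          # molar mass of Cu [g/mol]
rho = 8.96         # density of Cu [g/cm^3]
A = 1.0 * 1.5      # nominal electrode area [cm^2]

thickness_cm = (I * t / (n * F)) * M / (rho * A)
print(f"dense-film thickness ~ {thickness_cm * 1e4:.1f} um")  # ~9.8 um for n = 2

# If the metal fills only the pore space of the membrane, the penetration
# depth scales as thickness / porosity:
eps = 0.77  # assumed mid-range CN porosity
print(f"pore-filling depth ~ {thickness_cm * 1e4 / eps:.1f} um")  # ~12.7 um
```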
Our observation of stable shock electrodeposition may thus have broad applicability, including rechargeable metal batteries. In order to investigate the generality of this phenomenon and its potential application to batteries, we repeated the same experimental procedures for several commercially available, porous polymeric battery separators. Here, we report results for a 20 μm thick Celgard K2045 polyethylene (PE) membrane with a pore size of 50 nm, a porosity of 47%, and a tortuosity of 1.5, which was modified using the same layer-by-layer (LBL) assembly sequence for either a positively or negatively charged membrane. As is evident in the voltammetry of PE(+) and PE(−) membranes (Fig. 7a), similar OLC behavior, consistent with the nonlinear effect of surface conduction, is observed as the copper electrode is polarized at a scan rate of 2 mV/s in 10 mM CuSO4 solution. Once diffusion limitation begins to dominate at approximately −0.2 V, consistent discrepancies in the current-voltage curve can again be attributed to surface conduction, which enhances Cu 2+ transport in the PE(−) membrane, while anions (SO4 2−) in the double layer of the PE(+) membrane further block the transport of Cu 2+ inside the depleted region near the cathode. Although the current-voltage response of both PE membranes is similar to that of the CN membranes, minor discrepancies may be observed at voltages below −0.2 V, where differences in the current output are significant. This is possibly a result of differences in solvent uptake, affected by the extent of membrane wetting by the aqueous solvent, despite the fact that the membranes were soaked in electrolyte overnight before cells were assembled for analysis. As in other systems with deionization shock waves, it can be more stable to control the current rather than the voltage 42,62, so we perform galvanostatic electrochemical impedance spectroscopy (GEIS) on PE(+) and PE(−) membranes, in Fig. 7b,c, at different direct-current biases with alternating currents of 10 μA from 100 kHz to 100 mHz. When applying no dc-bias, the impedance in both cases exhibits a similar response, devoid of any diffusional resistance. When applying a dc-bias, the Warburg-like arc for PE(−) shrinks as the current increases, and is surprisingly followed by a "reversed" semicircle at intermediate current densities (−0.5 mA and −1.0 mA in Fig. 7c), which might be attributed to the growth of the copper layer on the cathode, as well as pitting on the surface of the anode during the measurements. In contrast, as a result of ion blocking by surface conduction in PE(+), the low-frequency response becomes noisy. This may also indicate effects of electro-osmotic surface convection 29,44,53, most likely around connected loops in the porous network 38, which could serve to bypass the blocked surface conduction pathways in PE(+) and lead to the observed dendrite penetration. In any case, it is clear that the positive and negative membranes exhibit distinct low-frequency responses with increasing dc-bias, which indicates a significant difference in the mass-transfer mechanism for Cu 2+ associated with the surface charge of the porous medium. Seven copper cells with PE(−) membranes were individually assembled and examined to test the repeatability of our methodology. As is evident in Fig. 8(a), repeatability can be achieved with a stringent LBL-coating procedure and cell-assembly process, further validating our proposition of the surface conduction phenomenon.
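As context for the Warburg-like arcs discussed above, the sketch below evaluates the impedance of a textbook Randles circuit with a semi-infinite Warburg element over the same frequency window used in the GEIS measurements. This is a generic illustration, not the equivalent circuit fitted in this work, and all component values are assumptions chosen only to produce a representative spectrum.

```python
import numpy as np

# Randles circuit with a semi-infinite Warburg element: the standard picture
# behind a kinetic semicircle rolling into a 45-degree diffusion tail.
Rs, Rct, Cdl, sigma_w = 20.0, 150.0, 10e-6, 50.0  # ohm, ohm, F, ohm*s^-0.5 (assumed)

f = np.logspace(5, -1, 200)            # 100 kHz down to 100 mHz
w = 2.0 * np.pi * f
Zw = sigma_w * (1 - 1j) / np.sqrt(w)   # semi-infinite Warburg impedance
Z = Rs + 1.0 / (1j * w * Cdl + 1.0 / (Rct + Zw))

# Nyquist coordinates are Z.real vs -Z.imag; print a few high-frequency points.
print(Z[:3])
```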
We observe a similar current-voltage response for the surface-modified PE membranes in 100 mM CuSO4 solution as for the PE membranes in 10 mM CuSO4 solution. The nonlinear effect of surface conduction dominates the charge transport as the cathode is polarized beyond −0.15 V. As evident in Fig. 8(b), a sharp difference between the current-voltage behavior of the PE(+) and PE(−) membranes further supports the proposition of surface charge sensitivity. The sudden increase in current beyond a voltage of −0.6 V in both cases corresponds to short-circuit conditions, where some copper dendrites have spanned from cathode to anode, thereby allowing electrons to pass freely. To further support the electrochemical evidence for SC-controlled growth, we performed SEM and EDS mapping analyses to examine the morphological differences between copper electrodeposits grown with positive and negative PE membranes. The surface of a random porous membrane before electrodeposition is shown in Fig. 9(a). After galvanostatic deposition of copper onto a silicon wafer (with a thin layer of copper) in 100 mM CuSO4 for 2000 s, two distinct cross-sectional morphologies are observed, depending only on the surface charge of the membrane. In the case of PE(−), in Fig. 9(b,d1), a dense copper film (approx. 8 μm thick) is observed. Due to the existence of denser copper inside the lower portion of the membrane, the upper portion of the membrane above the Cu deposits is tapered, deformed, and torn away when the cell is disassembled for imaging. In contrast, a layer of porous copper grown directly on the wafer is observed for PE(+), in Fig. 9(c,e1). The whole membrane above the Cu deposits is clearly separated from the wafer/copper complex with little adhesion. It is worth mentioning that whether the metal deposits can grow into the porous membranes depends on many factors, e.g. the elasticity, strength, and wettability of the membrane, the salt concentration of the solution, and the applied currents. The common features that emerge from the comparison of Figs 5 and 9 are: (i) negatively charged membranes always produce a uniform layer of metal deposits (Fig. 5b CN(−) and Fig. 9b PE(−)), and (ii) positively charged membranes always yield random/porous structures (Fig. 5a CN(+) and Fig. 9c PE(+)). This direct observation of the dependence of the growth morphology on membrane charge appears to be the first, and rather compelling, validation of the hypothesis depicted in Fig. 3.

Conclusions This work provides fundamental insights into the physics of transport-limited pattern formation in charged porous media. We show that the surface charge and microstructure of porous separators can strongly influence the morphology of copper electrodeposition, which is considered to be the prototypical case of unstable diffusion-limited growth in free solutions. For the first time, we directly observe the suppression of dendritic instability at high rates, exceeding diffusion limitation. With negative surface charge, uniform metal growth is stabilized behind a propagating deionization shock, and reversible cycling is possible. Under the same conditions with positive surface charge, dendrites are blocked from penetrating the medium, and at high rates the growth becomes unstable and cannot be cycled. Besides its fundamental interest, shock electrodeposition may find applications in energy storage and manufacturing.
High-rate rechargeable metal batteries could be enabled by charged porous separators or charged composite metal electrodes 11-13,63. The rapid growth of dense, uniform metal electrodeposits in charged porous media could also be applied to the fabrication of copper 64 or nickel 10 metal-matrix composites for abrasives or wear-resistant coatings.

Methods Chemicals. Polydiallyldimethylammonium chloride (pDADMAC, 100,000-200,000 Mw, 20 wt% in water), poly(styrenesulfonate) (pSS, 70,000 Mw), copper sulfate (CuSO4, ≥98%), sodium chloride (NaCl, ≥98%), and sodium hydroxide (NaOH, ≥98%) are purchased from Aldrich and used without further purification. Ultrapure deionized water is obtained from Thermo Scientific (Model No. 50129872 (3 UV)) or from a Milli-Q Advantage A10 water purification system. Cellulose nitrate (CN) membranes (pore diameter 200-300 nm, porosity 0.66-0.88, thickness 130 μm, diameter 47 mm) are purchased from Whatman. Polyethylene (PE) membranes (K2405) with a pore size of 50 nm, a porosity of 47%, and a thickness of 20 μm are obtained from Celgard. Copper plates (1/8" thickness) were purchased from McMaster Carr and machined down to appropriate dimensions using a water jet cutter.

Sample Preparation. The surface charge of the CN and PE membranes is modified by layer-by-layer (LBL) deposition of charged polyelectrolytes. Polydiallyldimethylammonium chloride (pDADMAC) is deposited directly on the membrane to create a positive surface charge, CN(+). For this, the bare CN is immersed in polycation solution (1 mg/mL pDADMAC in 20 mM NaCl at pH 6) for 30 min. Then, the membrane is triple rinsed (10 min each) with purified water to remove unattached polyelectrolyte. Negatively charged CN(−) is obtained by coating a negative polyelectrolyte (poly(styrenesulfonate), pSS) on the pDADMAC-coated CN by immersion in a polyanion solution (1 mg/mL pSS in 20 mM NaCl at pH 6) for 30 min, followed by the same washing procedure. The polyelectrolyte-coated CN membranes are stored in a CuSO4 solution. The surface charge of the PE membranes is modified using a similar LBL procedure to that described above. Bare PE membranes are air-plasma treated for 10 min before being immersed in pDADMAC solution for 12 h to make the positively charged membrane (PE(+)). The membrane is triple rinsed (30 min each) with purified water to remove any unattached polyelectrolyte. For the negatively charged PE membrane, thoroughly rinsed PE(+) membranes are immersed in pSS for 12 h, followed by the same washing procedure as for the PE(+) membrane. The surface-modified PE membranes are stored in purified water and soaked in a CuSO4 solution for 12 h before cell assembly.

Experimental Apparatus. The experimental set-up is from our previous work (see ref. 42). The modified membrane is clamped between two Cu disk electrodes (13 mm diameter) under constant pressure, where Cu is stripped from the anode and deposited on the cathode. Electrode polishing consists of grinding with fine sandpaper (1200, Norton), followed by a 3.0 μm alumina slurry (No. 50361-05, Type DX, Electron Microscopy Sciences) and thorough rinsing with purified water. For SEM images, a Cu-sputtered Si wafer (1.0 cm × 1.5 cm) is used as the cathode, in place of a copper disk electrode. To prevent evaporation of the binary electrolyte solution inside the CN or PE membrane, the electrochemical cell is immersed in a beaker containing the same electrolyte.
All electrochemical measurements are performed with a potentiostat (Reference 3000, Gamry Instruments). The morphology and composition of the electrodeposited Cu films are confirmed by scanning electron microscopy (SEM) with an energy-dispersive X-ray spectroscopy (EDS) detector (6010LA, JEOL) at 15 kV accelerating voltage.
Locational Analysis of Surface Water Quality, Sediment and Dredge Spoil at Nembe, Bayelsa State, Nigeria

Introduction The experience of anthropogenic climate change and its effects has made it necessary to understand the interactions between humans and the environment better (Eludoyin et al., 2011; Papila et al., 2019). Environmental monitoring activities with relevance to Agenda 21 of the Rio Earth Summit and the Kyoto principles address deforestation, desertification, air, soil and water pollution, erosion, meteorology and climatology as some of the key issues (Gazioğlu et al., 2013; GAF, 1999). Ecologically, the environment is viewed as the physicochemical parameters of the habitat (Gazioğlu, 2018; Ülker et al., 2018). Ecology is the science of plants and animals in relation to their environment. Water bodies provide water for both domestic and industrial use (Simav et al., 2013; Nkwo, 2001). The physicochemical parameters of surface water are the environmental indicators studied in this work (Gazioğlu et al., 2010-2013). According to GAF (1999), the task of monitoring changes to the environment consists of observing environmental indicators over time. Environmental indicators are tools used to measure progress towards long-term sustainability and are observed alongside economic and social indicators. These indicators supply decision-makers with answers to four basic questions about the environment (GAF, 1999): What is happening? Why is it happening? Is it significant? What has to be done about it? Location technology, in addition to aiding data gathering and data management, allows information to be represented more adequately. Point data allow results to be presented in the form of an advanced integrated framework (maps) in support of environmental reporting and policy implementation (Balkıs et al., 2010). The environmental insensitivity of oil surveys in inland waters and rivers results in water pollution in the Niger Delta. Water pollution has thus become a limiting factor in the sustainable development of a number of countries, including Nigeria. Dredging activities unbalance the water ecosystem, so environmental monitoring of the demobilization phase of such projects is very useful in answering the above four questions. There was a need to sweep and dredge Obama Creek for the passage of marine boats and other oil and gas marine transport activities. Obama Creek was dredged to a depth of 3 m, generating a total of 35,950 m³ of dredge spoils using a suction dredger.
The environmental monitoring was intended to determine the impact of the dredging activity on the biological characteristics of dredge spoil, surface water, and sediment samples at the dredging point, including upstream and downstream locations. Environmental monitoring of the biological and physico-chemical characteristics of the surface water, sediment, and dredge spoil samples after demobilization of the dredging activity at Obama Creek in Bayelsa State is the aim of this work. The objectives of this study were to: acquire geospatial data of the monitoring locations; sample surface water, sediments/soil, and dredge spoils at designated points; and conduct analytical assessment of the collected samples.

Study Area The study area, Obama Creek, is in Nembe South Local Government Area of Bayelsa State. Obama Creek lies in the freshwater swamp forest characterized by a thick forest belt and low-lying lands that are subject to seasonal flooding. It is criss-crossed by creeks and creeklets, which receive some tidal water from the Atlantic Ocean through the southern axis. Thus, in most parts, it is a large expanse of freshwater or riparian forest. However, in the southern axis, patches of mangrove occupy the flanks of the creek ecosystem. Obama Creek is within the Niger Delta (Bayelsa State) humid tropical zone, which is defined by dry (usually November to March) and rainy (around April to October) seasons. The southern Nigerian rainy season comes from the Southwest trade wind from the coast of the Atlantic Ocean. Oguntoyinbo and Hayward (1987) describe the study area's dry period as mostly dusty, dominated by the cold Northeast trade wind moving from the northern (Sahara) desert and accompanied by a short harmattan season. The Niger Delta relative humidity is high from January to July, with values of 70 to 80%. According to Gobo (1998), the area's average atmospheric temperature is 25.5 °C during the rainy period and 30 °C in the dry months.

Materials and Methods Various equipment and materials were used. In addition to the equipment in Table 1 below, a GPSmap 60CSx was used to capture GPS coordinates of the sample locations. Satellite imagery was downloaded from the Google Earth Professional desktop software, and a GPS receiver was used to capture the longitude and latitude of each sampling location. The samples were collected using appropriate equipment (see Table 2) and analysed using standard laboratory methods. The satellite imagery was then geo-referenced in ArcGIS 10.1, and the analytical results were presented in sample and geodatabase tabular data formats (see Figure 4 for the methodology flowchart).

Results The nutrients sulphate, phosphate, and nitrate show values of 7.60-8.56 mg/L, 0.050-0.080 mg/L, and 0.051-0.076 mg/L, respectively. The heavy metals Cd, Pb, and Cr have values of 0.011-0.026 mg/L, 0.119-0.128 mg/L, and 0.064-0.094 mg/L, respectively. Mercury was below the equipment detection limit of 0.001 mg/L. Ni had values ranging from 0.061 to 0.072 mg/L, while values obtained for Zn ranged from 0.194 to 0.232 mg/L. Total and faecal coliform values range from <2.00 to 30.0 MPN/100 mL and <2.00 to 23.00 MPN/100 mL, respectively. Sediment and dredge spoil results obtained at the five locations show pH values from 5.40 to 6.97.
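A minimal sketch of the screening step implied above: tabulating the measured surface-water ranges and flagging them against regulatory limits. The limit values in the dictionary below are placeholders for illustration, not the actual DPR or other regulatory thresholds applied in the study.

```python
import pandas as pd

# Tabulate sampled parameter ranges and flag the maximum observed value
# against a regulatory limit (placeholder limits, not the study's thresholds).
samples = pd.DataFrame({
    "parameter": ["Sulphate", "Phosphate", "Nitrate", "Cd", "Pb", "Zn"],
    "min_mg_l":  [7.60, 0.050, 0.051, 0.011, 0.119, 0.194],
    "max_mg_l":  [8.56, 0.080, 0.076, 0.026, 0.128, 0.232],
})
limits = {"Sulphate": 200.0, "Phosphate": 3.5, "Nitrate": 10.0,
          "Cd": 0.05, "Pb": 0.5, "Zn": 1.0}  # hypothetical limits [mg/L]

samples["limit_mg_l"] = samples["parameter"].map(limits)
samples["within_limit"] = samples["max_mg_l"] <= samples["limit_mg_l"]
print(samples)
```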
Results of the heavy metal analysis of sediment and dredge spoil showed Cd, Pb, and Cr values ranging from 0.321-0.512 mg/kg, 1.19-1.52 mg/kg, and 1.00-1.16 mg/kg, respectively. Hg was below the equipment detection limit of <0.001 mg/kg. Ni had values ranging from 1.01 to 1.29 mg/kg, while values for Zn ranged from 2.11 to 2.40 mg/kg. The total and faecal coliform values for sediment ranged from 2.00-90.0 MPN/100 mg and <2.00-13.0 MPN/100 mg, respectively, which is within the acceptable limit.

Conclusions The outcome of the environmental monitoring research at Obama Creek, from data capture and collection to laboratory analyses, showed that the surface water and sediment values were within acceptable limits; therefore, the impact on the environment, based on the samples examined in accordance with both Nigerian and international regulatory requirements, is minimal. However, air quality, vegetation, and aquatic animals require assessment for effective environmental sustainability, management, and balancing of the ecosystem. The integration of locational data enhances visualization using a Geographical Information System (GIS).

Figures 1-2. Map of Nigeria with the location of Bayelsa State. Table 2: Standard test methods used for laboratory analysis. Table 3: Coordinates of collected samples. Table 4: Physico-chemical and microbiological characteristics of surface water samples. Figure 5: pH and DPR limit values for surface water samples. Table 5: Physico-chemical and microbiological characteristics of sediment/dredge spoil samples.
Using the Multi-Response Method with Desirability Functions to Optimize the Zinc Electroplating of Steel Screws

Zinc electroplating is a coating process controlled by several input process parameters. The commonly used input parameters for setting the process of zinc deposition are current density, temperature of the coating solution, zinc concentration, deposition time, and concentration of additives (conditioner and brightener). The power consumed in the zinc plating process, coating thickness, increase in coating mass, and corrosion resistance are considered to be outputs, or zinc coating parameters. They are widely used when the zinc coating requirements are based on the coating process cost, coating process speed, corrosion resistance, and coating thickness. This paper seeks to determine regression models by the response surface method (RSM) that relate the zinc coating parameters to the input parameters in steel screws. When considering the coating requirements of cost, coating process speed, corrosion resistance, and coating thickness, the optimal input parameters were found by using a multi-response surface (MRS). Input parameters of 0.3 amps/dm², 20.0 °C, 13.9 g/L, 45 min, 28.5 mL/L, and 2.8 mL/L (for the input parameters listed above, in the same order) were obtained when considering the cost. Considering minimization of the deposition time, the input parameters obtained were 0.5 amps/dm², 24.6 °C, 13.9 g/L, 45 min, 26.9 mL/L, and 1.1 mL/L, respectively. The optimal inputs to maximize the corrosion resistance were 0.6 amps/dm², 32.4 °C, 14.0 g/L, 45 min, 28.7 mL/L, and 2.5 mL/L, respectively. Finally, when maximizing the coating thickness, the inputs were 0.7 amps/dm², 38.4 °C, 12.2 g/L, 45 min, 26.5 mL/L, and 1.5 mL/L, respectively.

Introduction Zinc electroplating is one of the most commonly used methods to protect steel from corrosion, because it is a low-cost fabrication process in comparison to other deposition technologies. Thus, it is the preferred choice for many companies that keep a close eye on expenditures. The main way in which zinc protects steel from corrosion is by sacrificial protection: the zinc coating corrodes first, instead of the metallic substrate, in order to increase the latter's corrosion resistance [1-3]. One of the most important industrial applications of zinc electroplating is found in the automotive sector, where it provides corrosion protection to brake pipes, brake calipers, and power steering components. It can also be employed in the military sector (tanks and armored personnel carriers) or as a protective coating prior to painting, for better adhesion of paint to steel surfaces [4,5]. However, considerable effort continues to be devoted to the development and implementation of new surface finishing processes for zinc plating, due to the unending requirements of industry (especially the automotive sector) for longer service life and better corrosion resistance in harsher environments [6-8]. Several strategies have been used to improve the corrosion resistance of zinc electroplating, such as using zinc alloy coatings [9,10]. As an example, Short et al.
[9] demonstrated that the use of zinc-nickel electrodeposits that contain about 12-13% Ni promotes an increase in corrosion resistance compared with pure zinc electrodeposits. However, the effect of different factors, such as temperature, current density, time, zinc concentration, and additive concentration, on the corrosion rate has not yet been established.

This process involves the electrolytic deposition of a thin coating of zinc onto the surface of the metal to be protected, which is known as the substrate. The process is governed by several factors; however, current density (ρ), temperature (T), and concentration of the zinc deposit (C) have the most significant influence on zinc deposition [11]. The current density (ρ) has a significant impact on the thickness of the zinc coating (Th). As a general rule, the thickness of the zinc coating rises as the current density increases [12]. However, if the current density (ρ) exceeds a threshold value, a rough surface of the zinc deposit is generated, giving it a lower corrosion resistance (R) than if the surface were smooth [13]. At a higher temperature (T), there is an increase in the diffusion of hydrogen at the cathode, and the absorption of hydrogen in metals is a serious problem during zinc electroplating. In fact, it is often said that a zinc layer acts as a barrier against hydrogen absorption, which greatly improves the mechanical and anticorrosive characteristics of the zinc coating. In addition, when the temperature (T) and the current density (ρ) increase at the same time, the deposited zinc is much brighter. On the other hand, when the temperature (T) increases and the current density (ρ) remains constant, the zinc coating is irregular, because the zinc crystals that are deposited on the substrate are very large [14]. The concentration of zinc (C) in the coating solution also affects the brightness and surface finish of the zinc deposit. Higher concentrations (C) will produce a rougher surface with large zinc crystals, whereas lower concentrations will provide a brighter finish with finer and more corrosion-resistant crystals [15]. In recent decades, some scientific studies have been conducted to determine which factors are most important in improving the performance of the zinc plating process [16,17].
However, these studies have not been supported by multivariate statistical techniques that investigate how one factor influences the others. One of the most frequently used techniques for studying the influence of one factor, or input, on the others, and then optimizing the combination of factors to obtain the best output, is response surface methodology (RSM) [18,19]. When there is more than one output, several response surfaces should be optimized using the multi-response surface method (MRS) [20]. In this paper, a group of regression models based on the RSM were used to relate the zinc coating requirements (outputs) to the zinc coating process parameters (inputs) for steel screws. The latter were current density (ρ), temperature of the coating solution (T), zinc concentration (C), deposition time (t), concentration of Additive 1 (Envirozin Conditioner (CA1)), and concentration of Additive 2 (Envirozin 100 Initial Brightener (CA2)). Then, while considering zinc coating requirements based on cost, manufacturing speed, corrosion resistance, and coating thickness, the optimal input process parameters were found by using the multi-response surface (MRS) with desirability functions. The power consumed in the zinc plating process (W) and the increase in coating mass (∆M) are the parameters used when the zinc coating requirements are based on cost. The deposition time (t) is the parameter used when the requirements are based on the electroplating process speed. Corrosion resistance (R) is the parameter used when the requirements are based on corrosion resistance. Finally, coating thickness (Th) is the parameter used when the requirements are based on the coating thickness. This paper concentrates on a study of the zinc electroplating process in steel screws for the following ranges: current density (ρ), 2-3 amps/dm²; temperature (T), 30-40 °C; concentration of zinc deposit (C), 8-14 g/L; time (t), 30-60 min; Concentration of Additive 1 (CA1), 8-10 mL/L; and Concentration of Additive 2 (CA2), 1-3 mL/L. Figure 1 shows a scheme in which all inputs and outputs that were considered in this work are used for modeling and optimizing the zinc electroplating process of screws.
Modeling and Optimizing Using the RSM with Desirability Functions The RSM seeks to determine the relationships of input variables (independent variables) to output variables (response variables). It was developed as a means to model experimental responses. Box and Wilson introduced the method in 1951 [18] to create a model for the optimal response with the data provided by experiments. It has been used recently, along with other techniques, for the optimization of products and industrial processes [21-24]. In essence, the RSM is a collection of statistical techniques that uses a regression model based on a low-degree polynomial function (Equation (1)):

$$Y = f(X_1, X_2, X_3, \ldots, X_k) + e \qquad (1)$$

where Y is an experimental response, $(X_1, X_2, X_3, \ldots, X_k)$ is the input vector, e is an error term, and f is a function of cross-products of the polynomial's terms. The quadratic (second-order) model is a widely used polynomial function. It appears in Equation (2):

$$Y = b_0 + \sum_{i=1}^{k} b_i X_i + \sum_{i=1}^{k} b_{ii} X_i^2 + \sum_{i<j} b_{ij} X_i X_j + e \qquad (2)$$

where the linear part is the first summation, the quadratic part is the second, and the product of the pairs of variables is the third. The coefficients $b_0$, $b_i$, $b_{ii}$, and $b_{ij}$ are determined using regression analysis. However, satisfactory results are not always obtained from these functions for complex problems that have many inputs and nonlinearities, because the functions are continuous polynomials: if the data are insufficient, the functions cannot be adjusted. The p-value (or Prob. > F) is the probability of receiving a result that equals or exceeds what was observed, assuming that the model is accurate. It can be determined by analysis of variance (ANOVA). If the Prob. is greater than the F value of the model and the model has no term with a level of significance that exceeds, for example, α = 0.05, the model will be acceptable at a confidence interval of (1 − α). Some researchers have employed ANOVA to determine the influence of the inputs, or process parameters, on the zinc plating process outputs [25,26]. If there is more than one output, the problem is
termed an MRS. This implies that the outputs may be in disagreement: there can be a great difference between the optimal configurations for different outputs. Harrington [27] proposed a compromise between outputs, comprising desirability functions for each output, as shown in Equations (3) and (4), and an overall desirability measure, namely the geometric mean D of each output's desirability (Equation (5)):

$$d_r = \left(\frac{f_r(X) - A}{B - A}\right)^s, \quad 0 \le d_r \le 1 \qquad (3)$$

$$d_r = \left(\frac{B - f_r(X)}{B - A}\right)^s, \quad 0 \le d_r \le 1 \qquad (4)$$

$$D = \left(\prod_{r=1}^{R} d_r\right)^{1/R} \qquad (5)$$

In the equations above, A and B are limit values and the exponent s determines the importance of achieving the target value; X is the input vector and $f_r$ is the model used in the prediction. To optimize one or more responses, one should use a higher-degree polynomial [28]. The desirability approach requires that each estimated response be transformed into a unitless utility whose boundaries are $0 < d_r < 1$, where higher values of $d_r$ indicate more desirable response values. The optimization portion of the R package (v.1.6) looks for a combination of factors (or weights in the range 1-3) that simultaneously satisfies the optimization criteria of all responses and inputs.

Electroplating Process Factors Examined by Use of RSM Researchers have previously employed RSM to identify an optimal combination of process parameters, or inputs, for the electroplating process. However, most of their works have been based on modelling and optimizing the electroplating process with relatively few input and output parameters. For example, Oraon et al. [29] used multi-response optimization of the nickel electroplating process to model the deposited mass per unit area (g/cm²), considering the concentration of NiCl2·6H2O, the concentration of NaBH4, and the temperature (°C) as the nickel coating process input parameters. In this case, they observed that reducing the concentration of NiCl2·6H2O, the concentration of NaBH4, and the temperature significantly influenced the deposition of the nickel coating. Santana et al. [30] studied the optimization of the electrolytic bath for electro-deposition of corrosion-resistant Fe-W-B alloys using multi-response optimization. In their case, a full factorial design was considered for the design of the experiments. Other authors, such as Poroch et al.
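A minimal sketch of Equations (3)-(5): each response is mapped to a unitless desirability and the responses are combined by a geometric mean. The example responses, limits, and weights below are illustrative assumptions, not values from this study.

```python
import numpy as np

# Harrington/Derringer-style desirability functions (Eqs 3-5).
def d_max(y, A, B, s=1.0):
    """Desirability of a response to be maximized (fully desirable above B)."""
    return np.clip((y - A) / (B - A), 0.0, 1.0) ** s

def d_min(y, A, B, s=1.0):
    """Desirability of a response to be minimized (fully desirable below A)."""
    return np.clip((B - y) / (B - A), 0.0, 1.0) ** s

def overall_D(*ds):
    """Overall desirability: geometric mean of the individual desirabilities."""
    ds = np.asarray(ds)
    return ds.prod() ** (1.0 / len(ds))

# Example: maximize corrosion resistance R and thickness Th, minimize power W.
d1 = d_max(y=820.0, A=500.0, B=1000.0)    # R (illustrative units and limits)
d2 = d_max(y=14.0,  A=5.0,   B=20.0)      # Th [um]
d3 = d_min(y=3.2,   A=2.0,   B=6.0, s=2)  # W; s > 1 weights this goal more
print(f"d = ({d1:.2f}, {d2:.2f}, {d3:.2f}) -> D = {overall_D(d1, d2, d3):.2f}")
```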
[31], used multi-response optimization combined with a genetic algorithm to optimize the nickel electroplating process in order to improve the cathode efficiency, coating thickness, brightness, and hardness of the metallic layer.They used the nickel coating process input parameters of current density (amps/dm 2 ), temperature T ( • C), and pH.More recently, Poroch et al. [32] studied the modelling and optimization of nickel-iron electroplating process variables to maximize the surface hardness while considering current density (amps/dm 2 ), temperature ( • C), and pH as input factors.Also, Poroch et al. [33] used the design of experiments and response surface methodology to model and optimize an Fe-Ni electroplating process from a chloride-sulphate bath.They optimized the Fe-Ni electroplating process by using the desirability function approach. Experimental Setup and Results Before undertaking the zinc electroplating process, all steel screws were subjected to a standard zinc phosphating process.This was done to provide a foundation for improvement of the adhesion of the coating to be applied to steel parts.Then, each of the phosphate screws was weighed on a precision balance to determine its initial mass prior to application of the zinc coating.The zinc electroplating process was then carried out in an isothermal container that was connected to an adjustable heater, into which each proposed solution was poured.These solutions were proposed based on zinc concentration (C), sodium hydroxide concentration, additive concentration 1 (CA1; Envirozin Conditioner), and additive concentration 2 (CA2, Envirozin 100 Initial Brightener).The positive pole of the power supply was connected to a pure zinc plate that served as a cathode.The negative pole was connected to the already phosphated steel screws, which served as the anode.Once the temperature of the previously proposed solution had been reached, each of the screws was completely immersed in this specific solution.Next, the power supply was connected and the values of intensity and voltage for the deposition of the proposed zinc coatings were adjusted until the values of the proposed current densities (ρ) were reached.The current density values were obtained from the current provided by the power source and from the surface of the screw.The current was measured using the power supply's ammeter, whereas the surface of the screws was obtained theoretically using Catia v5 R18 (Woodlands Hills, CA, USA) [34].However, for the purpose of always keeping the exposed surface of the screws unchanged, the position of the screws inside the isothermal container was also left unchanged, as well as the distance between the cathode and the anode.Figure 2 shows the proposed installation of the zinc electroplating of the phosphate screws. 
Experimental Setup and Results Before undertaking the zinc electroplating process, all steel screws were subjected to a standard zinc phosphating process.This was done to provide a foundation for improvement of the adhesion of the coating to be applied to steel parts.Then, each of the phosphate screws was weighed on a precision balance to determine its initial mass prior to application of the zinc coating.The zinc electroplating process was then carried out in an isothermal container that was connected to an adjustable heater, into which each proposed solution was poured.These solutions were proposed based on zinc concentration (C), sodium hydroxide concentration, additive concentration 1 (CA1; Envirozin Conditioner), and additive concentration 2 (CA2, Envirozin 100 Initial Brightener).The positive pole of the power supply was connected to a pure zinc plate that served as a cathode.The negative pole was connected to the already phosphated steel screws, which served as the anode.Once the temperature of the previously proposed solution had been reached, each of the screws was completely immersed in this specific solution.Next, the power supply was connected and the values of intensity and voltage for the deposition of the proposed zinc coatings were adjusted until the values of the proposed current densities (ρ) were reached.The current density values were obtained from the current provided by the power source and from the surface of the screw.The current was measured using the power supply's ammeter, whereas the surface of the screws was obtained theoretically using Catia v5 R18 (Woodlands Hills, CA, USA) [34].However, for the purpose of always keeping the exposed surface of the screws unchanged, the position of the screws inside the isothermal container was also left unchanged, as well as the distance between the cathode and the anode.Figure 2 shows the proposed installation of the zinc electroplating of the phosphate screws.After the time (t) for the deposition of the coating had elapsed, the power supply was disconnected, and the zinc-plated screw was withdrawn from the solution and then submerged in pure water to remove the dissolution residues from its surface.Then, the zinc-plated screw was immersed for 30 s in a passivating product (TRIPASS ECO 3) and subsequently immersed for 60 s in a sealant product (Hydroklad 30).After the zinc-plated screws that had been treated with the passivant and sealant were completely dry, they were weighed again on the precision scale.The difference in masses between the phosphate and galvanized screws was the coating mass increase (ΔM).The coating thickness measurement (Th) was conducted by means of a nondestructive technique that is based on the magnetic induction phenomenon and is in accordance with ASTM B499-09 [35].A measuring device Minitest (Model 1100, Elektrophysik, Cologne, Germany) that was equipped with an FN 1.6-type probe was selected in this case for use when dealing with zinc coatings on steel substrates.The measurement of the coatings was made at four different points on each of the galvanized screws and an average value of the coatings was subsequently calculated.In order to After the time (t) for the deposition of the coating had elapsed, the power supply was disconnected, and the zinc-plated screw was withdrawn from the solution and then submerged in pure water to remove the dissolution residues from its surface.Then, the zinc-plated screw was immersed for 30 s in a passivating product (TRIPASS ECO 3) and subsequently immersed for 60 
s in a sealant product (Hydroklad 30).After the zinc-plated screws that had been treated with the passivant and sealant were completely dry, they were weighed again on the precision scale.The difference in masses between the phosphate and galvanized screws was the coating mass increase (∆M).The coating thickness measurement (Th) was conducted by means of a nondestructive technique that is based on the magnetic induction phenomenon and is in accordance with ASTM B499-09 [35].A measuring device Minitest (Model 1100, Elektrophysik, Cologne, Germany) that was equipped with an FN 1.6-type probe was selected in this case for use when dealing with zinc coatings on steel substrates. The measurement of the coatings was made at four different points on each of the galvanized screws and an average value of the coatings was subsequently calculated.In order to validate and adjust this method of measuring the thickness, the thickness of the coatings of several of the zinc-plated steel screws was measured by means of destructive tests based on metallographic methods according to the ASTM E3-95 standard [36].The corrosion resistance of all samples obtained by the zinc electroplating process was evaluated by taking electrochemical measurements using a potentiodynamic polarization technique [37].Open-circuit potential (OCP) measurement and linear potential scan experiments were chosen as the electrochemical measurement methods.The measurements were conducted using an AUTOLAB-PGSTAT computer-controlled potentiostat (Metrohm, Herisau, Switzerland) in a naturally aerated 3.5 wt.% NaCl solution at room temperature.For this purpose, a conventional three-electrode cell was used with a graphite bar as counter electrode, an Ag/AgCl/3 M KCl electrode as reference electrode, and the specimen (steel screw) as a working electrode [38][39][40][41][42].A glass cell containing 150 mL of 3.5 wt.% NaCl solution (corrosive electrolyte) was used for each electrochemical experiment.Figure 3a shows the connections made to each of the electrodes while Figure 3b shows the placement of each of the electrodes inside the glass cell. Metals 2018 6, x 6 of 20 validate and adjust this method of measuring the thickness, the thickness of the coatings of several of the zinc-plated steel screws was measured by means of destructive tests based on metallographic methods according to the ASTM E3-95 standard [36].The corrosion resistance of all samples obtained by the zinc electroplating process was evaluated by taking electrochemical measurements using a potentiodynamic polarization technique [37].Open-circuit potential (OCP) measurement and linear potential scan experiments were chosen as the electrochemical measurement methods.The measurements were conducted using an AUTOLAB-PGSTAT computer-controlled potentiostat (Metrohm, Herisau, Switzerland) in a naturally aerated 3.5 wt.% NaCl solution at room temperature. 
For this purpose, a conventional three-electrode cell was used with a graphite bar as counter electrode, an Ag/AgCl/3 M KCl electrode as reference electrode, and the specimen (steel screw) as a working electrode [38][39][40][41][42].A glass cell containing 150 mL of 3.5 wt.% NaCl solution (corrosive electrolyte) was used for each electrochemical experiment.Figure 3a shows the connections made to each of the electrodes while Figure 3b shows the placement of each of the electrodes inside the glass cell.The open-circuit potential for each sample was measured until a steady-state value was reached.Then, a linear potential sweep in the anodic direction was conducted at a scan rate of 1 mV/s, but beginning at 0.1 V below OCP and terminating at 0.1 V above the OCP.The output from these experiments yielded a polarization curve of the current density versus the applied potential.The resulting corrosion current can be calculated by using Tafel slope analysis where the relationship between the current density and the electrode potential during the polarization is obtained by the following equation (Equation ( 6)): where E is the electrode potential, I is the measured current density, ŋ is the difference between the applied electrode potential and the corrosion potential, Ecorr is the corrosion potential of the corroding metal, Icorr is the corrosion current, and ba, bc are the Tafel constants of anodic and cathodic half-cell reactions, respectively.Values for Tafel plots are derived from the logarithm of current density values as a function of voltage.More details of the fitting method for obtaining the corrosion parameters can be found elsewhere [43].The corrosion data that were obtained from Tafel polarization curves were obtained by superimposing a straight line on the linear portions of the cathodic and anodic curves.It is important to note that the Tafel polarization curve is the most efficient method for detecting the anticorrosion performance of metal surfaces.In this sense, an excellent corrosion resistance is associated with a lower corrosion rate, which corresponds to a higher The open-circuit potential for each sample was measured until a steady-state value was reached.Then, a linear potential sweep in the anodic direction was conducted at a scan rate of 1 mV/s, but beginning at 0.1 V below OCP and terminating at 0.1 V above the OCP.The output from these experiments yielded a polarization curve of the current density versus the applied potential.The resulting corrosion current can be calculated by using Tafel slope analysis where the relationship between the current density and the electrode potential during the polarization is obtained by the following equation (Equation ( 6)): 303 Metals 2018 6, x 6 of 20 validate and adjust this method of measuring the thickness, the thickness of the coatings of several of the zinc-plated steel screws was measured by means of destructive tests based on metallographic methods according to the ASTM E3-95 standard [36].The corrosion resistance of all samples obtained by the zinc electroplating process was evaluated by taking electrochemical measurements using a potentiodynamic polarization technique [37].Open-circuit potential (OCP) measurement and linear potential scan experiments were chosen as the electrochemical measurement methods.The measurements were conducted using an AUTOLAB-PGSTAT computer-controlled potentiostat (Metrohm, Herisau, Switzerland) in a naturally aerated 3.5 wt.% NaCl solution at room temperature.For this purpose, a conventional 
three-electrode cell was used with a graphite bar as counter electrode, an Ag/AgCl/3 M KCl electrode as reference electrode, and the specimen (steel screw) as a working electrode [38][39][40][41][42].A glass cell containing 150 mL of 3.5 wt.% NaCl solution (corrosive electrolyte) was used for each electrochemical experiment.Figure 3a shows the connections made to each of the electrodes while Figure 3b shows the placement of each of the electrodes inside the glass cell.The open-circuit potential for each sample was measured until a steady-state value was reached.Then, a linear potential sweep in the anodic direction was conducted at a scan rate of 1 mV/s, but beginning at 0.1 V below OCP and terminating at 0.1 V above the OCP.The output from these experiments yielded a polarization curve of the current density versus the applied potential.The resulting corrosion current can be calculated by using Tafel slope analysis where the relationship between the current density and the electrode potential during the polarization is obtained by the following equation (Equation ( 6)): where E is the electrode potential, I is the measured current density, ŋ is the difference between the applied electrode potential and the corrosion potential, Ecorr is the corrosion potential of the corroding metal, Icorr is the corrosion current, and ba, bc are the Tafel constants of anodic and cathodic half-cell reactions, respectively.Values for Tafel plots are derived from the logarithm of current density values as a function of voltage.More details of the fitting method for obtaining the corrosion parameters can be found elsewhere [43].The corrosion data that were obtained from Tafel polarization curves were obtained by superimposing a straight line on the linear portions of the cathodic and anodic curves.It is important to note that the Tafel polarization curve is the most efficient method for detecting the anticorrosion performance of metal surfaces.In this sense, an excellent corrosion resistance is associated with a lower corrosion rate, which corresponds to a higher ba − e 2.303 Metals 2018 6, x 6 of 20 validate and adjust this method of measuring the thickness, the thickness of the coatings of several of the zinc-plated steel screws was measured by means of destructive tests based on metallographic methods according to the ASTM E3-95 standard [36].The corrosion resistance of all samples obtained by the zinc electroplating process was evaluated by taking electrochemical measurements using a potentiodynamic polarization technique [37].Open-circuit potential (OCP) measurement and linear potential scan experiments were chosen as the electrochemical measurement methods.The measurements were conducted using an AUTOLAB-PGSTAT computer-controlled potentiostat (Metrohm, Herisau, Switzerland) in a naturally aerated 3.5 wt.% NaCl solution at room temperature.For this purpose, a conventional three-electrode cell was used with a graphite bar as counter electrode, an Ag/AgCl/3 M KCl electrode as reference electrode, and the specimen (steel screw) as a working electrode [38][39][40][41][42].A glass cell containing 150 mL of 3.5 wt.% NaCl solution (corrosive electrolyte) was used for each electrochemical experiment.Figure 3a shows the connections made to each of the electrodes while Figure 3b shows the placement of each of the electrodes inside the glass cell.The open-circuit potential for each sample was measured until a steady-state value was reached.Then, a linear potential sweep in the anodic direction was conducted at a 
scan rate of 1 mV/s, beginning at 0.1 V below the OCP and terminating at 0.1 V above the OCP. The output from these experiments yielded a polarization curve of the current density versus the applied potential. The resulting corrosion current can be calculated by Tafel slope analysis, in which the relationship between the current density and the electrode potential during polarization is given by Equation (6):

I = Icorr [exp(2.303η/ba) − exp(−2.303η/bc)] (6)

where E is the electrode potential, I is the measured current density, η = E − Ecorr is the difference between the applied electrode potential and the corrosion potential, Ecorr is the corrosion potential of the corroding metal, Icorr is the corrosion current, and ba and bc are the Tafel constants of the anodic and cathodic half-cell reactions, respectively. Values for the Tafel plots are derived from the logarithm of the current density as a function of voltage. More details of the fitting method for obtaining the corrosion parameters can be found elsewhere [43]. The corrosion data were obtained from the Tafel polarization curves by superimposing a straight line on the linear portions of the cathodic and anodic branches. It is important to note that the Tafel polarization curve is an efficient method for assessing the anticorrosion performance of metal surfaces: excellent corrosion resistance is associated with a lower corrosion rate, which corresponds to a higher corrosion potential (Ecorr) or a lower corrosion current density (Icorr), respectively [44,45].
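To make the fitting procedure concrete, the following is a minimal sketch, assuming synthetic polarization data that obey Equation (6); straight lines are fitted to the linear Tafel regions of log10|i| versus E, and their intersection recovers Ecorr and Icorr. All names and values here are illustrative and do not reproduce the AUTOLAB analysis used in this work.

import numpy as np

# Synthetic polarization curve obeying Equation (6) (illustrative parameters)
E_corr, i_corr = -0.45, 1.0e-6   # corrosion potential (V), corrosion current density (A/cm^2)
b_a, b_c = 0.06, 0.12            # anodic/cathodic Tafel constants (V/decade)
E = np.linspace(E_corr - 0.1, E_corr + 0.1, 400)  # +/- 0.1 V around Ecorr, as in the protocol
eta = E - E_corr
i = i_corr * (np.exp(2.303 * eta / b_a) - np.exp(-2.303 * eta / b_c))

# Superimpose straight lines on the linear portions of the anodic and cathodic branches
anodic, cathodic = eta > 0.06, eta < -0.06
pa = np.polyfit(E[anodic], np.log10(np.abs(i[anodic])), 1)
pc = np.polyfit(E[cathodic], np.log10(np.abs(i[cathodic])), 1)

# The intersection of the two lines gives Ecorr and log10(Icorr)
E_x = (pc[1] - pa[1]) / (pa[0] - pc[0])
print(f"fitted Ecorr = {E_x:.3f} V, Icorr = {10 ** np.polyval(pa, E_x):.2e} A/cm^2")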
Finally, other corrosion parameters, such as the equivalent weight of the metal, its density, and the exposed surface, are required as input parameters. With this information, the AUTOLAB software (Model 30, PalmSens, Houten, The Netherlands) generates the complete set of corrosion parameters. Thus, the corrosion rate is calculated according to Equation (8):

Corrosion rate = 327 · Icorr · M / (V · D · A) (8)

where 327 = 1 year (in seconds)/96,500, and 96,500 = 1 F in coulombs. Icorr is the corrosion current, determined by the intersection of the linear portions of the anodic and cathodic sections of the Tafel curves, M is the atomic mass, V is the valence (the number of electrons that are lost during the oxidation reaction), D is the density, and A is the exposed area of the sample [44].

Design of Experiments

To provide accurate models without a great deal of data to support the original hypotheses, the RSM must establish a Design of Experiments (DoE) [46]. There are several methods to develop a DoE; however, they all require that a design matrix (inputs) be constructed for the measurement of the outputs or experiment responses. In this case, we used a Box-Behnken design (BBD) [47] with six factors at three levels to develop the experiment. The input process parameters used to develop the DoE were current density (ρ), temperature (T), time (t), concentration of zinc (C), concentration of Additive 1 (Envirozin Conditioner (CA1)), and concentration of Additive 2 (Envirozin 100 Initial Brightener (CA2)). The outputs were the power consumed in the zinc plating process (W), the coating thickness (Th), the increase in the coating mass (∆M), and the corrosion resistance (R). The range of study considered for each of the input process parameters was based on a preliminary group of phosphate screws to which a zinc coating was applied. The coating was applied to ensure that the galvanized screws had no defects or imperfections. Coatings were rejected if they presented a very irregular and/or rough surface, a reduced coating thickness (Th), a high temperature (T), or an excessive deposition time (t). For example, Figure 4 shows some of the metallographic analyses obtained from the preliminary zinc coatings, examined with a 50× microscope. Figure 4a shows a zinc coating with a thickness of 20 to 25 µm. The coating was formed of irregular zinc crystals, which create a very rough surface finish. In this case, and according to [11], the current density considered was high (1.3 amps/dm²), whereas the solution temperature considered was low (20 °C).
Figure 4b shows a coating with a thickness of 25 to 30 µm. In this case, the temperature was excessive (48 °C), whereas the current density considered was 0.5 amps/dm². According to [14], the coating surface was very rough and contained large zinc crystals. Figure 4c shows a homogeneous coating formed by small crystals, with a thickness of 7 to 8 µm. In this case, the values of current density, temperature, and deposition time were 0.32 amps/dm², 30 °C, and 20 min, respectively. The crystals that formed were of reduced size. According to [15], such a coating provides a brighter finish with finer crystals and is more corrosion resistant. In contrast, the coating had a reduced thickness, due mainly to the reduced deposition time. Finally, Figure 4d shows a homogeneous coating formed by small crystals, with a thickness of 24 to 25 µm. The values of current density, temperature, and deposition time considered in this case were 0.3 amps/dm², 25 °C, and 90 min, respectively. The homogeneous coating was formed by small crystals with a brighter finish and higher corrosion resistance [17]. The search process to fix the parameters that did not generate defects or imperfections on the galvanized screws was carried out successively. It included other parameters, such as the concentration of zinc (C), the concentration of Additive 1 (CA1), and the concentration of Additive 2 (CA2). After these were discarded, the ranges of all input process parameters were set. The input process parameters and their limits, as well as the notation, appear in Table 1. Use of the statistical open source software R (r-project) [48] and the input parameters and levels that appear in Table 1 led to the manufacture of 54 zinc-electroplated steel screws with the corresponding inputs and outputs obtained experimentally (Table 2). The data in Table 2 were used to fit Equations (9)-(12) to provide regression equations for all responses. The RMS "R" package [28] was employed for this. Second-order polynomial models were constructed for each response. Selection of the most accurate model involved several criteria: R², p-value, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). The second-degree polynomial functions that model the power consumption of the zinc plating process (W), the increase in coating mass (∆M), the zinc coating thickness (Th), and the corrosion resistance (R) are shown in the equations. These equations show that each output is provided by a combination of second-order polynomials, formed, in turn, by combinations of the input variables.
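As an illustration of this modeling step, the following is a minimal sketch of fitting a second-order response-surface model, assuming a generic coded 54-run, six-factor design and a synthetic response in place of the actual data of Table 2. The paper itself used R [48]; Python with scikit-learn is used here purely for illustration.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Coded design matrix: 54 runs x 6 factors (rho, T, t, C, CA1, CA2) in {-1, 0, +1},
# standing in for the Box-Behnken design of Table 1
X = rng.choice([-1.0, 0.0, 1.0], size=(54, 6))
# Synthetic response standing in for one output, e.g. the power W
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2] \
    + 0.6 * X[:, 0] ** 2 + rng.normal(0, 0.1, 54)

# Second-order polynomial model: linear, interaction, and squared terms
quad = PolynomialFeatures(degree=2, include_bias=False)
Xq = quad.fit_transform(X)
model = LinearRegression().fit(Xq, y)
print("R^2 =", r2_score(y, model.predict(Xq)))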
The results of the ANOVA for all final quadratic models appear in Tables 3-6. The p-value of most variables is less than 0.01; thus, the inputs used by the reduced quadratic models are statistically significant. Similarly, it can be seen in these tables that ρ is the most influential input for "W" (Table 3), since its p-value is <2.2 × 10⁻¹⁶, whereas CA1 is the most influential input for "∆M" (Table 4) and "Th" (Table 5), with p-values of 0.0025282 and 0.00005107, respectively. Additionally, the multiple correlation coefficient (R²) was used as a measure of the variation around the mean of the regression models' results. All values of R² were close to 1; thus, these models possess good predictive capacity. Also, MAE and RMSE were calculated in order to assess the quadratic models' generalization capacity, using the data in Table 2, according to Equations (13) and (14):

MAE = (1/m) Σ_{k=1..m} |Y_k^Experiment − Y_k^Model| (13)

RMSE = [(1/m) Σ_{k=1..m} (Y_k^Experiment − Y_k^Model)²]^{1/2} (14)

where Y_k^Experiment are the experimental responses, Y_k^Model are the responses obtained with the quadratic regression models, and m is the number of specimens. Among the prediction errors MAE and RMSE that appear in Table 7, the maximum error corresponded to Th (an MAE of 10.48% and an RMSE of 12.73%) and the minimum error corresponded to W (an MAE of 2.77% and an RMSE of 3.87%). Additionally, six new zinc-coated steel screws were created. They were used for testing the proposed regression models with previously unused parameter combinations. These six new steel screws were generated randomly. Table 8 shows the inputs and outputs of these six new zinc-coated steel screws. Once the six new zinc-coated steel screws were manufactured, the errors that arose during the testing stage were calculated (see Table 9). This table shows that the maximum error corresponds to Th (an MAE of 10.81% and an RMSE of 11.58%), whereas the minimum error corresponds to W (an MAE of 5.9% and an RMSE of 5.57%). These errors indicate that the fit of the regression models to the results of the zinc-coated steel screws is relatively accurate, and that the models have good generalization capacity. After the prediction errors of the regression models had been computed for the training and testing data, a scatter diagram of the variables was created. Figure 5 shows the scatter diagram, or relationship of the experimental values to the predicted values (quadratic models), for W, ∆M, Th, and R. The blue points correspond to the 54 datapoints shown in Table 2, whereas the red points correspond to the six additional zinc-coated steel screws, listed in Table 8, that were used to test the regression models. If the variables are correlated, the points will fall along the diagonal line or curve; the better the correlation, the tighter the points will hug the line. The figures indicate that all the red dots (test data) are closer to the diagonal line than are some of the blue dots (training data); therefore, their correlation is greater. Because the number of test datapoints is smaller than the number of training datapoints, the MAE and RMSE errors for the testing analysis and the training analysis are similar (see Tables 8 and 9). However, the variables with the greatest correlation were W and ∆M, whereas the variables with the lowest correlation were Th and R.
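Equations (13) and (14) amount to a few lines of code. The sketch below assumes hypothetical experimental and predicted responses and, since the paper reports the errors as percentages, scales them by the mean experimental value; that scaling is an assumption, as the normalization basis is not stated.

import numpy as np

y_exp = np.array([3.1, 2.8, 3.5, 2.9])   # hypothetical experimental responses
y_mod = np.array([3.0, 2.9, 3.3, 3.1])   # hypothetical model predictions

mae = np.mean(np.abs(y_exp - y_mod))            # Equation (13)
rmse = np.sqrt(np.mean((y_exp - y_mod) ** 2))   # Equation (14)
print(f"MAE = {100 * mae / y_exp.mean():.2f}%  RMSE = {100 * rmse / y_exp.mean():.2f}%")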
The reason for this may be that the procedure to obtain these variables is more complex than the one used to obtain W and ∆M, and, therefore, the error may be greater. From the figures and the errors shown in Tables 8 and 9, it can be said that these models suffice for the prediction of such values, as the residuals were small and the correlations of actual to predicted values were high.
Multi-Response Optimization

Tables 10-17 show the combinations of input parameters that were examined when looking for the optimal process for zinc electroplating of steel screws by means of the RMS "R" package and desirability functions, considering several optimization criteria or scenarios. The first column of Tables 10-17 gives the input and output zinc coating process parameter requirements that were studied. The second column indicates the optimization objective for inputs and outputs. The third and fourth columns show the minimum and maximum values that can be reached for the process parameters and the characteristics of the zinc coating (range) according to Table 1. Finally, the fifth column shows the values of the electroplating process parameters and characteristics of the zinc coating that are achieved, whereas the sixth column shows the obtained desirability values. The results of the optimal zinc electroplating process based on the coating process cost are shown in Table 10. In this case, the coating process cost was based on minimizing the power consumed (W), the temperature (T), and the coating mass (∆M); the value of the overall desirability was 0.98. Table 11 shows the results for the zinc electroplating process based on the coating process speed which, in turn, is based on the minimization of the deposition time (t). In this case, the value of the overall desirability was 1. Table 12 shows the results for the optimal zinc electroplating process based on maximizing the corrosion resistance of the coating (R). The goal that was established was the maximum, and the overall desirability was 1. Table 13 shows the results for the optimal zinc electroplating process based on maximizing the coating thickness (Th); the overall desirability in this case was 1. Table 14 shows the results for the optimal zinc electroplating process based on maximizing the resistance (R) while minimizing the temperature (T), the concentration (C), and the deposition time (t); the overall desirability in this case was 0.73. Table 15 shows the results for the optimal zinc electroplating process based on maximizing the thickness (Th) while minimizing the temperature (T), the concentration (C), and the time (t); in this case, the value of the overall desirability was 0.79. Finally, Table 16 shows the results for the optimal zinc electroplating process based on maximizing the thickness (Th) and the resistance (R) while minimizing the temperature (T) and the concentration (C); the overall desirability obtained in this last case was 0.74. The results show that the process parameters differ greatly for the various optimal zinc electroplating processes that were studied. After obtaining the proposed optimal zinc electroplating processes, seven new zinc-coated steel screws were manufactured in order to test the proposed methodology's accuracy. The manufacture of the zinc-coated steel screws followed the combinations of process parameters that appear in Tables 10-16, under conditions identical to those described in Section 4.
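As a sketch of the desirability-function approach used here, the following assumes Derringer-Suich one-sided desirability functions and a simple grid search over two coded factors, with hypothetical response models standing in for the fitted quadratic models; the actual optimization was performed with the R package named above.

import numpy as np

def d_max(y, lo, hi):   # desirability for a response to be maximized
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def d_min(y, lo, hi):   # desirability for a response to be minimized
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

# Hypothetical predictors on coded factors x (stand-ins for the fitted models)
f_R = lambda x: 2.0 + 1.2 * x[0] - 0.5 * x[1] ** 2   # resistance, to maximize
f_T = lambda x: 30.0 + 8.0 * x[1]                     # temperature, to minimize

best, best_x = -1.0, None
grid = np.linspace(-1, 1, 21)
for x0 in grid:
    for x1 in grid:
        x = (x0, x1)
        # Overall desirability = geometric mean of the individual desirabilities
        D = np.sqrt(d_max(f_R(x), 0.5, 3.5) * d_min(f_T(x), 20.0, 40.0))
        if D > best:
            best, best_x = D, x
print(f"optimum at x = {best_x}, overall desirability D = {best:.2f}")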
The values of the outputs or zinc coating parameters of these seven new screws are shown in Table 17. In order to examine the errors in predicting the outputs or zinc coating parameters for the seven optimal zinc coating criteria, the MAE and RMSE were computed on normalized data. Data are frequently normalized in statistical processing to convert all variables to a common scale (from 0 to 1). This transformation was effected by subtracting the minimum value from each original value and then dividing by the range of each variable, as per Equation (15):

Y_{k,norm} = (Y_k − Y_min)/(Y_max − Y_min) (15)

where Y_{k,norm} are the normalized values of the outputs or zinc coating parameters, applied both to the predictions of the models developed with RSM and to the experimental outputs. The error in the last two columns of Table 17 represents the MAE and RMSE normalized over all variables for each of the seven criteria studied, while the normalized MAE and RMSE in the last two rows of the table relate to the errors in the individual zinc coating parameters or outputs. For example, when the first criterion is considered (minimizing the power consumed), the errors obtained are smallest (MAE = 4.9% and RMSE = 7.3%). However, when the fifth criterion is considered (maximizing the resistance (R) while minimizing the temperature (T), the concentration (C), and the time (t)), the error is greatest (MAE = 7.7% and RMSE = 9.8%). The reason for this difference could be that both the experimental measurement and the regression model used to obtain the power consumed in the zinc electroplating process (W) are very precise (see Tables 7 and 9), so the total MAE and RMSE may be the lowest, whereas the experimental measurements and the regression models used to obtain the thickness (Th) and the corrosion resistance (R) simultaneously are not very precise (see Tables 7 and 9), so the total MAE and RMSE may be the highest. Similarly, the maximum errors for the zinc coating parameters or outputs are lower when predicting the power consumption (MAE = 2.7% and RMSE = 2.9%) and greater when predicting the thickness (MAE = 12.5% and RMSE = 13.4%). The MAE and RMSE values for all zinc-electroplated steel screw parameters or outputs are in acceptable agreement.
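Equation (15) is ordinary min-max scaling. A minimal sketch with hypothetical values:

import numpy as np

y_exp = np.array([12.0, 18.5, 25.0, 15.2])   # hypothetical experimental outputs
y_mod = np.array([12.8, 17.9, 24.1, 16.0])   # hypothetical RSM predictions
lo, span = y_exp.min(), y_exp.max() - y_exp.min()
n_exp = (y_exp - lo) / span                  # Equation (15)
n_mod = (y_mod - lo) / span
print("normalized MAE =", np.mean(np.abs(n_exp - n_mod)))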
Conclusions

This paper presents a methodology that permits the optimization of a zinc electroplating coating process for steel screws when several optimization scenarios are considered simultaneously. First, a DoE using a BBD determined the configuration for electroplating 54 zinc-coated steel screws. Using RSM, the power consumed in the zinc plating process, the coating thickness, the increase in coating mass, and the corrosion resistance were modeled by quadratic regression models as functions of the input parameters: current density, temperature of the coating solution, zinc concentration, deposition time, and concentration of additives (conditioner and brightener). The resulting models were tested and deemed acceptable. A multi-objective optimization study using the desirability function approach was then conducted. It considered several optimization criteria or scenarios, including the manufacturing cost, manufacturing speed, corrosion resistance, and coating thickness of the zinc plating process. In the optimization study results, the optimal values ranged from 0.3 amps/dm² to 0.7 amps/dm² for current density, 20 °C to 38.371 °C for temperature, 9.555 g/L to 14.0 g/L for concentration of zinc, 45.0 min to 89.717 min for time, 25.021 mL/L to 29.967 mL/L for conditioner (Additive 1), and 1.099 mL/L to 2.814 mL/L for brightener (Additive 2). The results suggest that optimal process parameters are found within a relatively small range when various design requirements are satisfied. This is particularly the case for time, given that most process instances were 45 min in length. Finally, seven zinc-electroplated steel screws with optimal zinc coating requirements were manufactured in order to test the proposed methodology's accuracy. The experimental and predicted results were found to be in good agreement.

Figure 1. Inputs and outputs that were considered in this work for the modelling and optimizing of the zinc electroplating process of screws.
Figure 2. Details of the isothermal container and the power supply used in electroplating of the phosphate screws.
Figure 3. Conventional three-electrode cell used for the electrochemical corrosion tests and the glass cell that contained the corrosive electrolyte (NaCl): (a) Connections made to each of the electrodes; (b) Placement of each of the electrodes inside the glass cell.
Figure 4. Metallographic analyses of the preliminary zinc coating: (a) surface finish of the coating with a very rough surface finish; (b) very rough surface with large zinc crystals; (c) reduced homogeneous coating formed by small crystals and with a very smooth surface; and (d) homogeneous coating formed by small crystals and with a very smooth surface.
Table 1. The experimental design levels of the Box-Behnken design (BBD) method and the independent variables.
Table 3. ANOVA table for the "W" quadratic model.
Table 4. ANOVA table for the "∆M" quadratic model.
Table 5. ANOVA table for the "Th" quadratic model.
Table 6. ANOVA table for the "R" quadratic model.
Table 7. Results with the predicted error criteria and the regression model: training analyses.
Table 8. Parameters of six additional zinc-coated steel screws used for testing the proposed regression model.
Table 9. Predicted error criteria results using the regression model: testing analyses.
Table 10. The first criterion that was considered: coating process cost based on minimizing the power consumed (W), the temperature (T), and the coating mass (∆M).
Table 11. The second criterion that was considered: minimizing the coating deposition time (t).
Table 12. The third criterion that was considered: maximizing the corrosion resistance of the coating (R).
Table 13. The fourth criterion that was considered: maximizing the coating thickness (Th).
Table 14. The fifth criterion that was considered: maximizing the resistance (R) and minimizing the temperature (T), the concentration (C), and the time (t).
Table 15. The sixth criterion that was considered: maximizing the thickness (Th) while minimizing the temperature (T), the concentration (C), and the time (t).
Table 16. The seventh criterion that was considered: maximizing the thickness (Th) and the resistance (R), while minimizing the temperature (T) and the concentration (C).
Table 17.
Outputs or zinc coating parameters attained when the five design requirements were respected.
2019-04-08T08:02:44.738Z
2018-09-11T00:00:00.000
{ "year": 2018, "sha1": "a8fafe0b4eb28378262e40e3b461b51d7b31fcca", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/8/9/711/pdf?version=1536660475", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a8fafe0b4eb28378262e40e3b461b51d7b31fcca", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
55794289
pes2o/s2orc
v3-fos-license
Shift of Shapiro Step in High-Temperature Superconductor

The influence of the charge imbalance effect on the system of intrinsic Josephson junctions of high-temperature superconductors under external electromagnetic radiation is investigated. We demonstrate that the charge imbalance is responsible for a slope of the Shapiro step in the IV-characteristic. Nonperiodic boundary conditions shift the Shapiro step from the canonical position, which is determined by the frequency of the external radiation. We also demonstrate how the system parameters affect the shift of the Shapiro step.

Introduction

The phase dynamics of layered superconducting materials has attracted great interest because of its rich and interesting physics on the one hand and its application prospects on the other [1,2]. In particular, the nonequilibrium effects created by stationary current injection in high-Tc materials have been studied very intensively in recent years [1-7]. However, the charge imbalance in the systematic perturbation theory is considered only indirectly, insofar as it is induced by fluctuations of the scalar potential [1,2,5]. In Ref. [8], it is taken into account as an independent degree of freedom and, therefore, the results differ from those of earlier treatments. In addition, because charge is not screened in the superconducting layers, such a system forms intrinsic Josephson junctions (IJJ) [9,11]. Such a system cannot be in an equilibrium state at any value of the electrical current. The influence of charge coupling on Josephson plasma oscillations was stressed in Refs. [6,9]. In the last few years, two theoretical models have been widely used to describe IJJ: the capacitively coupled Josephson junctions (CCJJ) model and the charge imbalance (CIB) model. In the CCJJ model, a nonvanishing generalized scalar potential appears due to the breaking of charge neutrality, while in the CIB model it is related to the quasiparticle charge imbalance as well. In fact, the relaxation length of the charge imbalance in a layered system can be much larger than any other characteristic length. Therefore, both effects can exist in HTSC simultaneously, because the thickness of the superconducting layers is smaller than the Debye length and thus obviously less than the characteristic length of disequilibrium relaxation. In the current paper, we study the nonequilibrium effects created by current injection in a stack of IJJ under external electromagnetic radiation. A system of N + 1 superconducting layers (S-layers), presented in Fig. 1, is characterized by the order parameter Δ_l(t) = |Δ| exp(iθ_l(t)) with the time-dependent phase θ_l(t). The thickness of the S-layers is comparable with the Debye screening length r_D, which leads to the generalized Josephson relation [12]. The total current density J_l through each S-layer is given as a sum of displacement, superconducting, quasiparticle, diffusion, and nonequilibrium terms. Those equations, together with kinetic equations for the nonequilibrium potential Ψ_l(t), describe the physics of IJJs in HTSC.
In dimensionless form, the system of equations is written for the phase differences ϕ_l and voltages v_l, where the dot denotes a derivative with respect to τ = ω_p t, v_l(t) ≡ v_{l,l−1}(t) is the voltage between the layers l − 1 and l, ϕ_l(t) is the phase difference across the layers l − 1 and l, α = εε_0/(2e²N(0)d) is the coupling parameter, ε is the dielectric constant, ε_0 is the vacuum permittivity, d is the distance between the superconducting layers, and N(0) is the density of states. I = J/J_c is the dimensionless current density, J_c is the critical current density, ω_p = (2eJ_c/ħC)^{1/2} is the plasma frequency, and C is the capacitance. The other dimensionless parameters are the dissipation parameter β = ħω_p/(2eRI_c), where R is the junction resistance, the normalized quasiparticle relaxation time ζ_l = ω_p τ_qp, and the nonequilibrium parameter η_l. The term A sin ωτ introduces the effect of external radiation with amplitude A and frequency ω, which are normalized to J_c and ω_p, respectively. To reflect the experimental situation, we have added a noise term I_noise to the bias current, produced by a random number generator, with an amplitude of ∼10^{−8} normalized to the critical current density J_c. This system of equations is solved numerically using the fourth-order Runge-Kutta method. We assume that, due to the proximity effect, the thickness of the first and the last S-layer is larger than that of the middle ones. Therefore, the nonequilibrium parameters depend on the boundary-condition parameter γ: η_{0,N} = γη_l, where l = 1, 2, ..., N − 1. We consider the underdamped case with the McCumber parameter β_c = 25, i.e., β = 0.2.

Results

In Ref. [12] we have shown that, in the system of intrinsic Josephson junctions of high-temperature superconductors under external electromagnetic radiation, the charge imbalance is responsible for a slope of the Shapiro step in the IV-characteristic. The value of the slope increases with the nonequilibrium parameter. We demonstrate that the coupling between junctions leads to a distribution of the slope values along the stack. It was also shown that nonperiodic boundary conditions shift the Shapiro step from the canonical position. The simulated IV-characteristics of the JJ stack without charge imbalance (η = 0, dashed line) and at η = 0.2 (solid line) are presented in Fig. 2. The influence of the coupling parameter on the shift of the Shapiro step is shown in Fig. 3(a). Increasing α leads to an increase of the shift value. The steps on the IV-characteristics at α = 0, 0.2, 0.6, 1 are indicated by the large dashed rectangle. The IV-characteristics at large α also demonstrate additional steps that appear on an internal branch; those Shapiro steps are indicated by the small dashed rectangle. The shift of the Shapiro step depends on the value of γ and on the coupling parameter α. Fig. 3(b) shows the distribution of the step shift along the stack for γ = 0.5, 0.8, 1. One can see that the maximum shift occurs at the first and last Josephson junctions, whose superconducting layers are thicker than the others. The distribution of the shift can also be seen in the middle layers due to the coupling between the Josephson junctions. Thus, the Shapiro step demonstrates a shift of its position from the canonical value Nω, where N is the number of junctions in the stack and ω is the frequency of the external radiation. The value of this shift depends on the boundary conditions and on the coupling between the Josephson junctions.
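As a concrete illustration of the numerical scheme, the following is a minimal sketch of the fourth-order Runge-Kutta integration for a single resistively shunted junction under radiation; it deliberately omits the interlayer coupling, charge imbalance, and noise terms of the full system, and all parameter values are illustrative only.

import numpy as np

# Dimensionless single-junction model under radiation (illustrative):
#   dphi/dtau = v
#   dv/dtau   = I + A*sin(omega*tau) - sin(phi) - beta*v
beta, I, A, omega = 0.2, 0.8, 0.1, 2.0

def rhs(tau, y):
    phi, v = y
    return np.array([v, I + A * np.sin(omega * tau) - np.sin(phi) - beta * v])

def rk4_step(tau, y, h):
    k1 = rhs(tau, y)
    k2 = rhs(tau + h / 2, y + h / 2 * k1)
    k3 = rhs(tau + h / 2, y + h / 2 * k2)
    k4 = rhs(tau + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, n = 0.05, 100_000
y = np.array([0.0, 0.0])
v_trace = np.empty(n)
for i in range(n):
    y = rk4_step(i * h, y, h)
    v_trace[i] = y[1]

# Time-averaged voltage after discarding the transient: one point of the IV-curve
print("average voltage:", v_trace[n // 2:].mean())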
Due to the coupling, the effect of the boundary conditions is extended to the neighboring junctions.
2017-08-28T18:33:23.000Z
2017-08-28T00:00:00.000
{ "year": 2018, "sha1": "4c88c62d939f3bd3300eacb26d81c0ea6b15d14a", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2018/08/epjconf_mmcp2018_03015.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "e2bca36a338bea0f3e4aa2341366218c3c9cc927", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5607035
pes2o/s2orc
v3-fos-license
Determination of cytokeratins 1, 13 and 14 in oral lichen planus

Introduction: Cytokeratins (CK) are molecules of the cytoskeleton that contribute to cellular differentiation. We studied the expression of CK1, CK13 and CK14 in thirty-three patients with OLP. The biopsied lesions were located in the dorsal surface of the tongue, the palatal keratinized mucosa and the nonkeratinized buccal mucosa. Objectives: This study aimed to determine the expression of CK1, CK13 and CK14 in oral lichen planus (OLP) and its relations with: clinical patterns, prognosis, drug and tobacco intake, and histopathological features. Study Design: Immunohistochemical analysis; a retrospective, descriptive, observational, non-randomized study. Results: No significant difference was observed in the expression of CK1 in patients with or without drug treatment. No association was found with the amount of drug intake or smoking, nor with the histopathological features examined. Samples immunostained with CK13 were all positive in the suprabasal layers, and 13 of them in the basal layer. In these last ones, statistical analysis showed significance in the grade of vacuolization of the basal layer (p=0.023) and in the degree of exocytosis (p=0.0025), indicating a higher degree of involvement for both parameters. Thirty-two tissue sections were immunostained with CK14. CK14 was expressed in the basal layer in 97% of samples and in the suprabasal layer in 94% of samples. Conclusions: The three CKs were altered in OLP. CK1 does not have a direct connection with the presence of orthokeratosis. The finding of CK13 in the basal layer is related to the aggression of the lymphocytic infiltrate on the epithelium, reflected in the basal stratum vacuolization and the increase in lymphocytic exocytosis. The presence of CK14 in the suprabasal strata is not a parameter with which to predict malignancy. The CKs in OLP do not follow the normal pattern of keratinized or non-keratinized mucosa.

Key words: Basal cell vacuolization, CK1, CK13, CK14, cytokeratin, lymphocytic exocytosis, oral lichen planus.

Introduction

Oral lichen planus (OLP) is a mucocutaneous inflammatory disease which affects 0.5 to 2.2% of the population. It usually presents with a chronic course that includes frequent exacerbations (1). OLP is an example of autoimmune damage. The diagnosis must be based on the recognition of clinical alterations, as well as on carrying out an interview with the objective of observing a possible cause-effect relationship to differentiate OLP from oral lichenoid reactions (OLR) (2). In this report we only included patients with OLP. It has been reported that the clinical features alone may be sufficiently diagnostic, particularly when presenting in the classic reticular form. The evidence regarding the need and value of biopsy for histological confirmation of the diagnosis is not definitive; studies have shown variability in both interobserver and intraobserver reliability in the clinicopathological assessment of OLP (2). Cytokeratins (CK) are a group of intermediate filament proteins in the epithelium comprising a heterodimer of an acidic and a basic keratin (keratin pair) (3). In 1982, Moll et al. found that there were 19 subclasses of CK and classified them according to their molecular weights. CKs are site-specific and may change when the growth rate rises or when the degree of differentiation is altered pathologically (4). CKs are the main differentiation markers of stratified epithelium.
The pair CK5/CK14 is recognized as a specific marker of the basal layer of normal stratified epithelium (5) and is the main component of hemidesmosomes (6). As they move away from the basal layer, keratinocytes cease to express these CKs and begin to express tissue-specific CKs. The major keratins of the interfollicular epidermis are CK10 and CK1, whereas cells of stratified non-keratinized mucosa express CK4 and CK13 (7) (Fig. 1). In the oral mucosa, CK4 and CK13 are the predominant suprabasal CKs; however, small subpopulations of suprabasal cells also express CK1 and CK10 (8). The hard palate and gingival mucosa express the same patterns as the epidermis. The dorsal surface of the tongue and the lateral border contain intermediate patterns between keratinized and non-keratinized epithelium (9). In contrast, the alveolar mucosa contains a large proportion of CK4 and CK13 and less CK5, CK6, CK14 and CK17 (10). CK1 synthesis starts earlier than that of CK10, which initiates its transcription when there are significant levels of CK1 (11,12); the presence of CK1/10 in the basal and suprabasal layers of the epithelium in OLP has been reported (13). In this article, we have explored the distribution of CK1, CK13 and CK14 in the oral mucosa lesions of patients with OLP and correlated their expression with clinical and histopathological patient data.

Material and Methods

Subjects: Thirty female patients and three male patients with OLP, ranging in age from 40 to 85 years (average age, 61.6 years), were included. The biopsied lesions were located in three cases in the dorsal surface of the tongue, in three others in the palatal keratinized mucosa, and in the other 27 patients in the nonkeratinized buccal mucosa (Table 1). The chief complaint was pain (93.8%). In fact, in the qualitative assessment, through an analogue pain scale from 0-10, patients had an average score of 6.45 (SD ± 2.4). In 97.9% of the patients the lesions were multiple, predominantly erosive lesions (n = 37), plaque lesions (n = 20) and erythema (n = 12). The main location was the buccal mucosa (over 70%), followed by the lips, tongue (dorsal and lateral region) and gums. There were lesions in the palate in 12.5% of the participants. The average evolution was 252.7 days, with a range of 7-3285 days. Inclusion criteria: Biopsies obtained from the oral mucosa of patients with clinical manifestations of OLP were included in this study, according to the WHO clinical diagnostic criteria: 1) presence of bilateral, mostly symmetrical lesions; 2) presence of a network of slightly raised lines (reticular pattern); 3) erosive, atrophic, bullous, and plaque-type lesions in the presence of reticular lesions elsewhere in the oral mucosa. The exclusion criteria were: 1) patients younger than 16 years old; 2) patients with OLR of graft-versus-host disease; 3) OLR seen in direct topographic relationship to amalgam; 4) OLR in temporal association with the taking of medications, based on the interview and observation of a possible cause-effect relationship. Four patients with oral mucocele were enrolled into the study as controls. The present study was approved by the Ethics Committee of the Lagomaggiore Hospital of Mendoza, Argentina. Written informed consent was obtained from each subject. Eighty-seven percent of patients had other pathologies and 71% of them were on drug treatment.
The principal drugs administered were: antihypertensives (angiotensin-converting enzyme (ACE) inhibitors, β-blockers and calcium channel blockers), non-steroidal anti-inflammatory drugs (NSAIDs), antidepressants (benzodiazepines), thyroid hormones, calcium and alendronate. Twenty-five percent of the patients were tobacco smokers. Biopsies: Samples were fixed in 10% formalin and embedded in paraffin. Serial 5 µm-thick sections were mounted onto 3-aminopropyltriethoxysilane (Sigma, St. Louis, MO, USA)-coated slides. The histologic specimens were stained with hematoxylin and eosin and the diagnosis was confirmed by optical microscopic examination. The presence of hyperkeratosis, parakeratosis, vacuolization of the basal layer, band-like infiltrate, keratinocyte necrosis, incontinentia pigmenti, dysplasia, lymphocyte exocytosis and sawtooth rete ridges was examined (14). These parameters were graded: 0 = absent, 1 = low grade, 2 = moderate and 3 = severe. Immunohistochemistry: The immunohistochemical procedure was performed as reported previously (15). Mouse monoclonal antibodies against CK1 (NCL-CK1, clone 34βB4), CK13 (NCL-CK13, clone KS-1A3) and CK14 (NCL-L-LL002, clone LL002) (Novocastra, Newcastle upon Tyne, United Kingdom) were used. The antigen retrieval protocol was carried out in 0.01 M citrate buffer, pH 6.0, at 100 °C for 25 min. Tissue sections were incubated with the primary antibodies overnight at 4 °C in a humidity chamber at the following dilutions: CK1, 1:100; CK13, 1:200; CK14, 1:100. As secondary antibody, an anti-mouse IgG biotin conjugate was used, together with the Avidin and Biotinylated horseradish peroxidase Complex (Vectastain Universal Elite ABC, VECTOR, Burlingame, CA, USA). Diaminobenzidine/hydrogen peroxide was used as a chromogen substrate. Slides were counterstained with hematoxylin. The immunostaining was classified as positive or negative in the basal and suprabasal layers. Statistical Analysis: Fisher's exact test was used to determine whether the expression of the CKs studied correlated significantly with the intake of different drugs or with smoking. The Mann-Whitney test and the t test with Welch correction were applied to compare means of the grades of histological changes detected in relation to the expression of the CKs. Statistical analysis was performed using the Prism computer program (GraphPad Software, San Diego, CA); p < 0.05 was considered significant.

Results

In the control group, the usual pattern of expression of the CKs present in non-keratinized normal mucosa was observed (Fig. 2). Thirty-one biopsies were immunostained with CK1; 5 of them were positive in the basal layer (16%) and 20 were positive in the suprabasal layers (64%) (Fig. 2). In six of these patients, the biopsies were taken from keratinized mucosa (KM): three from hard palate mucosa and three from the dorsal surface of the tongue (DST) (Table 1). In five of them, CK1 was positive in the suprabasal layer, and in two of those, CK1 was also positive in the basal layer. The expression of CK1 and CK14 was negative in one patient with a biopsy from KM that was positive for CK13, suggesting a change in the pattern of keratinization. In our patients with OLP, the expression of CK1 in NKM was positive in 15 samples and negative in 10 samples. Tobacco smoking was present in 4 of these patients. CK1 expression was positive in the NKM of all the smokers in the suprabasal layers, and also in the basal layer in two of those smokers.
In the group of NKM from non-smoker patients (n=21), CK1 was positive in the suprabasal layer in 10 patients. No significant difference was observed in the expression of CK1 in patients with or without drug treatment, and there was no association with the amount of drug intake or with smoking. When the degree of tissue involvement was compared with the expression of CK1, no correlation was found. Thirty-one samples were immunostained with CK13, all of which were positive in the suprabasal layers; 13 were also positive in the basal layer (42%) (Fig. 2). In KM, CK13 was expressed in the basal layer in 4 out of 6 samples. In NKM, CK13 in the basal layer was negative in 16 samples and positive in 9. CK13 was positive in the suprabasal layers in all 24 biopsies of NKM. When CK13 was expressed in the basal cells, statistical analysis showed a significantly higher degree of vacuolization of this layer (p=0.023) and of exocytosis (p=0.0025) (Table 2). CK14 was expressed in the basal layer of 31 samples (97%, 31/32) and in the suprabasal layer of 30 samples (94%, 30/32) (Fig. 2). The expression of CK14 in the suprabasal layers was more intense than in the basal layer in five samples.

Discussion

We observed CK1 expression in 64% of our specimens in the suprabasal layer and in 16% in the basal layer. Our findings differ from those of other authors, since the presence of CK1 did not have a direct connection with the presence of orthokeratosis. In 1999, Chaiyarit et al. reported that the expression of CK1/10 in the epithelial basal and suprabasal layers was significantly higher in OLP than in fibromas, and that the presence of HSP60 in the basal layer was significantly higher in the samples of OLP. These authors suggested that the many cytokines secreted by the T lymphocytes present during the infiltration might influence the expression of different CK genes in the adjacent keratinocytes (13). Later, other authors presented similar unifying etiopathogenic models of OLP. In a first stage, basal keratinocytes are "activated" by different antigens, which promotes the expression of heat-shock/stress-response proteins in a variant fashion. These proteins may act as a self-antigen induced on basaloid keratinocytes via innate immune response activation. Cytotoxic and pro-apoptotic mediators/stimuli expressed by fully activated cytotoxic CD8 T cells could then mediate the basal cell layer apoptosis and necrosis that is typical of lichen planus. It is also possible that the first antigen and the second antigen could represent the same molecule (18); an example might be a chemical hapten (19). Some examples of exogenous stimuli capable of starting an autoperpetuating cascade are certain drugs, dental materials and infections. The generation of cytokines by these cells can positively regulate the presence of HSP60 in the adjacent basal keratinocytes. The next step depends on the individual's predisposition to react to HSP60. If the individual is not predisposed, the first immune reaction will result in a non-specific mucositis. However, if the individual is predisposed to react to HSP60, owing to the presence of HLA antigens such as HLA-Bw57 or HLA-DR2, then a second immune reaction will continue with the development of T lymphocytes which target the basal keratinocytes, resulting in their destruction (13). Our findings are greatly supported by this theory, because normally CK1 and CK13 are not expressed in the basal layers of the epithelium, nor is CK1 in NKM.
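The group comparisons reported above can be reproduced with standard routines. The sketch below assumes hypothetical grade data (0-3) by basal CK13 status and a hypothetical 2x2 contingency table; it illustrates the tests named in the Statistical Analysis section, not the actual data of this study.

import numpy as np
from scipy import stats

# Hypothetical vacuolization grades (0-3) by basal CK13 status
ck13_pos = np.array([2, 3, 2, 3, 2, 1, 3, 2, 2, 3, 2, 3, 2])
ck13_neg = np.array([1, 0, 1, 2, 1, 1, 0, 2, 1, 1, 0, 1, 1, 2, 1, 0, 1, 1])

# Mann-Whitney U test for ordinal grades
u, p = stats.mannwhitneyu(ck13_pos, ck13_neg, alternative="two-sided")
print(f"Mann-Whitney p = {p:.4f}")

# t test with Welch correction (unequal variances)
t, p_t = stats.ttest_ind(ck13_pos, ck13_neg, equal_var=False)
print(f"Welch t test p = {p_t:.4f}")

# Fisher's exact test on a 2x2 table, e.g. CK13 status vs. smoking (hypothetical counts)
table = [[4, 9], [5, 13]]
odds, p_f = stats.fisher_exact(table)
print(f"Fisher exact p = {p_f:.4f}")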
In our study, CK13 was observed in the basal layer in 42% of the samples. We found a direct association between the expression of CK13 in the basal layer and both the degree of basal layer vacuolization and the degree of lymphocyte exocytosis. Bloor et al. detected CK4 and CK13 homogeneously spread in the suprabasal compartment of the parakeratotic epithelium in OLP (8). The finding of CK13 in the basal layer suggests a displacement in the expression of a CK that should be confined to the upper strata, and is related to the aggression of the infiltrate on the epithelium, reflected in the basal stratum vacuolization and the increased lymphocytic exocytosis. We observed CK14 expression in 97% of the samples in the basal stratum and in 94% in the suprabasal strata. The presence of CK14 in the suprabasal strata may be an indication of a flaw in cytological differentiation, showing a sort of immaturity at those levels. However, according to our study, this parameter should not be used as a predictor, since it is expressed in nearly all OLP, and it is known that not all OLP undergo malignant transformation. It has been reported that patients with OLR have an increased risk of oral cancer, which is particularly important in patients who have atrophic, erosive or ulcerative lesions (20). As OLR also have clinical criteria, we excluded them from our sample. These include lichenoid contact lesions, lichenoid drug reactions and lichenoid lesions of graft-versus-host disease. In spite of this, in some cases there is a spectrum of OLR that may confuse the differential diagnosis (2). OLP is a chronic disease, and in most of our patients it evolved over years. Many of them had multiple amalgam restorations, but at the time of inclusion in this work they fulfilled the diagnostic criteria for OLP. In addition, drug intake was present in most patients, but it had no chronological correlation with the disease. Our results are concordant with those of Jaques et al., who found CK14 in the basal and suprabasal layers in the 23 samples analyzed (6). On the other hand, we do not agree with Brunotto et al., who claimed that positive immunostaining of CK14 in the superficial epithelial strata of OLP should be a sufficient sign of malignancy to initiate further examinations (5). Our disagreement is based on the fact that malignant transformation in around 5% of OLP has been reported in the literature, whereas the expression of CK14 is found in the suprabasal layers in 94% of samples. Besides, only one of our patients developed an oral squamous carcinoma in five years of follow-up. This patient presented symmetrical lesions of erosive OLP and was also a tobacco smoker. His lesions met the clinical WHO criteria for OLP, and he also had typical lichen involvement of the nails (nail pterygium). Therefore, patients diagnosed with OLP must be followed up, as well as those with OLR, especially those with erosive, atrophic, bullous and keratotic forms, for early diagnosis of malignant transformation should it occur (21).

Conclusions

CK1 was present in 64% of the specimens in the suprabasal layer and in 16% in the basal layer in patients with OLP. CK13 was observed in 42% of the samples in the basal layer, which has not been described previously in other studies. The presence of CK13 in the basal layer was associated with a higher degree of vacuolization of the basal layer and with the presence of marked exocytosis. CK14 was positive in 97% of the samples in the basal layer and in 94% in the suprabasal strata. The three CKs were altered in OLP.
In conclusion, the CKs in OLP do not follow the normal patterns of keratinization of keratinized or non-keratinized mucosal epithelia.
2017-04-27T02:46:31.749Z
2014-03-08T00:00:00.000
{ "year": 2014, "sha1": "9f70da4d752ca3eae621fc7490943d28b372a4fc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4317/medoral.19289", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f70da4d752ca3eae621fc7490943d28b372a4fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238408747
pes2o/s2orc
v3-fos-license
Involvement of integrin-activating peptides derived from tenascin-C in colon cancer progression

Tenascin-C (TNC) is an adhesion-modulatory protein present in the extracellular matrix that is highly expressed in several malignancies, including colon cancer. Although TNC is considered a negative prognostic factor for cancer patients, the substantial role of the TNC molecule in colorectal carcinogenesis and its malignant progression is poorly understood. We previously found that TNC has a cryptic functional site and that a TNC peptide containing this site, termed TNIIIA2, can potently and persistently activate beta1-integrins. In contrast, the peptide FNIII14, which contains a cryptic bioactive site within the fibronectin molecule, can inactivate beta1-integrins. This review presents the role of TNC in the development of colitis-associated colorectal cancer and in the malignant progression of colon cancer, particularly the major involvement of its cryptic functional site TNIIIA2. We propose new possible prophylactic and therapeutic strategies based on inhibition of the TNIIIA2-induced beta1-integrin activation by the peptide FNIII14.

INTRODUCTION

Extracellular matrix (ECM) proteins such as fibronectin (FN), collagen, and laminin provide a scaffold for cell adhesion and subsequently influence various physiological cellular processes, including cell differentiation, survival/proliferation, and migration. As one of the major components of the tumor microenvironment, the ECM affects the behavior of cells in the cancer microenvironment, such as cancer-associated fibroblasts (CAFs) and immune cells, resulting in cancer development [1]. It therefore plays major roles in carcinogenesis and the malignant progression of cancer. Integrins are a family of heterodimeric transmembrane glycoproteins composed of alpha- and beta-subunits that directly interact with components of the ECM. These integrins primarily mediate cell adhesion, migration, survival, proliferation, and differentiation. In contrast to membrane receptors for humoral factors such as cytokines and chemokines, integrins are unique in their ability to alter their binding affinity for ECM ligands. Integrins exist mainly in two different structural states: an inactive conformation lacking ligand-binding affinity and an active one with high affinity [2]. On the other hand, integrin signaling contributes to the malignant progression of many cancers. For example, integrin alpha5beta1, a major FN receptor, is highly expressed in glioma/glioblastoma, with its expression levels reported to be associated with poor survival in glioma/glioblastoma patients [3]. Alpha5-integrin promotes cell proliferation and the dissemination of glioblastoma cells [4], modulates angiogenesis [5], and contributes to temozolomide chemoresistance [6]. Thus, the integrin alpha5beta1-mediated adhesive interaction of glioma cells may be associated with the acquisition of a highly aggressive phenotype in glioma/glioblastoma. Therefore, inhibition of integrin functions might be a promising therapeutic approach for cancer. Tenascin-C (TNC) is a hexameric, multimodular ECM glycoprotein. It is poorly expressed in normal adult tissues but highly expressed in both inflammatory lesions and the tumor microenvironment [3,7-10]. TNC is an endogenous activator of toll-like receptor 4, which triggers and amplifies inflammatory responses [11].
In addition, TNC binds to integrin alphavbeta3 and alpha9beta1 to drive inflammatory responses by inducing the synthesis of proinflammatory cytokines, including interleukin (IL)-6, IL-1beta, and tumor necrosis factor-alpha [12]. TNC is highly expressed and is thought to act as a major driving regulator of acute and chronic inflammatory diseases, including cardiac disease. One study [36] revealed that the expression levels of TNC are higher in adenomatous colon polyps and colon carcinoma in situ than in non-neoplastic colonic mucosa and are also correlated with the TNM stages of colon cancer, further indicating that TNC might contribute to carcinogenesis and progression [36].

TNC contains several characteristic domains, such as a central domain, heptad repeats, epidermal growth factor (EGF)-like repeats, FN type III repeats (FN-III repeats), and a fibrinogen globe (Figure 1), which can interact with ECM proteins, soluble factors, and cell receptors and thereby mediate the various functions of TNC. In addition, human TNC contains nine alternative splicing sites in the FN-III repeats, and 511 possible splice variants can theoretically be generated through alternative splicing [37]. This alternative splicing could control the versatile biological functions of TNC by modulating its interaction with specific binding partners, as well as by exposing posttranslational sites and proteolytic cleavage sites [37]. However, the substantial role of the TNC molecule in colorectal carcinogenesis and its malignant progression has remained elusive. This review presents the role of TNC in the malignant progression of colon cancer and the development of colitis-associated colorectal cancer (CAC), with a particular focus on the major involvement of TNIIIA2, the cryptic functional site of TNC. We propose new possibilities for prophylactic and therapeutic strategies based on peptide FNIII14-mediated inhibition of the TNIIIA2-induced beta1-integrin activation.

PATHOLOGICAL SIGNIFICANCE OF ELEVATED TNC EXPRESSION IN MALIGNANT TUMORS

Most ECM proteins harbor functionally cryptic sites that are buried within their molecular structures. These cryptic sites, called matricryptic sites, are revealed via structural/conformational changes triggered by interactions with adjacent cells or other ECM components and by remodeling/processing by ECM-degrading proteinases, including matrix metalloproteinases (MMPs) and cathepsins (Figure 1).

The mode of beta1-integrin activation induced by TNIIIA2 is entirely distinct from that induced by "inside-out" signaling, which is the commonly considered mode of integrin activation. Saito et al [47] have found that syndecan-4, one of the transmembrane heparan sulfate proteoglycans, serves as a membrane receptor for TNIIIA2 and that engagement with TNIIIA2 induces a lateral association with beta1-integrins, resulting in stabilization of the active conformation of beta1-integrin [47]. Based on this unique mechanism of integrin activation, the TNIIIA2-induced integrin activation is more potent and persistent than that produced by other known integrin activators, such as the various cytokines and chemokines that stimulate the "inside-out" signaling pathway [48]. Because TNC variants containing the alternatively spliced domain type III-A2 are highly expressed in malignant tumors [49], the activation of beta1-integrin induced by TNIIIA2 may be related to some forms of cancer pathogenesis.
We previously found that TNIIIA2 contributes to the ability of glioblastoma to acquire aggressive properties such as excessive survival/proliferation, disseminative migration, and anoikis resistance through activation of beta1-integrin [50-52]. More recently, we reported that TNIIIA2 establishes inflammatory environments via the NOD-like receptor family pyrin domain-containing 3/caspase-1/IL-1beta pathway [53]. These findings suggest that the pathological significance of high TNC expression in inflammation and cancer may lie in activating beta1-integrins based on TNIIIA2 function.

Several drugs are currently used for the treatment of colorectal cancer. However, although these drugs are effective, they have various problems, including certain adverse events, and eventually become ineffective. Therefore, further investigation is still necessary to develop novel strategies for colorectal cancer, and it is important to elucidate the molecular mechanisms that enable colorectal cancer to acquire malignant properties.

TNC is highly expressed in colon cancer, and high expression levels of TNC in tissue specimens are correlated with distant metastasis, tumor recurrence, advanced TNM stage, and poor prognosis [10,36]. Moreover, colon cancer cells highly expressing TNC show high metastatic potential and are associated with lymph node metastasis [36]. In addition, serum TNC levels, particularly those of large spliced variants, are higher in patients with colon cancer compared with controls [64]. Such levels are also correlated with tumor depth, lymph node metastasis, and disease progression [64]. Therefore, the levels of TNC in tissue and serum may be a diagnostic or prognostic biomarker in colon cancer. Furthermore, the Wnt/beta-catenin signaling pathway plays a central role in carcinogenesis, and its mutation and activation are found in almost all patients with colon cancers [65]. Because TNC is a Wnt/beta-catenin target gene in human colon tumors [66], the deregulation of Wnt/beta-catenin signaling might lead to the overexpression of TNC in colon cancer. Experimental observations indicated that TNC secreted by myofibroblasts might act as a proinvasive factor for colon cancer cells [67]. Furthermore, TNC promotes proliferation, migration, and invasion and also upregulates cancer stem cell markers via the Hedgehog signaling pathway [31]. However, the biochemical functions of TNC in the malignant progression of colon cancer have not yet been established.

INVOLVEMENT OF TNC IN COLON CANCER

MMP-2 is highly expressed in colon cancer tissues, and its expression levels increase with an increase in the tumor stage [68]. Furthermore, the expression levels of MMP-2 are correlated with lymph vessel invasion and disease progression in colon cancer [69]. MMP-7 is another Wnt/beta-catenin target gene [70], and both MMP-2 and MMP-7 can degrade TNC [71]. Furthermore, TNC variants containing the alternatively spliced domain types III-A1, -2, and -4 are highly expressed in colon cancer [49]. It is presumed that the functional cryptic site TNIIIA2 of TNC may be released into the tumor microenvironment of colon cancer and contribute to its pathogenesis. Supporting this hypothesis, peptide TNIIIA2 has been shown to act directly on colon cancer cells to enhance their in vitro invasive potential by inducing MMP secretion [72]; peptide TNIIIA2 or TNC promotes colon cancer cell invasion by upregulating MMPs [72]. The cell invasion induced by peptide TNIIIA2 or TNC is completely suppressed by an anti-TNIIIA2 antibody or an MMP-2 inhibitor [72].
Moreover, an in vivo observation involving a spontaneous metastasis mouse model mimicking hematogenous metastasis showed that peptide TNIIIA2 boosted the metastasis of colon cancer cells to the lung [72]. Taken together, the activation of beta1-integrin by peptide TNIIIA2 (one of the biochemical functions of TNC) may help to promote colon cancer cell metastasis via induction of MMPs (Figure 2).

Alterations in the density, distribution, and composition of the ECM are common in malignancies. This process creates the tumor microenvironment that helps to confer cancer cells with malignant properties such as tumorigenesis and metastasis [1]. These alterations increase stiffness in the tumor microenvironment, which promotes protumorigenic mechanosignaling. The increased ECM stiffness of colon cancer has been associated with cancer progression [73]. Through analysis of clinical specimens, a gradient of increasing ECM stiffness was observed from healthy to perilesional and colon cancer areas, which might predispose to invasion [74]. Furthermore, the expression levels of lysyl oxidase (LOX), which catalyzes the covalent cross-linking of collagens and elastin, are closely correlated with the progression of colon cancer [75]. Compared with control cells or cells expressing a catalytically inactive LOX, colon cancer cells expressing LOX exhibit increased mechanosignaling, ECM stiffness, metastasis, and tumor burden in in vivo models via activation of beta1-integrin and the focal adhesion kinase-SRC signaling pathway [76], indicating that beta1-integrin activation might be associated with malignant progression via increased ECM stiffness in colon cancer.

In a recent insightful study on the role of TNC in ECM stiffness in the tumor microenvironment, Barnes et al [77] demonstrated that the glycocalyx/ECM-integrin loop induces glioblastoma aggression in a tissue tension-dependent manner, with human recurrent glioblastomas showing an increase in TNC-enriched stiffened ECM and enhanced integrin mechanosignaling [77]. It has also been pointed out that glioblastoma cells expressing a V737N beta1-integrin autoclustering mutant exhibit increased mechanosignaling and ECM stiffness and facilitate tumor growth [77]. It is unlikely that the antiadhesive effect of TNC, which has been considered a major biochemical function of this protein, is responsible for the ECM stiffening and the consequent enhanced integrin signaling. However, it remains unclear whether proadhesive activity (another biochemical function of TNC) is directly associated with ECM stiffness in the tumor microenvironment of colon cancer. Further investigations are required to determine whether activation of beta1-integrin by peptide TNIIIA2 could actually increase ECM stiffness in colon cancer.

Beta1-integrin is also highly expressed in colon cancer compared with normal mucosa. High expression levels of beta1-integrin have been associated with poor prognosis, and increased expression of beta1-integrin is independently correlated with decreased overall survival and disease-free survival in colon cancer patients [78]. In addition, alpha5-integrin, which is coupled with beta1-integrin, also shows upregulated expression in colon cancer and is expressed mainly in the tumor stroma of clinical samples [79]. Moreover, alpha5beta1-integrin expression is considered a significant independent prognostic factor.
Experimental evidence indicates that overexpression of alpha5-integrin accelerates proliferation and suppresses apoptosis in colon cancer cells, with colon cancer cells overexpressing alpha5-integrin found to promote tumor growth in a murine xenograft tumor model.

INVOLVEMENT OF TNC IN CAC

A link between chronic inflammation and the pathogenesis of many malignancies has been well documented. One study [96] determined that the serum levels of TNC are correlated with clinical and histological parameters of disease activity in patients with inflammatory bowel disease (IBD) [96]. Moreover, high levels of TNC mRNA in the mucosa of ulcerative colitis have been associated with a poor response to infliximab therapy, an effective treatment for moderate-to-severe IBD, indicating that TNC may contribute to therapeutic resistance against IBD. Therapy resistance may participate in the malignant progression of IBD due to a lack of inflammatory control, resulting in an increased risk of CAC onset. Indeed, TNC derived from intestinal myofibroblasts promotes the onset of CAC in an azoxymethane (AOM)/dextran sulfate sodium (DSS) model via angiogenesis [102]. Thus, TNC might contribute to the development and/or malignant progression of CAC. Identification of the biological functions of TNC responsible for the development of CAC would enable the design of agents with prophylactic and therapeutic potential for these diseases. However, the biochemical functions of TNC in CAC onset have not yet been established.

ECM remodeling is often augmented in these pathological lesions, and proteolytic cleavage of ECM proteins is performed by several inflammatory proteinases, including MMPs and cathepsins. Indeed, increased expression levels of several MMPs have been observed in IBD and are associated with disease activity, indicating that degradation of the ECM, including TNC, might occur at high levels in IBD and during CAC onset [103]. Therefore, it is conceivable that the functional cryptic site TNIIIA2 might be exposed from the high levels of TNC molecules in the lesion and act as a specific pathogenic factor in the development of CAC. Supporting this assumption, our recent work demonstrated the presence of TNC and peptide TNIIIA2 in the stromal area of dysplastic lesions in AOM/DSS mice [104]. Assuming that peptide TNIIIA2 acts mainly on preneoplastic epithelial cells and fibroblasts, which are abundant in the stromal area of dysplastic lesions, our in vitro experiments focused on the effects of beta1-integrin activation on both preneoplastic epithelial cells and fibroblasts. Interestingly, although beta1-integrin activation by peptide TNIIIA2 promoted cell adhesion, it had no direct effect on the growth of preneoplastic epithelial cells [104]. Similarly, peptide TNIIIA2 had no direct effect on the growth of fibroblasts, but fibroblasts stimulated by peptide TNIIIA2 released humoral factors, or possibly other factors, that drove the malignant transformation of premalignant epithelial cells in a paracrine manner, as judged by anchorage-independent cell growth and focus formation [104]. These factors secreted from peptide TNIIIA2-activated fibroblasts are also able to promote the survival/proliferation of colon cancer cells [104]. Furthermore, peptide FNIII14, a peptidic factor that induces a conformational change in beta1-integrin from the active to the inactive state [105], suppressed not only the TNIIIA2-induced dysregulated survival/proliferation of preneoplastic epithelial cells in vitro, but also polyp development in an AOM/DSS mouse model [104].
These results suggest that beta1-integrin activation by peptide TNIIIA2 in fibroblasts may be an important target for the prevention of CAC (Figure 3).

Several studies have demonstrated that cells in the tumor microenvironment, such as CAFs and immune cells, influence tumor progression. Among them, CAFs are key determinants of cancer development and progression [106-108]. Sasaki et al [109] demonstrated that CAC incidence is abrogated in CC chemokine ligand 3- or CC chemokine receptor 5-knockout mice treated with AOM/DSS and coincides with a lower accumulation of fibroblasts in dysplastic lesions compared with wild-type mice [109]. These fibroblasts express heparin-binding EGF-like growth factor to stimulate the proliferation of tumor cells in CAC in mice [109]. In addition, fibroblast-derived epiregulin promotes the proliferation of intestinal epithelial cells through activation of the ERK signaling pathway, augmenting CAC growth [110]. These studies indicate that CAFs might be responsible for CAC development and progression. Furthermore, there is increasing evidence that TNC is upregulated in CAFs and that high TNC expression as a CAF marker in the tumor stroma is correlated with worse prognosis in several malignancies, such as breast ductal carcinoma [7], esophageal squamous cell carcinoma [9], colorectal cancer [10], and prostate cancer [111]. Taken together with our results, the evidence indicates that fibroblasts produce TNC in the tumor microenvironment and that this TNC might activate CAFs to promote tumor onset and progression.

Risk factors for CAC development include pancolitis, a younger age of IBD onset, a long disease duration, chronic cholestatic liver disease, family history [112], and stricture formation [113]. Intestinal fibrosis is a common complication in IBD, particularly Crohn's disease, and the resulting clinically relevant strictures have been observed in about one-third of patients [114]. Intestinal fibrosis is likely to involve increased ECM stiffness, and this stiffness could perpetuate fibrogenesis [114], leading to the development of fibrotic strictures. More recently, accumulating evidence has linked increased ECM stiffness to several malignancies, with recent studies showing that cancer progression and aggression are correlated with the stiffness of a TNC-enriched ECM [115] (see the previous section). In IBD, increased ECM stiffness has been observed in strictures, and this increased stiffness enhances adhesive properties, such as the formation of focal adhesions and actin stress fibers, of colonic fibroblasts [116]. Moreover, increased expression levels of TNC have been reported in lesions of ulceration in ulcerative colitis [98]. Erdem et al [117] reported the involvement of increased expression levels of TNC in the development of ulcerative colitis-related strictures. Given that peptide TNIIIA2 can induce potent and persistent activation of beta1-integrin as well as its clustering [47,48], peptide TNIIIA2 in stromal lesions might contribute to the development of colitis-related strictures through increased ECM stiffness, leading to an increased risk of CAC onset. Although further research is required to determine whether beta1-integrin activation by peptide TNIIIA2 actually increases ECM stiffness, TNIIIA2-targeting agents such as an anti-TNIIIA2 antibody might be a promising strategy for the prophylaxis or treatment of CAC development and malignant progression.
Several studies have suggested that integrin inactivation could be a promising strategy for controlling CAC development and progression. ATN-161, a peptidic antagonist of integrin alpha5beta1 and alphavbeta3, suppressed disease activity by blocking angiogenesis in IL-10-deficient mice that develop spontaneous Crohn's disease-like colitis [118], as well as in a CD4+CD45RBhigh T-cell transfer model that induces chronic pancolitis [119]. Furthermore, ATN-161 also inhibits CAC development via inhibition of integrin alphavbeta3-mediated angiogenesis in a chemically induced AOM/DSS mouse model of intestinal and colon carcinogenesis [102], although no recent information on the development status of ATN-161 is available. More recently, Terasaki et al [120] showed that fucoxanthin induces anoikis in colonic adenocarcinoma through attenuation of beta1-integrin signaling, which blocks CAC development in AOM/DSS mice [120]. Taken together, beta1-integrin activation might become a promising target for preventing and treating CAC, and inactivation of beta1-integrin by peptide FNIII14, which can neutralize the detrimental effects of peptide TNIIIA2 on beta1-integrin activation, might be a novel and promising strategy for the management of CAC development and malignant progression.

BETA1-INTEGRINS AS POTENTIAL THERAPEUTIC TARGETS IN COLON CANCER

Several antagonists of integrin alpha5beta1 and alphavbeta3 were well tolerated in clinical testing [121,122] but failed to show therapeutic benefits in patients with malignancies. Although inhibition of integrin alpha5beta1 and alphavbeta3 might be a safe therapeutic strategy, alternative approaches should be considered, including the application of integrin inhibitors as anti-cancer drugs (reviewed in Ref. [123]). Regarding other therapeutic modalities, OS2966, a humanized monoclonal antibody targeting human beta1-integrins, is undergoing testing in a phase I clinical trial for the treatment of recurrent/progressive glioma [124]. In addition, one possible strategy may be to develop drugs with modes of inhibition other than competitive inhibition of integrin. Unlike integrin antagonists, peptide FNIII14, which has the ability to induce a conformational change in beta1-integrin from the active to the inactive state [105], has shown therapeutic efficacy against several malignancies in animal models, including CAC, glioblastoma, neuroblastoma, and acute myelogenous leukemia [105]. Although further research is needed regarding its effect on the malignant progression of colon cancer, peptide FNIII14 may possess promising therapeutic properties.

CONCLUSION

Although TNC is considered a negative prognostic factor in several malignancies, the substantial role of the TNC molecule in the development of colorectal cancer and its malignant progression has remained elusive. We suggest that one of the pathological roles of TNC, which is highly expressed in colon cancer, may lie in activating beta1-integrins through TNIIIA2 function. This hypothesis and the previous findings open the door to prophylactic and therapeutic strategies for colon cancer that involve inhibition of TNIIIA2-induced beta1-integrin activation by peptide FNIII14.
Mind Causality: A Computational Neuroscience Approach

A neuroscience-based approach has recently been proposed for the relation between the mind and the brain. The proposal is that events at the sub-neuronal, neuronal, and neuronal network levels take place simultaneously to perform a computation that can be described at a high level as a mental state, with content about the world. It is argued that as the processes at the different levels of explanation take place at the same time, they are linked by a non-causal supervenient relationship: causality can best be described in brains as operating within but not between levels. This mind-brain theory allows mental events to be different in kind from the mechanistic events that underlie them; but does not lead one to argue that mental events cause brain events, or vice versa: they are different levels of explanation of the operation of the computational system. Here, some implications are developed. It is proposed that causality, at least as it applies to the brain, should satisfy three conditions. First, interventionist tests for causality must be satisfied. Second, the causally related events should be at the same level of explanation. Third, a temporal order condition must be satisfied, with a suitable time scale in the order of 10 ms (to exclude application to quantum physics; and a cause cannot follow an effect). Next, although it may be useful for different purposes to describe causality involving the mind and brain at the mental level, or at the brain level, it is argued that the brain level may sometimes be more accurate, for sometimes causal accounts at the mental level may arise from confabulation by the mentalee, whereas understanding exactly what computations have occurred in the brain that result in a choice or action will provide the correct causal account for why a choice or action was made. Next, it is argued that possible cases of "downward causation" can be accounted for by a within-levels-of-explanation account of causality. This computational neuroscience approach provides an opportunity to proceed beyond Cartesian dualism and physical reductionism in considering the relations between the mind and the brain.

INTRODUCTION

A neuroscience-based approach has recently been proposed for the relation between the mind and the brain (Rolls, 2021a). The proposal is that events at the sub-neuronal, neuronal, and neuronal network levels take place simultaneously to perform a computation that can be described at a high level as a mental state, with content about the world. It is argued that as the processes at the different levels of explanation take place at the same time, they are linked by a non-causal supervenient relationship: causality can best be described in brains as operating within but not between levels. This mind-brain theory allows mental events to be different in kind from the mechanistic events that underlie them; but does not lead one to argue that mental events cause brain events, or vice versa: they are different levels of explanation of the operation of the computational system. This approach may provide a way of thinking about brains and minds that is different from dualism and from reductive physicalism (Kim, 2011), and which is rooted in the computational processes that are fundamental to understanding brain and mental events, and that mean that the mental and mechanistic levels are linked by the computational process being performed.
Explanations at the different levels of operation may be useful in different ways (cf. Dennett, 1991). For example, if we wish to understand how arithmetic is performed in the brain, description at the mental level of the algorithm being computed will be useful. But if the brain operates to result in mental disorders, then understanding the mechanism at the neural processing level may be more useful, in for example the treatment of psychiatric disorders.

In terms of levels of explanation that apply to the brain and mental operations, a number of different levels of explanation can be identified (Rolls, 2021a). They include ion channels in neurons influenced by neurotransmitters released at the tens of thousands of synapses on each neuron, through which currents pass to influence the firing rate of individual neurons; neuronal biophysics, which influences how these currents are converted into firing rates; the firing rates of individual neurons; the computations performed by populations of neurons, often involving collective computations as in attractor networks and competitive networks; how the activity of populations of neurons is reflected by functional neuroimaging; through to behavioral and cognitive effects, including mental operations, verbal reports, and phenomenal consciousness (Rolls, 2016, 2020, 2021c). I regard these as different levels of explanation of the operation of a computational system such as the brain.

Some key points are developed further here. One is what the implications are for theories of causality. A second key point is which level of explanation may provide a more accurate account for the cause of a choice or action: the mental level, or the computational neuroscience level. A third key point is whether there are any cases in which it might be appropriate to provide a "downward causation" account, in which a higher level of the system causes effects at a lower level.

CAUSALITY

Intervention

The most widely considered approach to causality is an interventionist account (Woodward, 2005, 2015, 2020, 2021b; Craver and Bechtel, 2007; Kim, 2011). If one intervenes to remove a potential cause, and the putative effect no longer occurs, then that makes it more likely that the potential cause does cause the putative effect. [More formally, where X and Y are variables, X causes Y if there are some possible interventions that would change the value of X such that, if such an intervention were to occur, a regular change in the value of Y would occur (Woodward, 2020, 2021b).] So this is a necessary condition for causality. But I now argue that it is not a sufficient condition, at least in relation to mental and brain events.

Causality Operates Within a Level of Operation and Explanation, Not Between Levels

The argument follows from my approach to causality in minds and brains, that causality can best be considered as operating within a level of explanation, and not between levels. So a second condition that I argue needs to be satisfied for causality is that the cause and effect are within the same level of explanation. I made it relatively clear in my earlier exposition (Rolls, 2021a) that level here might refer to the mental level, for example a cause provided verbally by an individual for an action; or it might be at a computational level, for what might be computed by a population of neurons; or it might be at the single neuron level; or it might be at the level of transmitters influencing ion channels to make neurons fire more or less, etc. (Rolls, 2021a).
The bases for this argument, that causality operates within but not between the levels of operation and explanation of the system, are set out for both minds/brains and for computers by Rolls (2021a). The bases include the point that the processes that occur at the different levels can occur simultaneously (for example the mental and brain event, or the mathematical or logic operation performed by a computer and the current flow within its arithmetic logic unit), whereas causal processes can be understood to involve sequences of events in time, with the operations performed within a level. This point is important. If all I held was an interventionist account of causality, then I might find the conditions satisfied that a mental event might cause a brain event, and it would be difficult to exclude that in terms of possible interventions. But that would be incorrect, if one holds that causality should best be considered to operate within a level of explanation, and not between levels of explanation, as set out elsewhere (Rolls, 2021a). In brief, an interventionist account might not be able to reject the hypothesis that mental events cause brain events, for particular mental events will always and indissolubly be associated with brain events. The reason for this is that an interventionist account of causality might diagnose cases of causality that act across levels of explanation. The implication is that the interventionist account alone will not suffice as a criterion for causality, at least for operations in brains and computers. The criteria would have to include also a restriction to events at the same level of explanation.

Temporal Order

Temporal order may also be useful as a condition for whether causality applies. At its simplest, a cause cannot follow an effect, at least in the macro world that is considered here. In neuroscience (and this may be different from quantum physics), we think that when causes produce effects, a time delay is a useful indicator. Following this thinking, when one step of a process at one level of explanation moves to the next step in time, we can speak of causality that would meet the criteria for Granger causality, where one time series, including the time series being considered, can be used to predict what happens at the next step in time (Granger, 1969; Bressler and Seth, 2011; Ge et al., 2012). In relation to neuroscience, the timing of a set of events measured with an accuracy in the order of 10 ms, and for a sufficient period on either side of the causal event being tested, would suffice. This time scale, with very many time-steps of 10 ms on each side of the putative cause-effect relationship, should be adequate, in that the time-scale of computation in the brain is in the order of 10-15 ms, which is the time that it might take a pattern association network, a competitive network, or even an attractor network to perform its computation (Rolls, 2021c) (see below).

The implication of temporal order for levels of explanation and causality is that when we consider the relationship between processes described at different levels of explanation, such as the relation between a step in the hardware in a computer and a step in the software, then these processes may occur simultaneously, and be inextricably linked with each other, and just be different ways of describing the same process, so that temporal (Granger) causality does not apply to this relation between levels, but only within levels.
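To make the temporal-order (Granger) criterion concrete, the following is a minimal sketch in Python with NumPy (not from the original paper: the two time series, the 10 ms sampling, the single-step lag, and all coefficients are invented for illustration). It applies the logic described above: a series x is said to Granger-cause a series y if the past of x improves the prediction of y beyond what y's own past already provides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy time series, one sample per 10 ms; x drives y with a one-step (10 ms) lag.
T = 2_000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

lag = 1
Y = y[lag:]          # values to be predicted
y_past = y[:-lag]    # y's own past
x_past = x[:-lag]    # the putative cause's past

# Restricted model: predict y from its own past only.
A = np.column_stack([np.ones_like(y_past), y_past])
res_r = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]

# Full model: also include the past of x.
B = np.column_stack([np.ones_like(y_past), y_past, x_past])
res_f = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]

# x "Granger-causes" y if adding x's past clearly reduces the prediction error.
print(f"residual variance, y past only : {res_r.var():.3f}")
print(f"residual variance, + x past    : {res_f.var():.3f}")
```

On such synthetic data the residual variance drops sharply when the past of x is included, which is the within-level temporal signature of causality discussed above; a formal test would compare the two residual variances with an F-statistic. Applied across levels, the two descriptions of the same process would be simultaneous rather than lagged, so no such predictive gain across time would be diagnosable.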
The whole processing can then be specified from the mechanistic level of neuronal firings, etc., up through the computational level to the cognitive and behavioral level, as described elsewhere (Rolls, 2021a,c). The thrust of this argument is that temporal order is also a useful criterion to identify causality, at least at the macro level of events in the mind and the brain; and in computers.

Criteria for Causality

These points lead to my proposal for conditions that need to be tested for and satisfied to assess whether causality applies in a particular case, as follows:

1. Interventionist tests need to be satisfied. Interventionist tests provide conditions that need to be satisfied for causality, but they are not sufficient conditions for causality to be identified.
2. The events should be at the same level of explanation. Further details are described elsewhere (Rolls, 2021a).
3. Temporal order needs to be satisfied, as set out above. Details about how this applies in the brain are provided elsewhere (Rolls, 2021a).

Criterion (1), interventionism, follows Woodward (2005, 2015), and is what I would describe as a way of testing whether causality can be excluded in a particular case, rather than a substantive account of causality. Criterion (2), that causality operates within but not between levels of explanation, moves beyond a purely interventionist account of causality, and is a proposal that I made and elaborated in considering how causality operates within a multilevel system such as the mind and brain, and the software and hardware of a computer (Rolls, 2021a). Criterion (3), temporal order, also goes beyond a purely interventionist account, and is helpful partly because it helps to diagnose that processes at different levels of operation and explanation of at least a computational system may be occurring at the same time, and therefore should not be diagnosed as influencing each other causally. The relation between what is happening at the different levels of explanation is instead described as supervenient (or subvenient) (Rolls, 2021a).

Part of the aim of this paper is to make these proposed criteria for causality very explicit, in order to promote discussion of this approach to causality, as it may offer a useful way forward in helping to understand the relation between mental events and brain events, and for that matter between software events and hardware events in computers. My answer to the first key aim of this paper is that the theory of causality should be extended to include the three criteria listed above, and to go beyond purely interventionist approaches to diagnosing causality, at least for systems such as the brain and the mind, and for conventional digital computers.

WHICH LEVEL OF EXPLANATION MAY PROVIDE A MORE ACCURATE ACCOUNT FOR THE CAUSE OF A CHOICE OR ACTION: THE MENTAL LEVEL, OR THE COMPUTATIONAL NEUROSCIENCE LEVEL?

An appropriate level of description for the causes of events can be chosen in a levels-of-explanation account of the relation between the mind and the brain (Rolls, 2021a). Sometimes it may be the mental level, for example when we are explaining how we may have made progress with a problem such as the relation between the mind and the brain; and sometimes it may be the brain level, for example when we are considering which drug may be appropriate to treat a particular mental disorder. However, it is interesting to consider at which level of explanation causality may be most accurate.
It is well known, for example, that confabulation can occur, and the rational mind may fabricate an account for why a choice was made or an action was performed. Part of the reason for confabulation by the rational system may be to help it maintain a long-term autobiographical narrative about the person's self, and the need for the rational system to believe that it is in control, for otherwise it might stop trying (Rolls, 2012b). An example of confabulation is found in split-brain humans, who may say they prefer one house because it has some extras, or that there is no particular reason for their choice, when in fact they have been shown a picture to their nondominant hemisphere that the other house is on fire (Gazzaniga and LeDoux, 1978; Gazzaniga et al., 2019). Confabulation may happen frequently when the emotional brain contributes an input to a decision, and the rational brain confabulates an explanation for why the choice was made, because there are multiple routes to action (Rolls, 2014; Figure 1). In such cases, we can know about the real cause of the decision or action only by knowing which brain systems were involved in taking the decision, and how the computation was performed that led to the decision, rather than by relying on any verbal explanation from the rational system that may be provided for the decision, for that might be a confabulation. For emotion-related decisions, it is suggested that confabulation by the rational system may occur frequently (Rolls, 2014). But when the decisions are taken by the rational system, it is more likely to be able to provide a correct causal account of the steps in the decision-making process, because the report comes from the same neural system involved in the reasoning (Rolls, 2020).

In patients with brain damage, confabulation is of course well known. It is common in patients with memory problems due to damage to the ventromedial prefrontal cortex (Schneider and Koenigs, 2017), or to the hippocampal memory system, in for example Korsakoff's psychosis associated with alcoholism (Dalla Barba and Kopelman, 2017). Although there are a number of different possible factors that account for confabulation in patients with brain damage (Dalla Barba and Kopelman, 2017), part of the problem may be a weaker signal in the memory system than is usual, so that the patient has to make up a rational explanation (in the form of a confabulation) in order to maintain a consistent model of the self (Rolls, 2020). This account may also fit why confabulation can occur in healthy people when the emotional decision-making system in the brain makes a decision, because the rational system has only imperfect access to the emotional decision system when the rational system is called on to provide reports. My hypothesis is that whether the emotional or the rational decision-making system actually takes a decision on a particular trial is itself a noisy decision-making process (Rolls, 2011, 2016, 2020).

The overall implication of this consideration of "multiple routes to action" is that some levels of explanation may provide more accurate evidence about the causes of decisions and actions than others. The best way to understand the operation of a system may not necessarily be at the level at which a simple account can be provided and even verbally reported, in our example at the mental level.
To understand the mind more accurately, and to be able to compare different types of mind, it may be important to know exactly what computations are being performed in the brain, as set out previously (Rolls, 2021a). My answer to the second key question is thus that explanation of the causes of behavior and mental states at the mechanistic level of the operation of networks of neurons in the brain, and what they are computing, may provide a more accurate account for the cause of a choice or action than, for example, the report given by an individual at the mental level. Indeed, I argue that the best way of knowing about the properties of the system, including what it may be like to be the system, is to know exactly what computations are being performed in the system, rather than trying to make inferences about the system from tests such as the Turing test (Rolls, 2020, 2021a).

THE QUESTION OF DOWNWARD (OR UPWARD) CAUSATION

It has been argued that downward causation may apply in some circumstances (Woodward, 2020, 2021a), but there is important discussion about this (Craver and Bechtel, 2007).

Do Environmental Events Cause Changes in Gene Expression?

An example of possible downward causation that has been considered is that large-scale environmental events may causally affect gene expression (Woodward, 2021a,b). But let us consider this further, in the way suggested in my within-levels-of-explanation approach to causality. If say an increase in environmental temperature led to genetic changes, this could occur in two main ways. One is that random genetic variation might lead to changes that might increase the size of the ears, or panting (both good for losing heat), and these might increase reproductive success for individuals who did not die from the heat. A second is that the gene expression for certain genes that for example promoted sweating might be turned on by their sensitivity (whether direct or indirect) to body temperature. In both cases, the causal account can be at the level of mechanistic biology, which provides a complete causal account of how a change in the environment might affect genes. Stating that the environment affects the genes in this case may be thought of as stating that whatever interventionist tests have been performed do not exclude that there is a relationship between the environment and the genes, but I argue that we can understand that there is causality when we analyze the steps involved at the lower mechanistic level, when the operation of causality becomes clear. Thus any account of this in terms of "downward causation" may just be referring to a state in which strong correlations may be present between levels, but with the causal mechanisms involved best described at a different level, of how it is at the biological level that changes in genes can be produced by for example temperature sensed within the individual.

The Relation Between Neuronal Events and Mental States

Another example might be that excessive synaptic pruning, or reductions in synaptic transmission produced by lower NMDA receptor efficacy, may causally contribute to some of the cognitive and behavioral symptoms of schizophrenia (Rolls, 2021b). As these changes in synaptic transmission relate to the symptoms (which involve the whole person), should this be considered as a case of across-level causation (Woodward, 2021a,b)? (In this case, it would be upward causation, from synapses to cognitive symptoms.) Examples of this type were considered by Rolls (2021a).
The approach I take to such examples of relations between levels of explanation involving the brain, behavior, and mind is computational: mental events can supervene on brain events, and that implies correlations between mental events and brain events, but causality can best be understood as operating within a level of explanation. In the present case, the account is that reduced synaptic transmission (caused for example by high synaptic pruning or reduced NMDA receptor conductances) reduces the firing rates of populations of neurons, which destabilizes the attractor neuronal networks in the prefrontal cortex (Rolls, 2021b,c). Now these prefrontal cortex attractor networks are involved in maintaining items in short-term memory, and in holding on-line in short-term memory the top-down bias required to bias processing in some parts of the brain, thus providing a mechanism for top-down attention (Deco and Rolls, 2005a,b; Luo et al., 2013; Rolls, 2021c). The computational level of events in the brain thus provides a causal, computational, account of how these synaptic events alter behavior so that attention and short-term memory change. But the causal level is within-level in this approach, at the level of synapses, transmitters, receptors, and neuronal networks; and the behavioral changes occur at the same time, but are descriptions at a higher level of explanation. In such systems we can describe correlations between levels, or superveniences between levels of operation of the system, but the mechanistic, causal, computational account is best dealt with in this case at the brain level of explanation.

FIGURE 1 | Multiple routes to the initiation of actions and responses to rewarding and punishing stimuli. The inputs from different sensory systems to brain structures such as the orbitofrontal cortex and amygdala allow the orbitofrontal cortex and amygdala to evaluate the reward- or punishment-related value of incoming stimuli, or of remembered stimuli. One type of route is via the language systems of the brain, which allow explicit (verbalizable) decisions involving multistep syntactic planning to be implemented. The other types of route may be implicit, and include the anterior cingulate cortex for action-outcome, goal-dependent, learning (Rolls, 2019); and the striatum and rest of the basal ganglia for stimulus-response habits (Rolls, 2014, 2021c). Pallidum / SN: the globus pallidus and substantia nigra. Outputs for autonomic responses can also be produced using outputs from the orbitofrontal cortex and anterior cingulate cortex (some of which are routed via the ventral, visceral, part of the anterior insular cortex) and amygdala (Rolls, 2021c). [From Rolls (2021c).]

The Relation Between Higher Level Laws and Lower Level Computations in the Brain

Another possible case considered as "downward causation" in physics is when a higher level Law "causes" an effect at a lower level (Ellis, 2020). Let us take as an example the interaction between neurons in a population that falls into a low energy attractor basin (Hopfield, 1982; Amit et al., 1985; Amit, 1989). This happens to be a system that is highly relevant to understanding the operation of the cerebral cortex, as the most characteristic attribute of the cerebral cortex is the highly developed excitatory recurrent collateral local connections between pyramidal cells that enable local attractor networks to be implemented for short-term memory, long-term memory, top-down attention, decision-making, etc. (Rolls, 2016, 2021c).
If we have a set of non-linear neurons in a network with excitatory synapses of strength w_ij between each pair of neurons i and j, and the firing rate of each neuron is y, and this forms an attractor network in which the synaptic weights reflect the stored memory patterns (see Rolls, 2016, 2021c), then the energy of the whole neuronal population can be expressed (Hopfield, 1982; Treves, 1991; Treves and Rolls, 1991) as

$$E = -\frac{1}{2}\sum_{i,j} w_{ij}\,(y_i - \langle y\rangle)(y_j - \langle y\rangle) \qquad (1)$$

where <y> is the average firing rate of all the neurons. This can be understood as follows. If two neurons i and j both have high firing rates (or in physics the magnetic spins are pointing in the same direction) and are connected by a strong synaptic weight, then they will support each other, and this will contribute to stability. If one neuron i has a high firing rate and j has a low firing rate and they are connected by a strong synaptic weight, then each neuron will tend to change the other into its state, and this will contribute to instability. In the same situation, if the linking weight is weak, this will make little contribution to the stability. The sum of all such interactions will be high when the system has reached stability as a result of interactions between the neurons, and this high stability can be expressed as a low energy E by using a minus sign.

The interaction between the neurons (equivalent to spins in a physics model) can be analyzed at the population level (but not at the single neuron level) to show how the whole network can fall into an attractor state, and to show that the number of possible attractor states, for example the maximum number of different memory patterns, p_max, that can be stored and correctly retrieved, is approximately

$$p_{\max} \approx \frac{C^{RC}}{a\,\ln(1/a)}\,k \qquad (2)$$

where C^RC is the number of recurrent collateral connections onto each neuron, k is a scaling factor that depends weakly on the detailed structure of the rate distribution, on the connectivity pattern, etc., but is roughly in the order of 0.2-0.3 (Treves, 1991; Treves and Rolls, 1991), and a is the sparseness of the representation. [For binary neurons with either a high or a zero firing rate, the sparseness is the proportion of neurons with a high firing rate (Treves and Rolls, 1991; Rolls, 2021c).] For example, for C^RC = 12,000 associatively modifiable recurrent collateral synapses onto each neuron, and a = 0.02, p_max is calculated to be approximately 36,000.

One concept of causality that has been advanced for systems with different levels is that because a Law can be specified for a system, such as what is shown in Eqn (2), at a high level (the population of neurons level), then that Law or rule of operation formulated at the high level provides "downward causation" to the lower level, to in this case result in the number of stable attractor basins being limited to what is shown in Eqn (2) (Ellis, 2020). But that is not how I see the system as operating in terms of causality. The individual neurons at the lower level do not wait for a top-down signal from the population level to tell them what to do next. Instead, it just is a property of the whole system that the individual neurons at the lower level operate as neurons, each with a certain number of connections to the other neurons, and the result of the lower level interactions between the neurons is that only a certain number of stable states can be stored and correctly retrieved.
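As a worked illustration of Eqns (1) and (2), the following is a minimal Python/NumPy sketch (not from the original paper). It first evaluates the capacity estimate of Eqn (2) with the values quoted above (taking k = 0.235 within the stated 0.2-0.3 range is an assumption), and then lets a small binary attractor network fall into a stored state from a degraded cue; the network size and number of patterns are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Eqn (2): capacity estimate p_max ~ k * C_RC / (a * ln(1/a)) ---
C_RC = 12_000   # recurrent collateral synapses per neuron (value from the text)
a = 0.02        # sparseness of the representation (value from the text)
k = 0.235       # scaling factor; the text gives roughly 0.2-0.3 (assumed value)
p_max = k * C_RC / (a * np.log(1.0 / a))
print(f"p_max ~ {p_max:,.0f} patterns")   # ~36,000, as quoted in the text

# --- Eqn (1): a toy +1/-1 Hopfield attractor falling into a stored state ---
# For +1/-1 units the mean rate <y> is ~0, so Eqn (1) reduces to E = -1/2 y^T W y.
N, P = 200, 10                                  # illustrative network size and load
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns).astype(float) / N   # Hebbian storage of the P patterns
np.fill_diagonal(W, 0.0)                        # no self-connections

def energy(y):
    return -0.5 * y @ W @ y

y = patterns[0].copy()                          # degraded cue: flip 30% of the bits
flip = rng.choice(N, size=int(0.3 * N), replace=False)
y[flip] *= -1

for sweep in range(5):
    print(f"sweep {sweep}: E = {energy(y):8.2f}, "
          f"overlap with pattern 0 = {y @ patterns[0] / N:+.2f}")
    changed = False
    for i in rng.permutation(N):                # asynchronous updates: E never rises
        s = 1 if W[i] @ y >= 0 else -1
        if s != y[i]:
            y[i], changed = s, True
    if not changed:                             # a stable attractor state was reached
        break
```

The point of the sketch anticipates the argument that follows: recall emerges from the lower-level neuron-to-neuron interactions alone, with the energy of Eqn (1) falling as the state settles, and no step in the program consults the population-level Law of Eqn (2).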
To elucidate further, when we simulate such an attractor network in a computer, we set up for example neurons with threshold-linear activation functions, and modify the synaptic connections between the neurons to store the memory patterns, and then we let the system run (Rolls, 2012a, 2021c). We find that as we increase the number of memory patterns stored in the system, at some point, the critical capacity, the recalled memories become very poor, and the system no longer works as a memory system (Rolls, 2012a). But we do not include in the program that we write that the neuron-level implementation should check up to some higher level to find out if the number of patterns specified by the Law specifying the critical capacity has been exceeded, and if so to fall into a random neuronal firing (or spin) state. Nor is there a high-level part of the program that knows about Eqn (2) and checks if p is too high, and if so causes the lower level to fall into a random spin state (i.e., a random set of neurons firing). So the operation of the system is implemented only at the lower level, and that is where causality acts, by the firing of individual neurons influencing other neurons through the modified synaptic weights.

Now of course the operation of the system in terms of its storage capacity can be explained, and analyzed, at the higher level, where the interactions between the whole population of neurons can be understood, and specified as rules or Laws of the operation of the system. But that does not mean that the higher level rules or Laws that describe the operation of the whole system have to act down to the lower level to cause effects there at the low level, whether synchronously, or after a time delay. Thus I reject the concept (Ellis, 2020), at least in relation to the operation of the brain, that Laws that apply at a high level act by "downward causation" to control the operation of the system at a lower level. The high level Laws just express some properties of the system.

"Downward Causation," Confabulation, and Correlation

An implication of the treatment above of confabulation at an upper level of the system is rather relevant to the issue of possible downward causation. We should be wary (due to the possibility of confabulation), because a claimed example of downward causation may in fact be incorrect, for in the case of confabulation the mental thought that is expressed is not in fact in the causal chain at all of why a behavior or action may have occurred. Indeed, many examples of what might be claimed to be top-down causation may be because the concept at the high level is inadequately defined for it to be really testable as a cause. Take the example that the position in the status hierarchy might be considered to be the cause for altered gene expression which alters serotonin levels. Should we consider this to be a case of "downward causation," as suggested (Woodward, 2021b)? This is likely to reflect a general association or correlation. Position in a dominance hierarchy is likely to reflect the outcome of agonistic interactions such as fights, and we know that there is considerable individual variation in the sensitivity of the lateral orbitofrontal cortex, which decodes this non-reward, to not winning or losing (Xie et al., 2021). Moreover, the non-reward might lead to active behavior, perhaps initiating a fight, or to passive behavior, to opt out of trying (Rolls, 2014).
Which of these behaviors is chosen depends on impulsiveness, which is influenced by similar brain regions (Dalley and Robbins, 2017). And what happens to serotonin system gene expression is likely to depend causally on the exact chain of processing and computations, and can be understood at that level. So a putatively causal statement that "status hierarchy causes gene expression changes" (Woodward, 2021b) may reflect a general correlation, but there is no necessary relation, and this is not a very substantive form of causality. The attempt at a top-down causal explanation here seems to reflect instead a general correlation; and the causal factors involved can be described at the more mechanistic neural level, of the extent to which the lateral orbitofrontal cortex non-reward neurons are activated in an individual by losing or not winning (Thorpe et al., 1983; O'Doherty et al., 2001; Rolls et al., 2020; Xie et al., 2021), and by the personality of the individual such as impulsivity and sensitivity to punishment, which do at the neural systems level provide an account of the causal links in the chain that lead to how gene expression might be altered.

What Defines a Level of Operation/Explanation in the Brain and Mental Systems? A Computational Neuroscience Approach

It is useful to provide some guidance on what defines a level of operation/explanation, at least for what is being considered here, neural and mental systems. Different levels can be defined by, for example, matters of scale and numbers. Some examples follow.

One level is the neuron level. There are very many small ion channels in a neuron that, together with their arrangement on a neuron, influence whether the neuron will generate an action potential. Each neuron has one output stream of information, reflected by its action potentials, directed to perhaps 20,000 other neurons. Each neuron has perhaps 20,000 synaptic inputs from other neurons, which act on the ion channels to influence whether a neuron produces an action potential. I argue that this neuron level is one computational level of operation of the system, for what the neuron computes is reflected in its single output stream of information, its action potentials transmitted to 20,000 other neurons. This is the type of single neuron computational level of understanding that can be commonly applied in the mammalian brain (Rolls, 2021c). I include in this level the fact that it is a property of some ion channels that the currents that they pass depend on the voltage across the membrane, as for the N-methyl-D-aspartate receptor (NMDAr), which is important in learning (Rolls, 2021c). I also include in this level that, for the synaptic strengths to modify and be retained during learning, genes may need to be activated to help produce the chemicals needed to alter the structure and strength of the synapse (Kandel, 2001). It is essential to understand the operation at this level, in terms of the information conveyed by the train of action potentials from a single neuron, which can be 0.3 bits in even a short time period of 20-50 ms (Tovee and Rolls, 1995; Rolls et al., 1997b, 1999), but which is largely independent from even nearby neurons (up to tens of neurons), as shown by the evidence that the information rises linearly with the number of single neurons being recorded (Rolls et al., 1997a; Rolls and Treves, 2011; Rolls, 2021c).

A higher computational level is that of a population of neurons.
There are very many neurons in a population that influence how and what the population computes, with one example being the type of attractor network described above. In this, as shown in Eqn (1), coalitions of neurons linked by strong synapses and high firing rates can be formed and form a stable basin of attraction, and have the "emergent" property of completion of the whole memory from any part (Hopfield, 1982; Rolls, 2021c). These networks are typically localized to a small area of neocortex, to minimize the axonal connection length between the neurons that must interact in the same network. Typically there will be 100,000 excitatory neurons in such a local network, given approximately 10,000 synapses per neuron devoted to recurrent collateral connections, and a dilution of connectivity of about 0.1 (Rolls, 2021c). Other types of network include pattern association networks, and unsupervised competitive networks to learn new representations (Rolls, 2021c). In all cases, the computation can be understood at the network level, and not at the single neuron level (Rolls, 2021c). There is a characteristic time-scale of operation here too, in the order of 10-15 ms even for an attractor network, and determined primarily by the time constant of the excitatory AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptors that connect the excitatory neurons (Battaglia and Treves, 1998; Panzeri et al., 2001; Rolls, 2021c). These dynamics are made fast because the integrate-and-fire neurons have a low spontaneous firing rate, so that some neurons are always very close to threshold before the stimulus is applied, and start exchanging information through the trained synapses very rapidly. The dynamics of the operation of the system, while it falls into its attractor state, which is one of a limited number of possible stable memory states, occur continuously in time, and do not require the neurons to ask the next level up, at which the theory of the number of stable states can be analyzed (Hopfield, 1982; Treves, 1991; Treves and Rolls, 1991), whether the current stable state meets the criteria: the neuronal population just falls into one of its possible stable states based on interactions between the population of neurons. The fact that transmitters such as acetylcholine, with widespread effects, modulate the excitability of the whole population of neurons of course influences how stable the states are (Rolls and Deco, 2015b), but does not raise new issues about causality.

Another level of operation is that involved in solving a problem such as proving Pythagoras' theorem, or writing a paragraph of text. This is a typically serial computational operation that may require many populations of neurons (of the type just described) exchanging information with each other, with different steps to the argument, which together may take seconds or minutes, not the 10-15 ms for a single network to operate. Another example is the production of speech, which is a serial operation, and which might be implemented by a forward trajectory through a state space of different attractor networks, each representing a different part of speech (e.g., subject, verb, and object), and each attractor network connected with stronger forward than backward connections to the next network (Rolls and Deco, 2015a). Thus the spatial scale here is different, with many populations of neurons involved; and the timescale is different, with serial operations being performed.
Due to the almost random spiking times for a given mean firing rate of individual neurons, the population of neurons under this stochastic influence may sometimes jump to a new location in the high-dimensional space, and this is likely to be important in creativity (Liu et al., 2018; Sun et al., 2019; Rolls, 2021c). At this level of explanation, we can see how sets of networks could implement a multistep algorithm. At a higher level of explanation, we might specify the operation at an algorithmic level, for example the computational steps taken to prove Pythagoras' theorem, or the steps in the firing cycle of a combustion engine, or the stages in the life history of a dragonfly. This is the most useful level for analysis of whether the algorithm operates correctly, and to describe the algorithm to other individuals. And causality can be understood at this level, as progress with one step of the algorithm can enable the next step to occur. But the processes can also be understood as operating at the lower level of sets of neuronal networks in the brain, which reflect in their connections and operation what has been learned previously by interaction with the environment, and so can be constrained by what has been learned to implement the steps of the algorithm, with causality operating at that level of sets of neuronal populations, and with the learned constraints influencing what is computed, without the need for top-down causality in which what can be explained at a higher level causes things to change, after a small delay, at the lower level of sets of populations of neurons. At a higher level of explanation and operation of the system, it might be that when the neuronal networks are performing a particular type of computation, perhaps monitoring a multi-step chain of reasoning using higher order syntactic thoughts grounded in the world, it is a property of such a system that it feels like something to be having those higher order thoughts about oneself that are grounded in the world. That is the computational processor that I suggest becomes engaged when we report phenomenal consciousness. Part of the argument is that much global processing can take place without phenomenal consciousness, for example riding a bicycle for a time while thinking about something else (such as a theory of consciousness), so that a special type of computation appears to be involved when we have phenomenal feelings of consciousness (Rolls, 2004, 2008, 2011, 2020). So scale and number seem often to be useful in describing levels. They provide a way of defining a level that is independent of ideas about, for example, whether any one scale (or several scales) is complete (Ellis, 2020). For me, no one scale or level of explanation or operation suffices for a complete explanation, in that although causality operates within a level, understanding of how the system operates at different levels of operation may be useful. For example, understanding at the neuronal/pharmacological level may be useful for treatment, whereas understanding at the level of reasoning may be useful to understand Pythagoras' theorem. I consider that the whole world is a set of different levels of both operation and explanation, and they are linked by the ideas of supervenience and subvenience, or, better, "convenience" (see below), which are non-causal but different properties of the operation of the same system, understood and analyzed at different levels, with causality operating within each level, and not between levels.
A consequence of my approach is that causality can be described as operating simultaneously at each of several levels of operation or explanation, but this does not imply multiple causes: the operations at each level provide different ways of describing and analyzing the computational properties of what is a single system.

Summary

My response to the third key issue, possible cases of "downward causation" (from a higher level to a lower level), is that they can be accounted for, at least for mental vs brain levels of operation and explanation, by the approach to causality described here, in which causality operates within but not between levels. Moreover, the neural level is more substantive, for it enables the links in the causal chains that might lead to different effects to be followed across time, whereas events expressed in words at a higher mental level may be too imprecise to reflect more than correlations; and further, may reflect confabulation.

FIGURE 2 | Schematic representation of the relation between physical brain states (P1 and P2) and mental states (M1 and M2). Undirected edges indicate supervenience/subvenience relations, which apply upward and downward and are non-causal. The edges with an arrow indicate a causal relation.

IMPLICATIONS FOR DUALISM AND PHYSICAL REDUCTIONISM

Descartes took a dualist approach to the relation between the mind and the brain (Descartes, 1644), and that raised the problem of how the mind and brain relate to each other, which has been a problem in the philosophy of mind ever since. One solution has been to propose a reductive physicalism, in which it is argued that mental events can be reduced to brain events, with no differences in kind (see Kim, 2011; Carruthers, 2019). The approach that I have proposed is that the mental events [including phenomenal consciousness (Rolls, 2020)] can be different in kind from brain events, and that the mental events supervene computationally on brain events. How the computational levels relate to each other has been described with examples by Rolls (2020, 2021a). My approach proposes that there is a necessary relation between a lower level and an upper level of explanation/operation, with events at the neural level always (i.e., necessarily) being related to some mental event at the higher level; and vice versa. The correlation between the appropriate events at the neural level and at the mental level will be high. But this relation between the lower level and the higher level is not causal, because the events at the lower (neural) and higher (mental) level happen at the same time (Figure 2). Some philosophers use the term "supervenience" for how the high level relates to the lower level. However, the term "supervenience" may carry with it some implications for some philosophers. In this context, another term that I suggest for this is "convenience", which from the Latin means "coming together" (con-veniens). This term, "convenience", has the advantage that it could be applied to both supervenience and subvenience, and does not carry with it the implications of the term "supervenience" as it may be understood by some philosophers. My proposal is that, in at least a computational system such as the brain, the higher level, for example mental, events are what are implemented by the lower level, neural, events, but that this is not a causal relationship, because the events at the different levels happen at the same time; it is a "convenient" relationship.
This computational approach to the relation between mental and brain events may offer a solution to the problems of dualism and of reductive physicalism, with the relations summarized in Figure 2. Given this computational approach to the relation between the brain and the mind, the events at the mental level can be different in kind from those at the neural level. The mental events might include having thoughts about one's own syntactic thoughts, in order to correct one's lower order multistep planning and reasoning. If the reasoning and planning is grounded in the world, if it is for example about rewards and punishers that might have implications for life or death of the individual who can think ahead about its own future, then I suggest that one of the properties of the system may be phenomenal consciousness (Rolls, 2004, 2007a,b, 2008, 2011, 2012b, 2020). The thoughts at that mental level are an example of what I mean by differences in kind from a lower level of explanation, which in this case might be the level of the operation of neurons in the brain, or of populations of neurons to implement a particular computation. All of those firings and the closely related network operations (Rolls, 2021c) are, I suggest, different in kind from mental events, including that we feel conscious (Rolls, 2020).

SUMMARY AND CONCLUSION

In order to understand the relation between the mind and the brain, and whether mental events cause brain events, or vice versa, it is important to have a theory of causality that is useful in computational neuroscience. Here I have proposed an approach to causality, at least within computational neuroscience, that goes beyond interventionist tests to include also temporal order, and in which causality operates within levels of operation or explanation, and not between levels. Second, I have shown that although different levels of explanation for the operation of the system may be useful for different purposes, some levels of explanation may be more accurate than others. In particular, I propose that the mechanistic neural level may be more accurate and reliable than the mental level provided by verbal report of the causes for actions, because, for example, of confabulation, which can occur given that the brain contains multiple routes to produce behavior. It is in principle possible to know which of the multiple routes to action illustrated in Figure 1 was engaged for some behavior or decision, by measuring which system in the brain is active on a particular occasion (McClure et al., 2004; Rolls, 2021c). Further, I propose that the best way of knowing about the properties of the system, including what it may be like to be the system, is to know exactly what computations are being performed in the system, rather than trying to make inferences about the system from tests such as the Turing test. Third, I argue that the possible cases of "downward causation" (from a higher level to a lower level) that are discussed in the literature can be accounted for by the approach to causality described here, in which causality operates within but not between levels. Overall, these proposals offer a computational neuroscience-based approach to the problems raised by both dualism and reductive physicalism; and an approach to understanding causality in computational systems.
DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS The author confirms being the sole contributor of this work and has approved it for publication.
Geochemical insight during archaeological geophysical exploration through in situ X‐ray fluorescence spectrometry

Geophysical techniques are widely applied in archaeological exploration, providing rapid and non‐invasive site appraisal. Geochemical analyses contribute significantly in archaeometry, but conventional laboratory apparatus requires that samples are removed from their in situ context. Recent advances in field‐portable apparatus facilitate in situ geochemical analysis, and this apparatus is deployed in this paper alongside conventional geophysical analysis to characterize the archaeological prospectivity of a site. The target is subsurface debris at the crash site of a World War II Mosquito aircraft.

Here, in situ XRF spectrometry is applied as part of a conventional deployment of magnetic and electromagnetic (EM) methods to characterize a potential archaeological site, specifically the crash site of a World War II aircraft. The additional geochemical insight reduces the ambiguity in the interpretation of the geophysical data: geophysical anomalies are co-located with enriched concentrations of copper and zinc ions, associated with brass (copper-zinc) alloy in the aircraft's ammunition. The in situ data compare favourably to XRF and mass spectrometry applied under laboratory conditions, but the same survey locations show variability given the changing supply of chemical elements to the ground surface. In situ XRF spectrometry can offer a valuable complement to a campaign of exploratory field geophysics, but only under certain site conditions as considered in the discussion.

2 | X-RAY FLUORESCENCE (XRF) SPECTROSCOPY - FUNDAMENTAL THEORY

XRF spectroscopy determines the elemental composition of a sample material using high-energy, short-wavelength (X-ray) radiation (note: spectroscopy and spectrometry are distinct; the former is a technique, whereas the latter is the quantitative analysis of data). When bombarded with X-ray radiation, different elements can be identified by the characteristic 'fluorescent' energy that they emit (Weltje & Tjallingil, 2008). Although challenging to define, bespoke calibrations can be made (Quye-Sawyer et al., 2015; Scott et al., 2016) and allow the XRF data to be used as an absolute rather than relative indicator of composition (Środoń, Drits, McCarty, Hsieh, & Eberi, 2001). Laboratory XRF practice mitigates the effects of surface morphology by (destructively) grinding samples into a fine powder. Equivalent sample preparation is impractical for in situ XRF spectrometry, hence field-portable XRF instruments have faced scepticism in the geochemical community (Frahm, 2013). However, recent research (e.g. Schneider et al., 2016) has reported similar accuracy and precision between field- and laboratory-based observations. The instrument deployed here is a hand-held Bruker Tracer IV-SD spectrometer (Figure 1), an energy-dispersive instrument with a rhodium target. The detection of elements lighter than calcium can be challenging since these have a low 'fluorescence yield' (i.e. their energy emissions are weak; Krause, 1979; Berlin, 2011), but this is overcome here with the use of a silicon drift detector (Speakman, Little, Creel, Miller, & Inanez, 2011). Sensitivity is further improved by including a Bruker 3 V Vacuum Pump (Figure 1) to inhibit the attenuation of fluorescent energy by air in the spectrometer's analysis chamber.
The presence of water also impedes XRF analysis, since water scatters the X-ray radiation; therefore, in situ XRF surveys may always be vulnerable to the presence of groundwater (e.g. Tjallingil, Röhl, Kölling, & Bickert, 2007), especially for low-yield elements. The sample area (spot size) of an XRF measurement is typically 1 cm in diameter. However, the depth penetration of XRF energy in soil is on the millimetre-to-centimetre scale, hence in situ XRF measures only the surface chemistry of host soil. While it may be detectable with geophysical methods, a target would therefore be invisible to XRF sampling unless the ground surface is enriched in relevant marker elements via some source-to-surface transport mechanism (e.g. ploughing, groundwater circulation; Hedges & Millard, 1995; Campana, 2009). Even then, such transport may not only be in a vertical direction, hence the strongest concentrations of ions may not be observed directly above the source. As such, in situ XRF prospection will probably always benefit from the constraint provided by conventional geophysical survey.

FIGURE 1 A Bruker Tracer IV-SD hand-held XRF spectrometer, deployed at Nuthampstead airfield (August 2015). Here, the Bruker spectrometer is held in the operator's right hand, and the 3 V Vacuum Pump in their left.

The Mosquito crashed in the grounds of Nuthampstead shortly after its take-off from RAF Hunsdon (also in Hertfordshire). Records suggest that the port engine detached from the aircraft, causing it to invert and impact the ground at a near-vertical angle. The crash caused an intense fire, and claimed the lives of the two crewmen (members of 487 Squadron Royal New Zealand Air Force). Their bodies were recovered from the site, along with some wreckage, but it is doubtful that all debris was cleared from the site and some components (including armaments and the starboard engine) may remain present today.

FIELD SURVEY

The airfield has been extensively ploughed, but runways still remain and evidence of military infrastructure is present as cropmarks. The likely crash site has been identified by Nuthampstead Airfield Museum using contemporary photographs of the impact. Before describing these surveys in more detail, the detectability of the Mosquito aircraft is considered; first by geophysical survey, then through geochemical analysis.

Geophysical detection of the target

The wingspan of a Mark VI Mosquito is 16.5 m, and it is 12.5 m nose-to-tail. In horizontal flight, the tip of its fin and rudder is 3.8 m above the base of its belly (Figure 4). The speed and steep angle of impact into soft clay soil suggests that any remaining components of the Mosquito could be buried several metres beneath the surface, although evidence for the potential depth is very sparsely reported. Most surveys for aircraft wreckage can exploit the presence of aluminium and/or steel in the ground (i.e. relying on contrasts in electrical and/or magnetic properties; e.g. Osgood, 2014), but the Mosquito was one of the few World War II aircraft to be made chiefly of wood. Aluminium is only used in the rudder and elevator and, at this site, the steel engine and armaments may not be present. Therefore, in addition to any remaining aircraft components, it was assumed that magnetic anomalies could also arise from the thermoremanent magnetization of clay soil burnt in the impact fire.

Geochemical detection of the target

With little precedent for similar XRF practice, it was initially unclear which elements could diagnose the crash site.
While aluminium enrichment might ordinarily be consistent with buried aircraft wreckage, this is unlikely to be significant for the wooden Mosquito. Additionally, any small aluminium anomaly may be masked by the high background aluminium content in Nuthampstead's clay soil and, furthermore, attenuated by groundwater. To identify alternative geochemical targets, the XRF characteristics of surface debris from the putative crash site were considered, including:

1. brass ammunition cartridges: British cartridge brass from the World War II period, used in .303 ammunition, is an alloy of 70% copper and 30% zinc, occasionally containing small quantities of lead (Pb). Cartridges may also have a jacket of cupronickel alloy. None of the cartridges recovered show signs of melting (the melting point of most brass alloys exceeds 900°C), but all had exploded.
2. cannon rounds: this ammunition is made principally from steel, possibly alloyed with a nickel-chromium-molybdenum (Ni-Cr-Mo) blend. British aircraft carried several variants: armour-piercing ammunition may be tipped with a tungsten (W) carbide alloy, whereas explosive and incendiary variants have TNT and phosphorus (P) cores, respectively.
3. burnt wood: although dominated by light elements (e.g. carbon, oxygen), traces of heavier elements, such as lead, could be present in any paint residue.

In addition to these fragments, a sample of burnt soil was tested to monitor any chemical alteration caused by the impact fire. Figure 5 shows the concentrations of elements in the debris fragments, expressed in parts per million (on a log scale due to the variability between elements). All XRF analyses use a 'trace mudrock' calibration for which the spectrometer operates at 40 kV. This manufacturer-defined setting was the most appropriate for Nuthampstead's clay-rich soil, though this implies that the measured concentrations are relative rather than absolute indicators. Elements lighter than calcium, and those too scarce to be detected (e.g. molybdenum, tin, antimony), are absent from this plot. Each concentration is compared to a background value (orange bars, Figure 5). The brass sample (green, Figure 5) is dominated by copper, with concentration exceeding 10^5 ppm. A high zinc fraction is also recorded (~80,000 ppm), with arsenic (As) and nickel also increased in abundance. The steel sample is iron-enriched, although with a surprisingly low concentration of ~250,000 ppm. The low value could again indicate a calibration issue, or non-ideal conditions of the sample surface caused by corrosion (Dungworth, 1997; Scott et al., 2016). Lead is somewhat enriched in both metallic samples, but in very low concentrations which may approach the limit of instrumental sensitivity. The burnt wood sample is generally depleted in metallic elements, although no element is obviously enriched against the background trend. The burnt soil samples show little significant alteration with respect to background. Despite the vulnerability to calibration effects, any geochemical anomaly presented by the Mosquito would likely be in elements associated with brass, specifically copper and zinc. In addition to ammunition, the Mosquito was held together with ~50,000 brass screws, therefore brass may be highly abundant in the ground. While iron could also have been an attractive target, the concentrations of copper and zinc are more significant above the background geochemistry, and its associated variability, in our observations at Nuthampstead.
Soil samples were also taken from each XRF survey position for laboratory validation. Laboratory XRF analysis was conducted with the Bruker spectrometer on soil samples that were kiln-dried for several days, at 60°C, then ground with a pestle and mortar. Selected samples (17 in total) were also analysed by inductively coupled plasma mass spectrometry (ICP-MS). ICP-MS is regarded as a more precise means of quantitatively measuring elemental composition than XRF (Pye & Croft, 2007), being less vulnerable to calibration issues, but requires more extensive preparation of samples. Aliquots of 100 mg of dried-and-ground soil were dissolved in 5 ml of hot Aqua Regia (37% hydrochloric acid and 68% nitric acid, in a molar ratio of 3:1) at 140°C for one hour. A dilution series of 1:100 was made in 2% nitric acid and analysed for elemental concentrations on an Agilent ICP-MS instrument. Quartz minerals can be resistant to dissolution in Aqua Regia, hence differences can exist between compositions evaluated through ICP-MS and XRF analysis of dissolved and undissolved samples. However, the samples in this experiment appeared to be completely dissolved in the Aqua Regia, therefore measurements with the two systems should be comparable. Additionally, for the elements considered in this study, comparisons were made of reported XRF versus Aqua Regia digestion ICP-MS measurements for standard soil samples: no significant differences between the two methods were observed for any element.

Correlations between element concentrations were classified using Spearman's rank correlation coefficient (r_s). Figure 8 shows the correlation between different elements, with symbols coloured according to their distance along the transect. The frames in each plot are coloured according to the strength of correlation: green defines a strong correlation (r_s > 0.65), red a moderate correlation (0.45 < r_s < 0.65) and black a weak correlation (r_s < 0.45), treated here as no correlation. For clarity, only correlations between copper, zinc and lead are shown (others are included in Supporting Information Figure S1).

Laboratory XRF and ICP-MS spectrometry

Concentrations determined through ICP-MS analysis (Figure 9b) are of the same order of magnitude as the equivalent XRF data, but differences in base-levels (evident for nickel, copper, zinc and arsenic) are evident. These are attributed to the inappropriate calibration of the XRF survey, implying that these in situ surveys should be considered relative rather than absolute indicators of concentration. Nonetheless, anomalies in copper and zinc remain well-defined, 60 m along the transect, but trends in arsenic and lead are inconsistent. A lead anomaly is distinct in the ICP-MS record, approaching 20 ppm above background. The XRF energies for arsenic and lead are very similar: 10.543 keV (Kα) for arsenic and 10.551 keV (Lα) for lead. Therefore, the spectral interference between these elements makes it challenging for XRF to distinguish between arsenic and lead, particularly at low concentrations. As such, the XRF anomaly in arsenic is likely a false positive. Lead is feasibly associated with the crash, since World War II aircraft were balanced using lead weights. It is worth noting that ICP-MS gives evidence of an aluminium anomaly. While the variability of the observed concentrations impedes its definition, aluminium concentrations appear consistently high 60-70 m along the transect, approaching 10,000 ppm (~5%) above background.
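As a sketch of how the correlation classification described above can be reproduced, the following snippet computes Spearman's r_s for two concentration series and applies the same thresholds (strong > 0.65, moderate 0.45-0.65, weak < 0.45). The data and function names are invented for illustration; they are not the survey values.

```python
import numpy as np
from scipy.stats import spearmanr

# Classify pairwise element correlations with the thresholds used above:
# strong r_s > 0.65, moderate 0.45 < r_s <= 0.65, otherwise weak/none.
def classify(r_s: float) -> str:
    a = abs(r_s)
    if a > 0.65:
        return "strong"
    if a > 0.45:
        return "moderate"
    return "weak/none"

# Hypothetical concentrations (ppm) along a transect.
cu = np.array([40, 42, 45, 120, 310, 280, 90, 48, 41])
zn = np.array([55, 60, 58, 150, 260, 240, 110, 66, 59])

r_s, p = spearmanr(cu, zn)
print(f"r_s = {r_s:.2f} (p = {p:.3f}) -> {classify(r_s)}")
```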
Geophysical and geochemical anomalies (Figures 6 and 7) are observed at the study site which appear consistent with an aircraft crash at this location. Specifically, these are a widespread magnetic anomaly and enriched concentrations of elements associated with brass alloy. The low-amplitude magnetic anomalies observed in both the Grad601 grid and the G-858 transect are interpreted as the response to the thermoremanence in burnt clay. Assuming a near-vertical impact, the area of this response is not inconsistent with the footprint of the Mosquito (~16 m × 4 m), which would have been affected by the impact fire. Additionally, the power spectrum of the G-858 response indicates that the magnetic source is located within 1 m of the ground surface, based on modelling the burnt layer as a thin layer with random magnetization. Spector and Grant (1970) show that for a vertically-extended random magnetic layer, the slope of linear sections of a power spectrum of log-power versus wavenumber (= 1/wavelength) is a factor of 4π times the source depth (see Figure 10). The higher amplitude magnetic anomalies (> ±100 nT/m) observed in the Grad601 grid could be responses from larger fragments of ferrous wreckage, but a further survey would be required to evaluate the size and/or depth of these potential targets. This interpretation is greatly strengthened by the XRF spectrometry. Co-located with the magnetic anomalies are local geochemical anomalies, particularly evident for elements (copper and zinc) associated with brass. Besides iron and aluminium, brass is the most significant metallic component of the fully-armed Mosquito aircraft. The geochemical evidence is particularly compelling since, in the absence of other information, the air crash is the most plausible means of introducing these elements into the ground at this location; by contrast, the burnt layer alone could be more simply explained by (for example) disposal at some point in the recent history of the site. The full suite of geophysical and geochemical observations is therefore consistent with an air crash at the site identified within Nuthampstead Airfield.

FIGURE 9 Laboratory validation of in-field XRF spectrometry data. (a) Laboratory analysis of handheld XRF following grinding of dried soil samples, again including a three-point median trend. (b) Concentrations as measured in ICP-MS analysis (including for aluminium, absent in previous XRF analysis). The dashed black line in these plots is the median average value for each element; error bars in ICP-MS analysis are smaller than the symbol.

FIGURE 10 Power spectrum of magnetic field strength, recorded by the upper sensor of the G-858 gradiometer. Linear section i (fit to blue data) expresses a gradient of −24.7 m, corresponding to a depth of 0.8 m for the associated causative body. Linear sections ii (fit to red data) and iii (fit to grey data) are assumed, respectively, to correspond to elevation variations of the sensor and ambient noise.

Efficacy of in situ XRF surveying

To use in situ XRF surveying as an archaeological exploration tool, some mechanism must exist to transport 'exotic' (i.e. absent in the background) geochemical elements from their buried source to the ground surface. No metallic fragments were observed in the laboratory-powdered soil samples, suggesting that elements at the site are transported in groundwater rather than being present in shards of metallic debris.
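The depth estimate quoted in the Figure 10 caption follows from fitting a line to the log-power spectrum. A minimal sketch of that bookkeeping is given below, on synthetic data; note that the numerical factor relating slope to depth depends on the spectral conventions adopted (power versus amplitude spectrum, cyclic versus angular wavenumber), so this illustrates the procedure rather than reproducing the published value.

```python
import numpy as np

# Depth estimate from the slope of log-power vs. cyclic wavenumber,
# following the Spector-and-Grant-type rule quoted above
# (slope = -4*pi*depth). Data below are synthetic and hypothetical.
k = np.linspace(0.02, 0.5, 25)                 # wavenumber, cycles/m
log_power = 3.0 - 4.0 * np.pi * 0.8 * k        # synthetic: 0.8 m source
log_power += np.random.default_rng(1).normal(0, 0.05, k.size)

slope, intercept = np.polyfit(k, log_power, 1)
depth = -slope / (4.0 * np.pi)
print(f"fitted slope = {slope:.1f} m, estimated depth = {depth:.2f} m")
```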
At Nuthampstead, ploughing appears to be an effective transport mechanism, and the time since ploughing appears to be a key control on the clarity of the XRF anomalies. The survey in November 2014 was conducted soon after a period of ploughing, potentially supplying the ground surface with a 'fresh charge' of metal-rich groundwater. Anomalies and their correlation coefficients were both reduced in the August 2015 dataset (e.g. Figure 8) compared to November 2014. Ordinarily, it might be expected that the drier ground conditions in summer would yield higher geochemical concentrations (e.g. Schneider et al., 2016) but, at the time of this acquisition, the ground had been undisturbed for several months. Metal ions could therefore have been flushed from the site by (for example) rainfall, or transported back into the subsurface. However, some ions must also remain adsorbed onto soil grains, otherwise XRF analysis of dry soil (including in the laboratory analyses) would have detected no geochemical anomaly at all. Given that the sample size of the XRF instrument is ~1 cm², it is unlikely that analyses are conducted at precisely the same location between different time periods; however, the changes in the XRF responses are not a shift in the position of the geochemical anomalies, but in the scatter and the correlation of geochemical concentrations. Separately from instrumental effects (e.g. calibration and sensitivity), the measured concentrations are therefore a function of:

a. the abundance of a given element in the source material,
b. the groundwater solubility and adsorption potential of that given element,
c. the efficiency of any source-to-surface transport mechanism.

Calibration issues are often unavoidable in archaeological XRF surveying (e.g. Scott et al., 2016). A non-specialist should therefore consider XRF spectrometry as a qualitative tool for 'anomaly spotting', rather than interpreting the absolute values of the recorded concentrations. Bespoke calibrations are recommended if absolute concentrations are required (for example) for comparative archaeometric purposes (Scott et al., 2016) or where forensic analysis may lead to litigation (Bergslien, 2013; Ruffell & Wiltshire, 2004; Sbarato & Sánchez, 2001). Validation with laboratory analysis is also advocated since XRF scattering effects are minimized in powdered samples; furthermore, such samples represent a homogenized volume of material, therefore the measurement is less susceptible to 'skin' anomalies. With respect to the efficiency of acquisition, in situ XRF spectrometry compares favourably with established geophysical methods. Not only is the cost of equipment similar to many geophysical systems, the rate of data return (40 samples/hour, here distributed across a 100 m transect) is comparable to (for example) surveying with electrical resistivity tomography. While XRF spectrometry would probably be impractical as an initial reconnaissance tool, it can contribute valuable insight to the understanding of a target once that target has been identified.

CONCLUSIONS

In situ XRF spectrometry provided a valuable geochemical complement to a suite of geophysical field acquisitions. Localized increases in the concentration of diagnostic metallic elements improved the detectability of the crash site of a World War II aircraft, adding confidence to the interpretation of a suite of geophysical data.
Specifically, increases in the local abundance of copper and zinc were identifiable as originating from brass ammunition cartridges among the aircraft wreckage. The applicability of in situ XRF at a given site requires not only that anomalous elements are present in detectable abundance, but that some source-to-surface transport mechanism (e.g. ploughing) is active. While in situ XRF responses should be validated under laboratory conditions, the portable XRF spectrometer offers a useful complement to a programme of field geophysical survey.
Susceptibility of Malaysian rice varieties to Fusarium fujikuroi and in vitro activity of Trichoderma harzianum as biocontrol agent

Aims: Bakanae disease of rice is widely distributed in all countries where rice is grown commercially, especially in Asian countries including Malaysia. As alternative measures for controlling Fusarium fujikuroi, two approaches have to be adopted, i.e. the use of resistant varieties and of biocontrol agents, as reported in the present study.

Methodology and results: A total of 31 Malaysian rice varieties were used in screening, and the results showed that variety MR211 was the most susceptible and MR220 was slightly susceptible. Out of 60 isolates of Trichoderma harzianum isolated from soils in Malaysia and tested against the pathogen under in vitro conditions, 13 isolates showed a high percentage of inhibition (PIRG > 60%). The PIRGs of all T. harzianum isolates were significantly different at p ≤ 0.05 from those of the control plates.

Conclusion, significance and impact of study: Biocontrol agents and resistant varieties are better alternatives for controlling plant diseases. We found that variety MR220 was slightly susceptible, but none of the tested varieties is resistant towards the pathogen of bakanae disease. T. harzianum has the ability to inhibit the growth of F. fujikuroi (T3068P) under in vitro conditions. The findings on the susceptibility/resistance of Malaysian varieties and on potential T. harzianum isolates as biocontrol agents of bakanae are important for future tests in the plant house and field trials.

INTRODUCTION

Bakanae of rice, caused by Fusarium fujikuroi Nirenberg, was first described in Japan. The disease is also known as foot rot or elongation disease and is widely distributed in all rice-growing areas, especially in Asia. In Malaysia, it was seriously observed in 1985 during the second rice-planting season (Saad, 1986). If the disease breaks out due to a lack of prevention through early detection, it will threaten worldwide food resources, especially for the majority of Asians.

Bakanae is a seedborne disease; thus, sowing seeds by direct casting into infested soil provides the first infection of the seedlings. The infected seedlings usually die at an advanced stage of infection. Some infected seedlings could also be stunted and chlorotic. The classic and most conspicuous symptom of the bakanae disease is abnormal elongation; this symptom can be seen from a distance in fields and seedbeds. The affected plants may be several inches taller than normal plants, thin, yellowish green, and may produce adventitious roots at the lower nodes of the culms (Nur Ain Izzati et al., 2008a). The pathogen has the ability to produce the growth hormone gibberellin, which is responsible for the abnormal elongation (Nur Ain Izzati et al., 2008b). The diseased plants bear few tillers, and the leaves dry up quickly. The affected tillers usually die before reaching the maturity stage (Karov et al., 2009).
In Malaysia, there is a lack of information on Malaysian rice varieties that are resistant to bakanae disease. Therefore, this study was conducted to distinguish the resistance of rice varieties against the disease. The indiscriminate use of chemical fungicides to control the rice bakanae pathogen may lead to the appearance of new resistant strains and increase the toxicological load on the environment (Hajieghrari et al., 2008). Therefore, biocontrol agents are useful as an alternative method besides fungicide use in nurseries and rice fields. Several organisms, such as species of Trichoderma, Bacillus and Streptomyces, have been demonstrated to be effective as biocontrol agents.

Several isolates of Trichoderma have been developed as biocontrol agents against fungal plant pathogens (Howell, 2003; Harman, 2006; De Souza et al., 2008; Suhaida and Nur Ain Izzati, 2013). In addition, Howell (2003) reported their use as biocontrol agents against several plant diseases in commercial agriculture, and De Souza et al. (2008) stated that Trichoderma species are mycoparasites that have proven effective as biocontrol agents against a range of important plant pathogens.

Trichoderma species are filamentous fungi that are highly interactive in root, soil and foliar environments. They can produce a wide range of antibiotic substances (Sivasithamparam and Ghisalberti, 1998) and they parasitize other fungi (De Souza et al., 2008). Trichoderma can easily detect other fungi and grow rapidly towards them. Trichoderma species have the ability to compete with soil microorganisms, for space and nutrients in particular. Trichoderma species also have the ability to inhibit soilborne pathogens and grow in association with plant roots (Harman, 2000). These mycoparasites secrete enzymes that degrade the cell walls of fungal pathogens and subsequently function as elicitors of plant protection mechanisms (Kubicek et al., 2001; Woo et al., 2002). In this study, the biological potential of some Trichoderma isolates was evaluated against F. fujikuroi under in vitro conditions.

Pathogen and rice varieties

In a previous study, the pathogenicity test was conducted using the rice variety MR211 (Nur Ain Izzati et al., 2008a). The results showed that, among 34 isolates of five species of Fusarium, the F. fujikuroi isolate T3068P was highly virulent to rice and caused bakanae disease (Nur Ain Izzati et al., 2008a). Thirty-one varieties of rice obtained from the Malaysian Agricultural Research and Development Institute (MARDI), Seberang Perai, Pulau Pinang were used to screen for varieties resistant to F. fujikuroi.

Conidial suspension and artificial inoculum

Isolate T3068P was cultured on potato dextrose agar (PDA) and incubated for 7 days. The plates were flooded with sterile water and the conidial suspensions were pooled before being adjusted to 1×10^6 conidia/mL. The seeds were soaked in 50 mL of spore suspension for 12 h, but the control (non-inoculated) seeds were soaked in 50 mL of sterile water. Inoculated and control seeds were sown on rice field soils in plastic trays (38×28×10 cm). Fifteen seeds were planted in each tray, in triplicate, and arranged in a complete randomized design (CRD) in the plant house at the University Agricultural Park, Universiti Putra Malaysia. The seedlings were irrigated daily with tap water and the fertilizer 15N:15P:15K was given every 10 days.
Development of symptoms and Disease Severity Index (DSI)

The height of the seedlings and the external disease symptoms were observed continuously based on the disease scale of 0 to 4, as shown in Table 1. The Disease Severity Index (DSI) was calculated as follows:

DSI = [Σ (number of plants at a specific scale × disease scale)] / (total number of plants observed)

Isolation of Trichoderma species

The Trichoderma isolates used in all trials were obtained from soil samples in Selangor and Terengganu, Malaysia. Isolation of the fungi was done using the soil dilution-plating technique following Nur Ain Izzati and Faridah (2008). One mL of the 10^-3, 10^-4 and 10^-5 soil dilutions was spread on Rose Bengal Agar (RBA) and incubated for 7 days at room temperature (28±1 °C). The colonies of Trichoderma species were subcultured and single-spored on PDA to obtain pure isolates. All isolates were identified based on morphological characteristics according to Harman and Kubicek (1998) and Rahman et al. (2011).

Challenging T. harzianum isolates against F. fujikuroi isolate T3068P

Six-mm-diameter colony plugs of the T. harzianum isolates and T3068P were placed 5 cm apart on PDA. Plates were incubated at room temperature for 5 days. The control plates were inoculated with T3068P without a T. harzianum isolate. The radial growth of both fungi was measured, and the percentage of inhibition growth rate (PIRG) was calculated using the following formula:

PIRG = [(R1 − R2)/R1] × 100%

where R1 is the diameter (cm) of the colony of F. fujikuroi on the control plate and R2 is the diameter (cm) of the F. fujikuroi colony on the antagonist-tested plate. All data were analyzed using the SPSS programme, version 17.0. The descriptive assessment of the antagonistic activity was classified following Soytong (1998).

Based on the screening study, none of the Malaysian varieties tested was resistant to bakanae disease, but MR220 was only slightly susceptible. Seedlings inoculated with conidial suspensions of isolate T3068P showed the typical symptoms of bakanae disease: abnormal growth; thin, yellowish-green plants; and adventitious roots produced at the lower nodes of the culms (Figure 1A-D). The DSI for all varieties increased from day 10 to 30 (Table 2). The DSI was significantly (p ≤ 0.05) different between days and varieties. The highest DSI was recorded for MR211 seedlings at day 40 (3.20), followed by MR27 (2.93) and MR123 (2.80). The lowest DSI was for MR220 (0.68), followed by MR219 (0.85) and MR185 (0.90). The DSIs of all varieties were significantly different at p ≤ 0.05 from the control seedlings (Figure 2). The most critical period for infection occurred in the first 3 days, during germination of the seeds. This is because of the secretion of amino acids and sugars that act as rich energy substrates for the effectively growing pathogen. Hence, the infected plants were taller than the control after 5 to 40 days, and the DSI was significantly (p ≤ 0.05) different between days. In Malaysia, there is a lack of information on screening rice varieties for resistance to bakanae disease; this study was therefore conducted. However, screening varieties for resistance to blast has been practiced for many decades. For example, in 1900 to 1910, the Japanese rice varieties Kameji and Aikoku were considered highly resistant to blast, while the Shinriki variety was very susceptible to blast. In this study, the rice variety only slightly susceptible to bakanae disease was MR220, and the most susceptible variety was MR211. MR220 is reported to be resistant to blast, bacterial leaf blight and tungro (MARDI, 2006).
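For reference, the two quantities defined above reduce to a few lines of code. The sketch below computes the DSI from counts of plants at each disease scale and the PIRG from control and dual-culture colony diameters; the example numbers are hypothetical, not the study's data.

```python
from typing import Dict

def dsi(counts_by_scale: Dict[int, int]) -> float:
    """Disease Severity Index: sum(scale * count) / total plants observed."""
    total = sum(counts_by_scale.values())
    return sum(scale * n for scale, n in counts_by_scale.items()) / total

def pirg(r1_control_cm: float, r2_dual_cm: float) -> float:
    """Percentage of inhibition growth rate, in percent."""
    return 100.0 * (r1_control_cm - r2_dual_cm) / r1_control_cm

# Hypothetical scoring of 15 seedlings on the 0-4 scale, and plate diameters.
print(f"DSI  = {dsi({0: 2, 1: 3, 2: 4, 3: 4, 4: 2}):.2f}")
print(f"PIRG = {pirg(8.5, 2.9):.2f}%")
```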
Many factors influence the development of bakanae disease. Other than the variety of rice as a host, the pathogen and environmental conditions such as temperature, wind, moisture, sunlight, nutrition, and soil quality have a major impact on the development and severity of the disease (Doohan, 2005). In order for disease to occur, a pathogen must be virulent toward, and compatible with, a specific host. The aggressiveness of a pathogen also influences disease severity (Doohan, 2005). Fusarium fujikuroi isolate T3068P has been proven to be highly virulent and to cause bakanae disease (Nur Ain Izzati et al., 2008a). Fungal genes encode proteins that make the fungus specific and virulent towards a particular host, and similarly the host has genes that make it susceptible or resistant to the pathogen. Incompatible interactions result in no disease development, and the plant will not be infected (Table 3). Only the avirulent-resistant (AR) interaction results in resistance to the pathogen; the other combinations induce disease development in all cases (Doohan, 2005).

Screening of biocontrol agents under dual culture

The antagonistic activity of the 60 isolates of T. harzianum obtained from soil at various locations was recorded after day 5 of incubation (Table 4). The T. harzianum isolated from Semenyih (isolate T31) demonstrated the highest response, with a PIRG of 65.51% (Figure 3), followed by isolates T07, T08, T11, T16, T24, T28, T36, T38, T40, T47, T48 and T55, with PIRGs in the range of 62.14-65.30%. The antagonistic activities of all T. harzianum isolates (42.52-65.51%) were significantly different from those of the control plates (0%). T. harzianum was previously reported as an influential biocontrol agent against seedborne pathogens. Studies of the efficacy of Trichoderma spp. as fungal biocontrol agents (Harman, 2000) are well known. However, no report has documented T. harzianum as a biocontrol agent of bakanae disease. Trichoderma spp. have been intensively studied as potential biocontrol agents because of their ability to reduce the incidence of disease caused by plant pathogenic fungi, particularly common soilborne pathogens (Howell, 2003; Nur Ain Izzati and Faridah, 2008). Different isolates of Trichoderma have different optimum temperatures for growth and different strategies to inhibit the growth of the pathogen (Kredics et al., 2003; Hajieghrari et al., 2008). In the present study, under in vitro conditions, all isolates of T. harzianum from soil had the ability to inhibit the growth of F. fujikuroi (T3068P), the pathogen of bakanae disease, on PDA plates, with PIRGs between 42.52% and 65.51%. However, isolates are only considered promising antagonists when the PIRG exceeds 60% (Noveriza and Quimio, 2004). In this study, out of the 60 isolates of T. harzianum, 13 isolates were shown to be promising antagonists (Table 4). Noveriza and Quimio (2004) reported that effective antagonists of T. harzianum can grow at a very fast rate, outpacing the growth of the pathogen (F. fujikuroi) and covering the entire medium surface after 5 days of incubation.
The biocontrol mechanisms by which Trichoderma can inhibit fungal pathogens include mycoparasitism (Howell, 2003) and antibiosis (Sivasithamparam and Ghisalberti, 1998). The process of mycoparasitism consists of recognition of the host, attack, and subsequent penetration and killing. During this process, Trichoderma secretes a suite of cell-wall-degrading enzymes that hydrolyze the cell wall of the host fungus, releasing oligomers from the pathogen cell wall (Kubicek et al., 2001; Howell, 2003; Woo et al., 2006). It is believed that Trichoderma can detect the presence of another fungus by secreting hydrolytic enzymes at a constitutive level and sensing the molecules released from the host by enzymatic degradation (Harman et al., 2004; Woo and Lorito, 2007).

The interaction between Trichoderma and a fungal pathogen shows that the inhibition of growth of the pathogen occurs through competition for carbon, nitrogen and other growth factors, together with competition for space (Noveriza and Quimio, 2004). The presence of different carbon sources, such as mono- or polysaccharides, colloidal chitin, or fungal tissues, can encourage the secretion of cell-wall-degrading enzymes (Mach et al., 1999). Hyakumachi (2000) reported that Trichoderma isolates can give a stable and obvious suppressive effect against different soilborne pathogens compared to Penicillium spp., Mucor sp., and Fusarium equiseti isolates.

CONCLUSION

We can therefore conclude that two alternative measures can be applied for controlling F. fujikuroi, the pathogen of bakanae disease: the use of resistant varieties and of biocontrol agents. Among the 31 Malaysian rice varieties, MR220 was found to be only slightly susceptible to the disease; however, none of the tested varieties is resistant. Out of the 60 isolates of T. harzianum tested, 13 isolates showed a high percentage of inhibition, and these can be used as biocontrol agents in future trials.

Figure 1: Typical symptoms of bakanae disease. A, normal and healthy plants (n) and infected seedlings showing abnormal elongation (i); B, infected seedling, showing abnormal elongation, thin and yellowish leaves (arrow); C, pinkish fungal mass above water level on a dried-up seedling (arrow); D, infected tiller that produced wiry (stiff) adventitious roots (arrow).

Figure 2: Comparison of DSI values of inoculated and control rice varieties at day 40 after inoculation.

Figure 3: Percentage of inhibition growth rate (PIRG). A, T31 showed the highest antagonistic activity, with the highest PIRG of 65.51%; B, T50 showed the lowest antagonistic activity, with a PIRG of 42.52%.

Table 1: Disease scale and disease symptoms for seedling scoring (adapted from Nur Ain Izzati et al., 2008a).

Table 2: Disease Severity Index (DSI) of inoculated and control rice seedlings at different days after inoculation (sowing).

Table 4: The percentage of inhibition growth rate (PIRG) and antagonistic activity of T. harzianum isolates.
Convolution Quadrature for the quasilinear subdiffusion equation

We construct a Convolution Quadrature (CQ) scheme for the quasilinear subdiffusion equation and supply it with a fast and oblivious implementation. In particular, we find a condition for the CQ to be admissible and discretize the spatial part of the equation with the Finite Element Method. We prove the unconditional stability and convergence of the scheme and find a bound on the error. As a passing result, we also obtain a discrete Gronwall inequality for the CQ, which is a crucial ingredient of our convergence proof based on the energy method. The paper is concluded with numerical examples verifying convergence and the computation time reduction obtained when using the fast and oblivious quadrature.

Introduction

Consider the following quasilinear subdiffusion equation with vanishing Dirichlet condition on a smooth domain Ω ⊆ R^d:

∂_t^α u = ∇·(D(x, t, u)∇u) + f(x, t, u),  x ∈ Ω, t > 0, α ∈ (0, 1),
u(x, 0) = 0,  x ∈ Ω,
u(x, t) = 0,  x ∈ ∂Ω,    (1)

where ∂_t^α is the partial Caputo time derivative defined with the help of the fractional integral,

∂_t^α u = I_t^{1−α} ∂_t u,  with  (I_t^β v)(t) = (1/Γ(β)) ∫_0^t (t − s)^{β−1} v(s) ds,  β > 0.    (2)

Note that the vanishing of the initial condition in (1) can be assumed without any loss of generality. Our assumptions on the regularity of the coefficients are as follows. Let D ∈ C(Ω × R_+ × R) be bounded and uniformly positive, and let f be Lipschitz continuous in its last argument; this guarantees well-posedness of the problem. However, even with weaker conditions, it has been proven in [49] that (1) has a unique strong solution. More specifically, for a C²-smooth domain Ω and with p > d + 2/α we have u ∈ W^{α,p}([0, T]; L^p(Ω)) ∩ L^p([0, T]; W^{2,p}(Ω)). Additional solvability results can be found in [1]. Furthermore, large-time decay estimates have been established in [46, 10].

On the other hand, viscosity solutions to (1) have been studied in [45]. In particular, it has been shown that |u(x, t)| ≤ C t^α for t ∈ [0, T], suggesting α-Hölder continuity at the time origin. Some additional results, also valid in the degenerate case when the diffusivity can vanish, were proved in the weak setting in [1, 48, 4]. The semilinear constant-coefficient case, that is when D = const. and f = f(x, t, u), has been investigated, for example, in [2], where α-Hölder continuity of the solution was established under sufficient regularity conditions on the source. The linear case is very well understood for constant and x-dependent diffusivity. Details on the solution can be found in [40, 21]. The most spectacular difference between a solution of classical diffusion and its slower, subdiffusive version is the amount of smoothing of the initial data. To be precise, it is known that for the linear PDE with D = const. and f ∈ C^{m−1}([0, T]; L²(Ω)) with I_t^α(∥∂_t^{(m)} f(t)∥) < ∞, the solution satisfies the derivative bounds of [15] (Theorem 2.1 (iii)); there, the superscript denotes the time derivative of the solution regarded as a mapping from [0, T] into the L²(Ω) space. This means that even for very smooth f, the solution can still be of limited regularity at t = 0 unless sufficiently many time derivatives of f initially vanish. It is also known from [2] that the solution to the semilinear equation with an initial condition u_0 ∈ H^ν with ν ∈ (0, 2] satisfies regularity estimates showing that, although the solution is continuous on [0, T], it has a singular time derivative, and we have to expect that for our quasilinear problem the situation can be at most as regular as above. However, in what follows we will assume a much more relaxed regularity requirement: a weighted Lipschitz condition in time. This means that the function is Lipschitz continuous far from the origin,
while the regularity deteriorates near it. This modulus of continuity captures the typical and realistic behavior of solutions to the subdiffusion equation [41]. Note, however, that in contrast to (6), in this paper we do not assume the existence of higher-order derivatives.

The governing equation (1) arises in many areas of science as a model of subdiffusive phenomena. Roughly speaking, subdiffusion denotes a slower than usual random motion of a collection of particles, in contrast to classical diffusion and the faster evolution known as superdiffusion. To be more precise, if we consider a randomly moving particle with a mean-squared displacement proportional to t^α, then we say that it is classically dispersing when α = 1. Sub- and superdiffusive evolution occurs when 0 < α < 1 and 1 < α < 2, respectively [35, 18]. Another important application of (1) arises in hydrology when considering moisture percolation inside a porous medium [12, 22]. A derivation of our governing equation in the hydrological setting has been given in [37], where it has been shown that the slower than classical evolution can be a consequence of the fluid being trapped in some regions of the porous medium. This can be the result of nonhomogeneity or of chemical reactions taking place in the domain [12]. Note that it has been observed that the evolution of moisture inside a porous medium necessarily has to be described by a nonlinear equation, since the diffusivity can change by orders of magnitude when the pores are filled with water [8]. Other important applications of the subdiffusion equation can be found, for example, in biology [43], finance [34], and chemistry [47], to name only a few.

Fractional differential equations are being extensively investigated both analytically and numerically. In order not to go too far in reviewing the previous results, we will focus only on numerical methods for the subdiffusion equation in its various forms. Several numerical schemes have been devised to study (1). The L1 scheme was applied to it in [39], along with convergence proofs. An interesting account of an even more general problem - including a stochastic term - has been investigated numerically in [28]. According to our knowledge, this is just the beginning of rigorous numerical analysis of the quasilinear subdiffusion equation, and several authors are making progress in this field. For time-fractional parabolic PDEs of a simpler form, one can also find many interesting results. For example, a semilinear equation with constant diffusivity has been discretized with the backward Euler scheme in time and FEM in space in [2], and with higher order convolution quadratures in [23]. In these papers, the authors allowed for nonsmooth initial data, which is a realistic and more difficult case. The numerical analysis of this problem was later expanded to include diffusivity variable in space and time, with a linear source, in [17, 36]. Lately, optimal-order estimates for the semidiscrete Galerkin numerical method, with nonsmooth data and a fully general semilinear subdiffusion equation, have been obtained in [38] under weak assumptions. Finally, we mention a few notable papers that introduced and analyzed various numerical methods for purely linear equations with the Caputo time derivative. In [14], two fully discrete schemes based on modified convolution quadrature have been developed and have been shown to achieve the optimal order of convergence with respect to the smoothness of the initial data. The L1 method has been utilized, for example, in
[19, 26, 42], where optimal-order estimates have also been given, even for nonuniform grids.

We discretize (1) in space by applying the Finite Element Method with piecewise linear elements, and consider for the temporal approximation of the semidiscrete problem a semi-implicit scheme where the fractional derivative is approximated by Lubich's Convolution Quadrature (CQ) method [29]. We derive sufficient conditions for the CQ that guarantee the stability and convergence of the resulting scheme. As a side result, we are able to obtain a new version of Grönwall's inequality that is suitable for use in the context of admissible convolution quadratures. To the best of our knowledge, this is the first CQ approach to the discretization of the time-fractional quasilinear diffusion equation. A crucial point in our convergence proof is based on the aforementioned Grönwall's lemma and a new coercivity result for the CQ methods (other results concerning a different approach to CQ coercivity can be found in [6] and in the monograph [7], Section 2.6). Thanks to these, the quasilinear case can be analyzed via the energy method, as opposed to previous operator approaches that are not suitable in this case. Therefore, we are able to present a rigorous analysis of a nonlinear subdiffusion equation based on a CQ scheme that allows for a fast and oblivious implementation. Indeed, by applying the algorithm in [5], the memory requirements can be reduced from the O(N) required by a straightforward implementation of the CQ, see [30], to O(log(N)), where N is the total number of time steps, and the complexity can be reduced from O(N²) to O(N log(N)). Moreover, for BDF(p) quadratures, the order of convergence of our method remains optimal, having the same form as for linear subdiffusion equations. That is, the error in time behaves as t_n^{α−1} h, where h is the time step (for the Euler scheme we also observe a logarithmic factor). Away from t = 0 the method exhibits the first order of convergence, which deteriorates to order α when t → 0+. This behavior is typical for linear equations [9, 14], also for formulas of higher order [16], and it is visible in the L1 method discretization [19], too. We are able to extend this result to quasilinear equations with minimal regularity assumptions on the solutions.

The paper is organized as follows. In Section 2 we prove a coercivity result for the CQ discretization of the fractional derivative and a suitable version of Grönwall's lemma for the analysis of (1). In Section 3 we present our numerical scheme and provide complete error estimates in both time and space. Section 4 describes the implementation in time of our method and the application of the fast and oblivious algorithm from [5] in this setting. The numerical results confirming our theoretical results are shown in Section 5, together with some comparisons of complexity requirements with other schemes in the recent literature.

Properties of convolution quadratures for the Caputo derivative

We will start by discussing some properties of CQ that will be useful for the energy method. Fix the uniform time mesh 0 ≤ t_n := nh ≤ T, with step h > 0, and a function y(t). Provided that the initial condition vanishes, that is y(0) = 0, the CQ approximation of the Caputo derivative (2) is given by

∂_h^α y(t_n) = h^{−α} Σ_{j=0}^{n} w_{n−j} y(t_j),  with the CQ weights w_j given by  δ(ζ)^α = Σ_{j=0}^{∞} w_j ζ^j,    (8)

with δ(ζ) the symbol of the underlying ODE solver. For the implicit Euler-based CQ it is δ(ζ) = 1 − ζ.
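For the implicit Euler symbol, the weights can be generated by a simple recursion, and the quadrature can be checked against a function with a known Caputo derivative. The following is a minimal sketch (not the paper's code); all parameter values are illustrative.

```python
import numpy as np
from math import gamma

alpha, h, N = 0.5, 1e-3, 2000

# Implicit-Euler CQ weights: delta(zeta) = 1 - zeta, so w_j are the
# Taylor coefficients of (1 - zeta)^alpha, via the recursion
# w_0 = 1, w_j = w_{j-1} * (j - 1 - alpha) / j.
w = np.empty(N + 1)
w[0] = 1.0
for j in range(1, N + 1):
    w[j] = w[j - 1] * (j - 1 - alpha) / j

t = h * np.arange(N + 1)
y = t                                            # test function y(t) = t, y(0) = 0
cq = h**(-alpha) * np.convolve(w, y)[: N + 1]    # discrete Caputo derivative

exact = t**(1 - alpha) / gamma(2 - alpha)        # Caputo derivative of t
err = np.max(np.abs(cq[1:] - exact[1:]))
print(f"max error on (0, T]: {err:.2e}")
```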
CQ based on high-order Runge-Kutta methods are also available [31], but will not be considered in the present work. The notation in (8) for the CQ discretization extends in a straightforward way to vectors $v = (v_j)_{j=1}^{N}$, so that we will also denote by $\partial_h^\alpha v$ the vector with components given by $(\partial_h^\alpha v)_n = \sum_{j=0}^{n} w_{n-j} v_j$. In what follows we will always assume that the weights have the following signs:

(A) $w_0 > 0$ and $w_j < 0$ for $j \geq 1$.

This assumption is motivated by our subsequent results in this section. The simplest example of the above condition is the backward Euler scheme, for which we have δ(ζ) = 1 − ζ. From the binomial series we have the weights

$$w_j = h^{-\alpha} (-1)^j \binom{\alpha}{j} = -\frac{h^{-\alpha}\,\alpha}{j!}\,(1-\alpha)(2-\alpha)\cdots(j-1-\alpha), \quad j \geq 1,$$

where in the last equality we have taken the minus sign from all j factors, canceling the $(-1)^j$ term. The overall sign follows due to the fact that all parentheses are positive for α ∈ (0, 1). The BDF2 weights can also be written explicitly.

Proposition 1. Let $w_j$ given by (8) be the weights associated with the BDF2 formula. Then,

$$w_j = \left(\frac{3}{2h}\right)^{\alpha} (-1)^j \binom{\alpha}{j}\, {}_2F_1\!\left(-j, -\alpha;\; \alpha - j + 1;\; \frac{1}{3}\right),$$

where the hypergeometric function is defined by

$$_2F_1(a, b; c; z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!},$$

with the Pochhammer symbol $(a)_k = a(a+1)\cdots(a+k-1)$.

Proof. The symbol for BDF2 can be factored as $\delta(\zeta) = \frac{1}{2}(1-\zeta)(3-\zeta)$. Therefore, by the definition of the weights (8) we have

$$w_j = \left(\frac{3}{2h}\right)^{\alpha} \sum_{k=0}^{j} (-1)^k \binom{\alpha}{k} 3^{-k}\, (-1)^{j-k} \binom{\alpha}{j-k},$$

where we have used the Cauchy product formula for the Taylor series of $(1-\zeta)^{\alpha}(1-\zeta/3)^{\alpha}$. Therefore, it is sufficient to evaluate the inner sum. First, notice that

$$\binom{\alpha}{k} = \frac{(-1)^k (-\alpha)_k}{k!}.$$

Therefore, by the definition of the binomial coefficient,

$$\frac{\binom{\alpha}{j-k}}{\binom{\alpha}{j}} = \frac{(-1)^k (-j)_k}{(\alpha - j + 1)_k}.$$

The above is precisely the definition of the hypergeometric function. The proof is complete since, due to the factor $(-j)_k$, the series terminates after k = j.

It is not straightforward to prove that the BDF2 weights satisfy condition (A); however, we can easily see that

$$w_1 = -\frac{4\alpha}{3}\left(\frac{3}{2h}\right)^{\alpha}, \quad w_2 = \left(\frac{3}{2h}\right)^{\alpha} \frac{\alpha(8\alpha - 5)}{9}, \quad w_3 = \left(\frac{3}{2h}\right)^{\alpha} \frac{\alpha(\alpha - 1)(28 - 32\alpha)}{81},$$

and hence, the first weight is negative, the second one only for 0 < α < 5/8 = 0.625, while the third one for 0 < α < 7/8 = 0.875. We have numerically checked the sign of a number of subsequent weights, and this confirmed that all of them satisfy our assumption (A). That is to say, all BDF2 weights are admissible for 0 < α < 5/8. This can also be visualized numerically. In Fig. 1 we have plotted the respective weights $w_j$ for all 0 < α < 1. Computations confirm that $|w_j|$ are also decreasing in j, as can be seen in Fig. 2. It can be seen that the numerical calculations confirm our hypothesis.

Remark 1. We note that obtaining exact explicit formulas for the weights of general CQ quadratures can be impossible; one can then only prove that they have a sign satisfying (A). However, according to the general theory of CQ we have the following (see [32], Theorem 2.1):

$$w_j = h^{-\alpha}\left(\frac{j^{-\alpha-1}}{\Gamma(-\alpha)} + O\!\left(j^{-\alpha-1-p}\right)\right)$$

for p ≥ 1 depending on the order of the CQ formula. From the above we can see that for j sufficiently large and h small, the sign of the weights is the same as the sign of −α/Γ(1 − α), that is, negative. Note that this conclusion is not necessarily valid for small j, which is precisely the case for BDF2 weights.

Figure 1: Weights $w_j$ (18) for different 0 < α < 1 with h = 1. Weight $w_2$ is negative for 0 < α < 5/8, weight $w_3$ is negative for 0 < α < 7/8, while all other weights are always negative.
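The sign pattern discussed above can be probed numerically. The sketch below is again only an illustration: the series-power recursion is the standard Miller formula for Taylor coefficients of a power of a power series (it is not taken from the paper), applied here to the BDF2 symbol; the printed weights reproduce the thresholds α = 5/8 for $w_2$ and α = 7/8 for $w_3$.

```python
import numpy as np

def cq_weights(symbol, alpha, h, N):
    """Taylor coefficients of (delta(zeta)/h)**alpha via the Miller recursion:
    n*a[0]*b[n] = sum_{k=1}^{n} ((alpha + 1)*k - n) * a[k] * b[n-k]."""
    a = np.zeros(N + 1)
    a[: len(symbol)] = symbol
    b = np.zeros(N + 1)
    b[0] = a[0] ** alpha
    for n in range(1, N + 1):
        s = sum(((alpha + 1) * k - n) * a[k] * b[n - k] for k in range(1, n + 1))
        b[n] = s / (n * a[0])
    return b * h ** (-alpha)

# BDF2 symbol: delta(zeta) = 3/2 - 2*zeta + zeta**2/2
for alpha in (0.3, 0.625, 0.9):
    w = cq_weights([1.5, -2.0, 0.5], alpha, h=1.0, N=8)
    print(alpha, np.round(w[:5], 4))   # w_2 changes sign at alpha = 5/8, w_3 at 7/8
```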
Going back to the general case, by the very construction of the approximation to $\partial_t^\alpha$ we have the consistency condition

$$\sum_{j=0}^{\infty} w_j = 0,$$

which follows by putting ζ = 1 into (8), or by requiring that any CQ scheme for the Caputo derivative is exact for constant functions (and hence identically equals zero). From this it follows that for any n ≥ 1 we have

$$\sum_{j=0}^{n} w_j > 0,$$

by the assumption (A) that the weights $w_j$ are negative for $j \geq 1$. We also have the truncation error

$$\zeta_n(h) := \partial_h^\alpha y(t_n) - \partial_t^\alpha y(t_n).$$

The term $\zeta_n(h)$ can be estimated with the help of [32], Theorem 2.2. Under the assumption that the solution has the typical regularity near the origin, that is $\|u(t)\| \leq C t^{\alpha}$, we have

$$\|\zeta_n(h)\| \leq C h^p\, t_n^{-p}.$$

This clearly states how the error deteriorates near the origin due to the lack of smoothness of the solution.

We will now prove two auxiliary results that are discrete generalizations of known continuous inequalities for the Caputo derivative. They will be used in the following section, but they are also interesting on their own. First, it is clear that the first ordinary derivative satisfies $\frac{1}{2}\frac{d}{dt} y(t)^2 = y(t)\frac{dy}{dt}$. For the continuous Caputo derivative this becomes an inequality,

$$\frac{1}{2}\, D^\alpha y(t)^2 \leq y(t)\, D^\alpha y(t),$$

which is very useful when applied in the energy method (see [3,46,20] for recent proofs in the case where $y = y(t) \in L^2$ is a time-dependent mapping into the Hilbert space). As the following proposition states, this inequality is still valid for the convolution quadrature constructed above (the L1 scheme version has recently been proved in [24]).

Proposition 2. Let the weights $w_j$ for the convolution quadrature (8) satisfy (A). Then, for any sequence of functions $(y_n)_n \subseteq L^2$ with y(0) = 0 we have

$$\frac{1}{2}\,\partial_h^\alpha \|y_n\|^2 \leq \left\langle y_n,\; \partial_h^\alpha y_n \right\rangle.$$

Proof. By using the definition of the weights (8) we can write out the quadrature explicitly. Since by assumption (A) the negativity of $w_j$ for $j \geq 1$ holds, we can use the Cauchy-Schwarz inequality to obtain the first of the inequalities that we had to prove. Furthermore, by the Cauchy inequality $ab \leq (a^2 + b^2)/2$ we can estimate each product with $\|y_n\|$ by the sum of squares or, by gathering terms into the first sum with the use of the fact that $w_0 > 0$, arrive at a form in which, according to the consistency (21), the first term is positive, leading us to the assertion.
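Proposition 2 is easy to check numerically in the scalar case, where the L² norm reduces to the absolute value. The following sketch (illustrative only) draws a random sequence with $y_0 = 0$, applies the Euler-based CQ, and verifies $\frac{1}{2}\partial_h^\alpha (y^2)_n \leq y_n\,(\partial_h^\alpha y)_n$ at every step.

```python
import numpy as np

alpha, h, N = 0.6, 0.1, 50
rng = np.random.default_rng(0)

# Backward Euler CQ weights; w_0 > 0 and w_j < 0 for j >= 1, i.e. condition (A)
w = np.empty(N + 1)
w[0] = h ** (-alpha)
for j in range(1, N + 1):
    w[j] = w[j - 1] * (j - 1 - alpha) / j

def frac_diff(z):
    """Discrete Caputo derivative of the sample sequence z with z[0] = 0."""
    return np.array([np.dot(w[: n + 1], z[n::-1]) for n in range(N + 1)])

y = np.concatenate(([0.0], rng.normal(size=N)))   # arbitrary signed data, y_0 = 0
lhs = 0.5 * frac_diff(y ** 2)
rhs = y * frac_diff(y)
print(bool(np.all(lhs <= rhs + 1e-12)))           # Proposition 2 predicts True
```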
The next result concerns the discrete Grönwall inequality for the Caputo derivative. Its version for the L1 discretization has been proved, for example, in [25,26]. The proof is based on two steps: first, we invert the derivative and then use the following classical lemma, which is a generalization of the discrete integral Grönwall inequality.

Proposition 3 (Discrete fractional Grönwall inequality, integral version; [11], Theorem 2.1). Let $(y_n)_n$ be a positive sequence satisfying a discrete fractional integral inequality with some positive constants $M_{1,2,3}$, which may depend on h. Then, for 0 < α < 1, the sequence is bounded in terms of the Mittag-Leffler function, which is defined by

$$E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}.$$

Now, we can proceed to our result.

Lemma 1 (Discrete Grönwall inequality for the convolution quadrature). Let $(y_j)_{j=0}^{\infty}$ and $(F_j)_{j=0}^{\infty}$ be positive sequences of numbers such that there exists F(h) > 0 bounding the discrete fractional integral of F, that is,

$$\sum_{j=1}^{n} b_{n-j}\, F_j \leq F(h).$$

Assume that y(0) = 0 and the discrete Caputo derivative $\partial_h^\alpha$ is constructed as the convolution quadrature (8) with weights that satisfy (A). Then, the inequality

$$\partial_h^\alpha y_n \leq \lambda_0 y_n + \lambda_1 y_{n-1} + F_n$$

implies that there exist a constant $C(\alpha, \lambda_{0,1}, T) > 0$ and a time step $h_0 > 0$ such that

$$y_n \leq C(\alpha, \lambda_{0,1}, T)\, F(h)$$

for all $0 < h \leq h_0 < 1$. For $\lambda_0 = 0$ the above is valid without any restriction on the time step h.

Proof. The proof proceeds by mathematical induction. First, we will show that

$$y_n \leq \sum_{j=1}^{n} b_{n-j}\left(\lambda_0 y_j + \lambda_1 y_{j-1} + F_j\right),$$

where $b_j$ is defined as the CQ weight for the fractional integral (3) with the same symbol as in (8), that is,

$$\sum_{j=0}^{\infty} b_j \zeta^j = \left(\frac{\delta(\zeta)}{h}\right)^{-\alpha}.$$

For n = 1, from (8) and (A) we have $w_0 y_1 = \partial_h^\alpha y_1 \leq \lambda_0 y_1 + \lambda_1 y_0 + F_1$, but $b_0 = w_0^{-1}$. This proves the initial step. Next, for convenience, set $a_j := \lambda_0 y_j + \lambda_1 y_{j-1} + F_j$. We then assume that (35) holds for j = 1, ..., n − 1. Then, by the definition of the quadrature (8) and our assumption (33), we can bound $w_0 y_n$ by the history terms. Since $w_j < 0$ for $j \geq 1$, the inductive assumption then applies and, by changing the order of summation, we can focus on the resulting double sum. If we change the variable to i = n − j, we can write it as a discrete convolution of the two weight sequences. The series above can be computed using generating functions. To see this, consider the following Cauchy product of power series:

$$\left(\sum_{j=0}^{\infty} w_j \zeta^j\right)\left(\sum_{j=0}^{\infty} b_j \zeta^j\right) = \left(\frac{\delta(\zeta)}{h}\right)^{\alpha}\left(\frac{\delta(\zeta)}{h}\right)^{-\alpha} = 1.$$

Therefore, when m > 0, the coefficients of the rightmost power series vanish, leading to

$$\sum_{j=0}^{m} w_{m-j}\, b_j = 0, \quad m > 0.$$

An observation that $w_0 = b_0^{-1}$ finishes the inductive step, and we have proved (35). To proceed further we will use the fact, known from convolution quadrature theory, that the weights $b_j$ approximate the continuous kernel, that is (see for example [32], formula (2.6)),

$$b_j \leq C(\alpha)\, h^{\alpha}\, (1 + j)^{\alpha - 1},$$

for some constant C = C(α). Therefore, separating the last term in the sum (35) yields an estimate in which all terms involving $y_j$ for $j \leq n - 1$ can be bounded by a common sum. To see this, observe that in the sum of $y_{j-1}$ we can change the summation variable $j - 1 \to j$ and use an elementary inequality for the weights. Next, fix any time step $h_0 > 0$ for which $1 - C(\alpha)\lambda_0 h_0^{\alpha} > 0$. Then, for $0 < h \leq h_0$ we can factor out $y_n$, where we have used again (47), with the index shifted by one, in the sum with $F_j$. This can be further estimated with the use of the assumption of a bounded fractional integral of $F_n$; that is, using (32) we obtain the claimed bound (after changing the summation variable $j \to j + 1$). Finally, we have $t_n \leq T$ and, after a redefinition of the constant $C(\alpha, \lambda_{0,1}, T)$, we arrive at the conclusion.
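The convolution-inverse identity used in the induction, namely that the derivative weights $w_j$ and the integral weights $b_j$ multiply to the constant series 1, can also be confirmed numerically. The sketch below (an illustration under the backward Euler symbol) builds both series with the Miller recursion from the earlier snippet and convolves them.

```python
import numpy as np

def series_power(a, alpha, N):
    """Taylor coefficients of A(zeta)**alpha for any real exponent (Miller recursion)."""
    a = np.asarray(a, dtype=float)
    b = np.zeros(N + 1)
    b[0] = a[0] ** alpha
    for n in range(1, N + 1):
        s = sum(((alpha + 1) * k - n) * (a[k] if k < len(a) else 0.0) * b[n - k]
                for k in range(1, n + 1))
        b[n] = s / (n * a[0])
    return b

alpha, h, N = 0.7, 0.05, 40
delta = [1.0, -1.0]                                    # backward Euler symbol: 1 - zeta
w = series_power(delta, alpha, N) * h ** (-alpha)      # derivative weights of (8)
b = series_power(delta, -alpha, N) * h ** alpha        # fractional integral weights b_j

conv = [np.dot(w[: m + 1], b[m::-1]) for m in range(5)]
print(np.round(conv, 12))   # expect [1, 0, 0, 0, 0]
```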
3 Fully discrete scheme for the quasilinear subdiffusion equation

We can now proceed to the derivation of the fully discrete scheme to solve (1). Take any test function $\chi \in H^1(\Omega)$; then, by integrating by parts, we can obtain the weak formulation with $u(0) = \varphi$. Here, we defined the form

$$a(u; v, \chi) = \int_{\Omega} D(x, t, u)\, \nabla v \cdot \nabla \chi\, dx.$$

To discretize the above in time, we use the convolution quadrature (8) for the Caputo derivative and the finite element method (FEM) for the spatial variables. Let $\mathcal{T}_k$ be the family of shape-regular quasi-uniform triangulations of Ω with maximal diameter $k = \max_{K \in \mathcal{T}_k} \operatorname{diam} K$. By $V_k \subset H_0^1(\Omega)$ denote the standard space of continuous piecewise linear functions over $\mathcal{T}_k$ that vanish on the boundary. Therefore, denoting by $U^n \in V_k$ the numerical approximation of $u(t_n)$, we devise the semi-implicit scheme

$$\left(\partial_h^\alpha U^n, \chi\right) + a\!\left(U^{n-1}; U^n, \chi\right) = \left(f(t_{n-1}, U^{n-1}), \chi\right) \quad \text{for all } \chi \in V_k.$$

In what follows, we will utilize some notions of projecting a function onto a finite-dimensional space. For example, we can use the orthogonal projection $P_k$ defined as $(P_k u, \chi) = (u, \chi)$ for all $\chi \in V_k$, or the Ritz elliptic projection $R_k$, for fixed $0 \leq t \leq T$, defined by $a(u; R_k u - u, \chi) = 0$ for all $\chi \in V_k$. The latter is particularly useful in the convergence proof. Observe that to find $R_k u(t)$ it is necessary to solve a linear elliptic problem. From the general theory of PDEs we know the error estimates for these projections when $u \in C([0, T]; H^m(\Omega))$ [44,33]:

$$\|u - R_k u\| + k\, \|\nabla(u - R_k u)\| \leq C k^2\, \|u\|_{H^2(\Omega)}.$$

Finally, note also that in all nonlinearities of the equation, that is, D and f, the time step has been delayed by one in order to obtain a fully linear scheme for the solution of the nonlinear equation.

Having the results from the previous section, it is straightforward to prove that the scheme is stable.

Proposition 4 (Stability). Let $U^n$ be the solution of (55). Suppose that there exists a function g = g(t) such that $\|f(t, u)\| \leq g(t)$ with $I^\alpha g(t) \leq F$. Then, $\|U^n\|$ is bounded uniformly in n by a constant depending only on F.

Proof. Let $\chi = U^n \in V_k$ in (55); then from Proposition 2 and the Cauchy inequality we can estimate $\partial_h^\alpha \|U^n\|^2$. Since, by definition (53), the a-form is positive-definite, we further bound it by the source term. Now, notice that there exists a constant C such that the discrete fractional integral of g is bounded by CF, since the rectangle discretization of the fractional integral converges to the continuous one. The application of Lemma 1 ends the proof.

Now, we can proceed to the convergence proof. As mentioned in the Introduction, the regularity assumption on the solution is typical for the subdiffusion equation. Due to the lack of relevant results in the literature for quasilinear equations, we have to impose this regularity requirement as an assumption. Investigating this issue further is the subject of our future work.

Theorem 1 (Convergence). Let $U^n$ be the solution of the scheme (55), as a numerical approximation at $t = t_n$ to the solution u of (1). Assume that for all t, s ∈ [0, T] the solution satisfies the assumption of time regularity (7) and is $H_0^1(\Omega) \cap H^2(\Omega)$ in space. Then, for sufficiently small h > 0 we have

$$\|u(t_n) - U^n\| \leq C\left(k^2 + A(h) + t_n^{\alpha-1} B(h)\right),$$

where A(h) and B(h) satisfy

$$\sum_{j=0}^{n-1} b_{n-1-j}\, \zeta_{j+1}(h) \leq A(h) + t_n^{\alpha-1} B(h),$$

and $\zeta_{j+1}(h)$ is the truncation error (22) of the convolution quadrature for the Caputo derivative defined in (8) with assumptions (A).

Proof. We will start by writing the error equation for the problem. Set, in a standard way,

$$e^n := u(t_n) - U^n = \left(u(t_n) - R_k u(t_n)\right) + \left(R_k u(t_n) - U^n\right) =: \rho^n + \theta^n.$$

The decomposition of $e^n$ into $\rho^n$ and $\theta^n$ is very useful, since the estimate on $\rho^n$ follows from the general theory (58), while $\theta^n$ belongs to the finite-dimensional space $V_k$. Therefore, it is sufficient to find a bound on the latter error. Hence, observe that for any $\chi \in V_k$, from the definition of the error decomposition into $\rho^n$ and $\theta^n$, we obtain an equation for $\theta^n$. Now, using the numerical scheme (55) we can identify the source term. Next, the definition of $\rho^n$ and the PDE itself (52) simplify this further. Now, we can use the definition of the Ritz projection (57) to write $R_k u$ instead of u in the a-form. We see that the right-hand side in the equation for the error $\theta^n$ above decomposes into four terms: 1. the Ritz projection error, 2. the truncation error of the Caputo derivative, 3. the nonlinearity of the diffusivity, 4. the nonlinearity of the source.
Therefore, by setting $\chi = \theta^n \in V_k$, we can obtain an energy estimate, where we have used the assumption of Lipschitz continuity of f as in (4). Now, from the definition of the a-form (53) we can further estimate the nonlinear terms, where we have used (59), the Lipschitz continuity of D, and the Schwarz inequality. Now, since by (4) we have $D(x, t, u) \geq D_-$, the elliptic term can be bounded from below, where we have explicitly written down the truncation error (22) for the quadrature. The next step is to bound the difference of solutions at a retarded time; due to the assumed temporal regularity (7) we have the required modulus of continuity. By using the Poincaré-Friedrichs inequality we can write $\|\theta^n\| \leq C\|\nabla \theta^n\|$ and factor out the norm of the gradient. Furthermore, we can use the ϵ-Cauchy inequality $ab \leq \frac{\epsilon}{2} a^2 + \frac{1}{2\epsilon} b^2$, with an appropriate choice of ϵ, to cancel the gradient term, or use Proposition 2 and a simple inequality.

The above form is almost ready for the discrete fractional Grönwall inequality (Lemma 1). Before applying it, we have to estimate the ρ terms in the first inner parentheses. To this end, notice that by (58) we immediately have the required projection bounds. Now, by invoking Lemma 1 and using some elementary estimates on $(a + b)^2$, we obtain the assertion, where A(h) and B(h) are defined in (66). Finally, by definition we have $\theta^0 = R_k u(0) - U^0 = 0$, since our initial condition vanishes, and this brings us to the conclusion; the proof is complete.

For the BDF(p) CQ of order p ≥ 1 we can infer the exact order of convergence of our numerical scheme.

Corollary 1. Let the assumptions of Theorem 1 be satisfied. Then, when the Caputo derivative is discretized with the BDF(p) CQ, we have, for n large enough and sufficiently small h, A(h) = 0 with $B(h) = C h \ln h^{-1}$ for p = 1 and $B(h) = C h$ for p ≥ 2.

Proof. To prove the assertion, we have to find the form of A(h) and B(h) defined in Theorem 1. They come from the bound of the discrete fractional integral of $\zeta_n(h)$, that is, the truncation error of the Caputo derivative as in (22). First, assume that p = 1, that is, we consider the Euler scheme. From (23) we obtain a sum in which we have changed the summation order via $j \to j - 1$. As can be seen, the resulting expression is the Riemann sum of a convergent integral; hence, with a suitable choice of the constant, it is bounded by the corresponding integral. This integral can be evaluated exactly with the help of the hypergeometric function, but for our needs we only have to find its leading-order behavior for large n. To this end, for arbitrary $\frac{1}{n} < \epsilon < 1$, we split the integral into two parts and, hence, for sufficiently small h > 0, arrive at the bound $C t_n^{\alpha-1} h \ln h^{-1}$. This gives us A(h) = 0, $B(h) = C h \ln h^{-1}$, and proves the case p = 1. Now, assume that p ≥ 2. By a similar reasoning we can identify the Riemann sum of the corresponding integral and obtain (90), since the appearing integral has an exact primitive $-\frac{1}{\alpha(t_n + h)}\left(\frac{t_n - s}{h + s}\right)^{\alpha}$. This time we have A(h) = 0 and B(h) = Ch. Combining the case p ≥ 2 with (65) finishes the proof.

As can be seen, the overall error for our scheme based on BDF1-CQ is equal to $t_n^{\alpha-1} h \ln h^{-1} + k^2$, which is always second order in space. For a fixed time $t_n > 0$, that is, locally, the order in time is 1 (apart from the logarithmic term). On the other hand, the global (maximal) error in time is of order α. This behavior is precisely what can be expected from the same scheme applied to the linear subdiffusion equation, due to the lower regularity of the solution near the origin. However, note that our assumption (7) does not require the existence of higher derivatives, as opposed to the various requirements found in the literature.
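The role of the truncation error in Corollary 1 can be observed directly on a function with the typical regularity. The sketch below is illustrative: the target $y(t) = t^{\alpha}$ has the constant Caputo derivative $\Gamma(1 + \alpha)$, and the code prints the error of the Euler CQ both at the final time and at the first step. The former roughly halves with h (first order at fixed $t_n > 0$), while the latter stays O(1), matching the deterioration near the origin.

```python
import numpy as np
from math import gamma

alpha, T = 0.5, 1.0
exact = gamma(1 + alpha)                       # Caputo derivative of t**alpha

for N in (128, 256, 512):
    h = T / N
    w = np.empty(N + 1)
    w[0] = h ** (-alpha)
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    y = np.linspace(0.0, T, N + 1) ** alpha
    err_T = abs(np.dot(w, y[::-1]) - exact)    # truncation error at t_n = T
    err_1 = abs(w[0] * y[1] - exact)           # truncation error at t_1 = h
    print(N, err_T, err_1)
```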
Fast and oblivious implementation

To implement our numerical scheme (55), we fix a basis $\{\Phi_i\}_{i=1}^{M}$ of the space $V_k$ and expand the solution $U^n$, that is,

$$U^n = \sum_{i=1}^{M} y_i^n\, \Phi_i.$$

Taking $\chi = \Phi_j$, denoting $y^n = (y_1^n, \ldots, y_M^n)$, and plugging the above into (55), we obtain the system

$$B\, \partial_h^\alpha y^n + A(y^{n-1})\, y^n = F^{n-1}, \quad (93)$$

where the mass matrix $B = \{B_{ij}\}_{i,j=1}^{M}$, the stiffness matrix $A = \{A_{ij}\}_{i,j=1}^{M}$, and the load vector are given by $B_{ij} = (\Phi_i, \Phi_j)$, $A(y^{n-1})_{ij} = a(U^{n-1}; \Phi_j, \Phi_i)$, and $F_i^{n-1} = (f(t_{n-1}, U^{n-1}), \Phi_i)$. Finally, we discretize the Caputo derivative according to the CQ scheme (8) to arrive at a linear system of algebraic equations,

$$\left(w_0 B + A(y^{n-1})\right) y^n = F^{n-1} - B \sum_{j=1}^{n} w_j\, y^{n-j}, \quad (94)$$

which clearly indicates the nonlocality in time: the right-hand side depends on the historical values of the solution $y^i$ for i = 0, 1, ..., n − 1. As a simple example of the basis, in one spatial dimension we can have Ω = (0, 1), for which we can take the usual tent functions

$$\Phi_i(x) = \max\left(0,\; 1 - \frac{|x - x_i|}{k}\right), \quad x_i = ik.$$

However, we do not directly implement (94). Instead, we apply the fast algorithm developed in [5] for the evaluation of the fractional integral, in order to deal with memory more efficiently and reduce the computational cost. To do this, we use the preservation of the composition rule by all CQ schemes, which lets us rewrite the discrete derivative through the discrete fractional integral of the history; in the case of the Euler-based CQ, this yields the formulation to which the algorithm of [5] applies directly.

Table 1: Estimated order of convergence for the second example (99) for different α. As the basis of our calculation we have taken $h = 2^{-8}$ for the formula (100) and T = 1.

Parameters k, $x_0$, and δ are chosen accordingly for a particular simulation. Note that in this case we do not possess an exact analytic solution.

In the first example (98) we can compare the numerical solution with the exact one and compute the error at the final time of the simulation t = T; that is, for each α we find $\|u(T) - U^n\|$, where nh = T. Note, however, that this error is limited by the spatial discretization error, which can be eliminated by choosing a sufficiently small grid spacing. The results of our calculations are presented in Fig. 3. As we can see, the numerical computations verify that the scheme is convergent even for the case that is nonsmooth in time. The observed order of convergence is consistent with 1 for most values of α, which is the order of the Euler discretization. This is consistent with the results of Corollary 1, apart from the logarithmic part, which is difficult to resolve numerically and can cause certain discrepancies for small h. However, we can conclude that the numerical simulations confirm the theoretical results.

For the second example (99) the error cannot be computed directly, and we will estimate the order of convergence by the Aitken extrapolation [27]. Assume that the error can be estimated with $\max_n \|U_h^n - U_{h/2}^n\| \approx C h^p$, where as a reference solution we take the one computed on a twice finer grid. Then, by halving the grid once more and taking the logarithm, we can write

$$p \approx \log_2 \frac{\|U_h^n - U_{h/2}^n\|}{\|U_{h/2}^n - U_{h/4}^n\|}, \quad \text{with fixed } nh = t_n = T. \quad (100)$$

That is to say, the order is estimated based on the pointwise norm in time and the $L^2$ norm in space, in line with our results from the previous sections. The results of our computations are gathered in Tab. 1. As we can see, the estimated order is close to 1 for all values of α, again consistent with the Euler discretization measured pointwise in time.
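As a concrete illustration of (93)-(94) in one dimension, here is a self-contained sketch of the whole solver: tent-function mass and stiffness matrices on Ω = (0, 1) and the semi-implicit Euler-CQ stepping with lagged diffusivity. The concrete choices D(u) = 1 + u², f ≡ 1, and zero initial data are hypothetical placeholders (they are not the paper's examples (98)-(99)), and the history sum is the straightforward O(N²) version rather than the fast and oblivious algorithm of [5].

```python
import numpy as np

def cq_weights_euler(alpha, h, N):
    w = np.empty(N + 1)
    w[0] = h ** (-alpha)
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def solve_quasilinear_subdiffusion(alpha=0.5, T=1.0, N=256, M=63):
    h, k = T / N, 1.0 / (M + 1)
    w = cq_weights_euler(alpha, h, N)

    # Mass matrix for tent functions: (k/6) * tridiag(1, 4, 1)
    B = (k / 6.0) * (4.0 * np.eye(M) + np.eye(M, k=1) + np.eye(M, k=-1))

    def stiffness(u):
        """A(u)_ij = int D(u) Phi_i' Phi_j' dx with elementwise midpoint diffusivity."""
        ue = np.concatenate(([0.0], u, [0.0]))      # homogeneous Dirichlet boundary
        Dm = 1.0 + (0.5 * (ue[:-1] + ue[1:])) ** 2  # D(u) = 1 + u^2 at element midpoints
        A = np.zeros((M, M))
        for e in range(M + 1):                      # element e joins nodes e-1 and e
            i, j = e - 1, e
            if i >= 0:
                A[i, i] += Dm[e] / k
            if j < M:
                A[j, j] += Dm[e] / k
            if i >= 0 and j < M:
                A[i, j] -= Dm[e] / k
                A[j, i] -= Dm[e] / k
        return A

    ys = [np.zeros(M)]                              # vanishing initial condition
    load = k * np.ones(M)                           # lumped load vector for f = 1
    for n in range(1, N + 1):
        hist = np.zeros(M)
        for j in range(1, n + 1):                   # O(N) memory term per step
            hist += w[j] * ys[n - j]
        lhs = w[0] * B + stiffness(ys[-1])          # diffusivity lagged by one step
        ys.append(np.linalg.solve(lhs, load - B @ hist))
    return np.array(ys)

U = solve_quasilinear_subdiffusion()
print(float(U[-1].max()))                           # peak of the profile at t = T
```

Rerunning with N, 2N, and 4N and comparing the solutions as in (100) then gives an Aitken-type estimate of the temporal order.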
The final example concerns the temporal complexity of our algorithm. We have compared the computation times of three ways of implementing the time integration of our PDE: with and without the fast and oblivious algorithm, and with the L1 scheme. The problem tested is our second example (99). In our calculations we have taken $h = 2^{-9}$, but we also tested other values. Also, to obtain Fig. 4 independent of various computer background processes, we have conducted the simulations 100 times and taken the mean values. The results are uniform with respect to α (note the vertical scale) and indicate that the fast and oblivious implementation is on average twice as fast as the standard implementation.

Conclusion

The Convolution Quadrature can be applied to the quasilinear subdiffusion equation, yielding a convergent scheme for quadratures satisfying (A). When supplied with a fast and oblivious implementation, the computation time can be reduced by at least a factor of two, which is much desired in the time-fractional setting. The numerical computations opened up the problem of carefully investigating the behavior of the error in the quasilinear case as part of future work. In particular, it would be interesting to find optimal error estimates for nonsmooth data, for which quasilinearity can produce significant difficulties. The semilinear problem was investigated with the L1 scheme discretization in [38], and this work needs to be carried over to the CQ methods.

Figure 4: Mean ratio of calculation times: without and with the implementation of the fast and oblivious algorithm. The comparison with the L1 scheme is also presented.
2023-11-02T06:42:20.607Z
2023-10-31T00:00:00.000
{ "year": 2023, "sha1": "543efc65845b33389e2027b3ba44d89ceb31a305", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "543efc65845b33389e2027b3ba44d89ceb31a305", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
122773446
pes2o/s2orc
v3-fos-license
Adaptive Exponential Stabilization for a Class of Stochastic Nonholonomic Systems

Introduction

The nonholonomic systems cannot be stabilized by stationary continuous state feedback, although they are controllable, due to Brockett's theorem [1]. So the well-developed smooth nonlinear control theory and methods cannot be directly used for these systems. Many researchers have studied the control and stabilization of nonholonomic systems in the nonlinear control field and obtained some success [2][3][4][5][6]. It should be mentioned that many works consider the asymptotic stabilization of nonholonomic systems; exponential convergence is also an important theme, which is demanded in many practical applications. However, the exponential regulation problem, particularly for systems with parameterization, has received less attention. Recently, [3] firstly introduced a class of nonholonomic systems with strong nonlinear uncertainties and obtained global exponential regulation. References [4,5] studied a class of nonholonomic systems with output feedback control. Reference [6] combined the ideas of input-state-scaling and backstepping technology, achieving asymptotic stabilization for nonholonomic systems with nonlinear parameterization.

It is well known that stochastic nonlinear control obtained a breakthrough when the backstepping designs were first introduced [7]. Based on quartic Lyapunov functions, the asymptotic stabilization control in the large of the open-loop system was discussed in [8]. Further research was developed in the recent works [9][10][11][12][13][14][15][16]. References [17][18][19] studied a class of nonholonomic systems with stochastic unknown covariance disturbance. Since stochastic signals are very prevalent in practical engineering, the study of nonholonomic systems with stochastic disturbances is very significant. A natural problem therefore arises: how to design an adaptive exponential stabilization scheme for a class of nonholonomic systems with stochastic drift and diffusion terms. Inspired by these papers, we study the exponential regulation problem with nonlinear parameterization for a class of stochastic nonholonomic systems. We use input-state-scaling, the backstepping technique, and a switching scheme to design a dynamic state-feedback controller such that the closed-loop system is globally exponentially regulated to zero in probability.

This paper is organized as follows. In Section 2, we give the mathematical preliminaries. In Section 3, we construct the new controller and offer the main result. In the last section, we present the conclusions.

Problem Statement and Preliminaries

In this paper, we consider a class of stochastic nonholonomic systems of the form (1), where
$x_0 \in \mathbb{R}$ and $x = [x_1, \ldots, x_n]^T \in \mathbb{R}^n$ are the system states, and $u_0 \in \mathbb{R}$ and $u_1 \in \mathbb{R}$ are the control inputs, respectively. Consider the following stochastic nonlinear system:

$$dx = f(t, x)\,dt + g(t, x)\,d\omega,$$

where $x \in \mathbb{R}^n$ is the state of system (2), the Borel measurable functions $f: \mathbb{R}^{n+1} \to \mathbb{R}^n$ and $g: \mathbb{R}^{n+1} \to \mathbb{R}^{n \times r}$ are assumed to be $C^1$ in their arguments, and ω is an r-dimensional standard Wiener process defined on the complete probability space $(\Omega, \mathcal{F}, P)$.

Controller Design and Analysis

The purpose of this paper is to construct a smooth state-feedback control law such that the solution process of system (1) is bounded in probability. For clarity, the case $x_0(t_0) \neq 0$ is considered first. Then, the case where the initial value $x_0(t_0) = 0$ is dealt with later. The triangular structure of system (1) suggests that we should design the control inputs $u_0$ and $u_1$ in two separate stages.

To design the controller for system (1), the following assumptions are needed.

Theorem 7. The $x_0$-subsystem, under the control law (6) with an appropriate choice of the design parameters, is globally exponentially stable.

It is concluded that $x_0$ does not cross zero for all $t \in (t_0, \infty)$ provided that $x_0(t_0) \neq 0$.

Remark 8. If $x_0(t_0) \neq 0$, $u_0$ exists and does not cross zero for all $t \in (t_0, \infty)$, independent of the x-subsystem, from (6).

3.2. Backstepping Design for $u_1$. From the above analysis, the $x_0$-state in (1) can be globally exponentially regulated to zero as $t \to \infty$, obviously. In this subsection, we consider the control law $u_1$ for the x-subsystem by using the backstepping technique. To design a state-feedback controller, one first introduces a discontinuous input-state-scaling transformation. Under the new coordinates, the x-subsystem is transformed into a form suitable for backstepping. In order to obtain estimates for the nonlinear drift and diffusion functions of the transformed system, the following lemma can be derived from Assumption 6.

Lemma 9. For i = 1, 2, . . ., there exist nonnegative smooth functions such that the bounds (11) and (12) hold.

Proof. We only prove (11); the proof of (12) is similar to that of (11), in view of (6), (8), and (10).

To design a state-feedback controller, one introduces a coordinate transformation in which $\alpha_2, \ldots, \alpha_n$ are smooth virtual control laws to be designed later, $\alpha_1 = 0$, and $\hat{\theta}$ denotes the parameter estimate. Then, using (9), (10), (14), and the Itô differentiation rule, one obtains the error dynamics. The proof of Lemma 10 is similar to that of Lemma 9, so it is omitted. We now give the design process of the controller.

Switching Control and Main Result. In the preceding subsection, we have given the controller design for $x_0(t_0) \neq 0$. Now, we discuss how to choose the control laws $u_0$ and $u_1$ when $x_0(t_0) = 0$. We choose $u_0$ as $u_0 = -k_0 x_0 + u_0^*$, with $u_0^* > 0$, and choose the Lyapunov function $V_0 = \frac{1}{2} x_0^2$; its time derivative leads to the bounds of $x_0$. During the time period $[0, t_s)$, using $u_0 = -k_0 x_0 + u_0^*$, a new control law $u_1$ can be obtained by applying the control procedure described above to the original x-subsystem in (1). Then, we can conclude that the x-state of (1) cannot blow up during the time period $[0, t_s)$. Since $x_0(t_s) \neq 0$, we can switch the control inputs $u_0$ and $u_1$ to (6) and (31), respectively. Now, we state the main result as follows.

Theorem 11. Under Assumption 5, if the proposed adaptive controller (31), together with the above switching control strategy, is used in (1), then for any initial condition, the closed-loop system has an almost surely unique solution on $[0, \infty)$, the solution process is bounded in probability, and $P\{\lim_{t \to \infty} \hat{\theta}(t) \text{ exists and is finite}\} = 1$.

Proof. According to the above analysis, it suffices to prove the claim in the case $x_0(0) \neq 0$.
Since we have already proven in Section 3.1 that $x_0$ can be globally exponentially convergent to zero in probability, we only need to prove that $x(t)$ converges to zero in probability as well. In this case, we choose a suitable Lyapunov function; from (32) and Lemma 3, we know that the closed-loop system has an almost surely unique solution on $[0, \infty)$, and the solution process is bounded in probability.

Conclusions

This paper investigates the global exponential stabilization problem for a class of stochastic nonholonomic systems in chained form. To deal with the nonlinear parametrization problem, a parameter separation technique is introduced. With the help of the backstepping technique, a smooth adaptive controller is constructed which ensures that the closed-loop system is globally asymptotically stable in probability. A further direction is the design of output-feedback tracking control for higher-order stochastic nonholonomic systems.
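To visualise the kind of exponential regulation established for the $x_0$-subsystem, a small Euler-Maruyama sketch can be used. The scalar drift and the multiplicative noise intensity below are hypothetical stand-ins (the paper's actual drift and diffusion terms are not reproduced); the point is only that, under $u_0 = -k_0 x_0$, the sample paths decay exponentially.

```python
import numpy as np

rng = np.random.default_rng(1)
k0, sigma, h, N, paths = 2.0, 0.3, 1e-3, 5000, 20

x0 = np.full(paths, 1.0)                 # x_0(t_0) = 1, nonzero as required
sup = []
for n in range(N):
    dw = rng.normal(scale=np.sqrt(h), size=paths)
    x0 = x0 + (-k0 * x0) * h + sigma * x0 * dw   # Euler-Maruyama step
    sup.append(np.max(np.abs(x0)))
print(sup[0], sup[-1])   # the worst path still decays roughly like exp(-k0 * t)
```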
2019-01-02T00:51:31.533Z
2013-11-26T00:00:00.000
{ "year": 2013, "sha1": "be38c8e96b418864f68fb2f3c5b448e3abcc6c38", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/aaa/2013/658050.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "be38c8e96b418864f68fb2f3c5b448e3abcc6c38", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
260134362
pes2o/s2orc
v3-fos-license
"That's all it takes to be trans": counter-strategies to hetero- and transnormative discourse on YouTube

The discourse surrounding transgender people has for a long time been influenced by certain narrative practices necessary to authenticate people's trans status to medical professionals. This conventional narrative (master narrative), based on ideals of hetero- and cisnormativity, has led to stereotypical representations of trans identities. These largely continue to exist today. Nevertheless, counter-discourse to these stereotypical representations is becoming more prominent. Particularly YouTube has become an increasingly popular platform for counter-discursive action. The current case study therefore focusses on two transgender YouTubers who challenge the normative ideals by creating their own counter-discourse. The YouTubers address four major topics of stereotypical representation: the ideal of binary gender, heterosexuality, the wish to transition in order to pass as cisgender, and the belief that transgender people have always identified as the other gender. The two creators recognise the discursively reproduced stereotypes and use a combination of five different strategies to refute them: INVERSION, PARODY, COMPLEXIFICATION, SHIFT, and PERSONAL EXPERIENCE. Making use of these strategies, the subjects' positive discourse aims at presenting a multi-faceted representation of transgender identities.

Introduction

The discourse surrounding transgender people has for a long time been influenced by certain narrative practices necessary to authenticate people's trans status to medical professionals (Dame 2013: 43). The conventional narrative, largely based on ideals of heteronormativity, has led to stereotypical representations of trans identities and normative ideals believed within the community. These continue to exist today. Currently, YouTube is one of the most prevalent platforms for trans people to exchange ideas and information on transgender issues. However, even a lot of the videos found there often show conventionalised narratives, representing normative expectations. This kind of discourse may make some trans people feel misrepresented and marginalised. Nevertheless, there also exist other videos that provide a counter-discourse to these normative ideals. The two YouTubers that are the subjects of this study share such videos on their channels. The data will be approached from the perspective of Positive Discourse Analysis, focussing on how the four normative ideals presented above can be challenged. Specifically, I ask the question: Which strategies are used by the two YouTubers to construct counter-discourse to the normative ideals typically represented in transgender YouTube videos and to authenticate non-normative identities (including their own)? To answer this question, I first discuss the connection between identity and language, focussing specifically on the expression of gender and normative ideals surrounding transgender discourse, before detailing previous work on counter-discourse (Section 2). Section 3 explains the method used for this analysis, while Section 4 shows the results and discussion. Lastly, Section 5 summarises the main points.

2 Transgender discourse

2.1 Identity and heteronormative discourse

Bucholtz and Hall approach identity as "constituted in linguistic interaction" (2005: 585) and define it as "the social positioning of self and other" (2005: 586, emphasis omitted).
They make sense of identity in interaction by way of five principles, which are interconnected: the EMERGENCE PRINCIPLE sees identity as emerging in and through interaction (2005: 588). The POSITIONALITY PRINCIPLE describes identity as the positioning of a speaker in different roles and categories during interaction (2005: 592). The INDEXICALITY PRINCIPLE explains how an identity can be indexed, i.e., signalled, by several processes in language, such as the labelling of identity categories (2005: 594), which is one of the most overt ways in which identity is expressed in communication. Other processes include "implicatures and presuppositions" (Bucholtz and Hall 2005: 594) to signal one's identity and position oneself in relation to others. The RELATIONALITY PRINCIPLE assumes identity to be constructed through relations between different identity categories and other people, e.g., whether they are similar or different from one another (2005: 598). In interaction, identity can never be represented as whole, only localised. The last principle outlined is the PARTIALNESS PRINCIPLE, which explains identity as being made up of several partial construction processes, rather than one specific one (2005: 606).

Schneider (2001) argues that people possess a "structure of identities" (Identitätengefüge; 2001: 36). This structure of identities is made up of several dimensions of identity and group belongings (e.g., ethnicity, social class) as well as several identity roles in social interaction. In all of this, gender is assumed to be one of the most important dimensions that make up a person's identity (cf. e.g., Bucholtz and Hall 2010: 590). It is defined as a concept apart from anatomy (Feinberg 2006: 205) and considered to be a cultural construct (Stryker 2008: 11). Sex, on the other hand, is often considered biological (e.g., DeFrancisco and Palczewski 2014: 10). While in arguably most people both concepts align (i.e., a person who is assigned female at birth grows up to be a woman), this relationship is neither necessary nor deterministic (Stryker 2008: 11). People whose gender is congruent with the sex that they were assigned at birth are often called cisgender (Stryker 2008: 22). People whose gender identity does not align with the gender they were assigned at birth are referred to as transgender. Many people take this to mean that transgender people identify with the opposite (i.e., other binary) gender (cf. e.g., Stone 2006: 22). This idea is part of a heteronormative viewpoint, which is defined as "those structures, institutions, relations and actions that promote and produce heterosexuality as natural, self-evident, desirable, privileged and necessary" (Cameron and Kulick 2003: 55). Heteronormativity, therefore, assumes binary gender and heterosexuality (Motschenbacher 2014: 244), excluding the possibility of non-binary genders, when it has been shown that, in fact, there is a multitude of gender categories which cannot simply be explained by a simple binary: "Numerous conceptions of gender confront one another in a multi-dimensional space, each of them proposing a different alternative to male and heterosexual norms" (Beaubatie 2021). With all these different gendered possibilities, the importance of agency and self-identification is increasingly stressed (e.g., Borba and Milani 2017: 16).
While the social and legal sphere concerning the LGBTQ+ community has changed a lot in the 21st century (e.g., legalisation of same-sex marriage, antidiscrimination laws, the declassification of transgender as a mental disorder), transgender people still find themselves struggling for access to resources: "These resources […] allow those in the dominant position to impose their conception of gender as being the only legitimate one" (Beaubatie 2021). In consequence, transgender people often still find themselves in the position of needing to authenticate their transgender identities to medical professionals in order to gain access to clinical assistance to medically transition (Dame 2013: 43; cf. also Cromwell 2006). Not only do transgender people need to produce such a life narrative at all; this story is also often expected to follow a very stereotypical pattern, as aptly summarised by Zimman (2012: 12-13):

[A] trans person knows from a young age that they are not meant to be a boy/girl, despite others' perception of them; perhaps there was some kind of mistake. They likely spent each night hoping that they would wake up the next morning with a different body. They always found themselves romantically and sexually attracted to members of the 'same' sex, while preferring the friendship and activities of members of the 'opposite' sex. The earlier these patterns emerged, the stronger the patient's claim to an unchangeable gender identity. […] For the true transsexual, intense distress over the gendered characteristics of the body create a desire for hormonal treatment, genital surgery, and any other procedure that might be needed to produce a normative male or female body.

This narrative delineates what is considered the norm for transgender people. The four main points considered are: a) the self-identification with the other binary gender than assigned at birth, which might be based on the fundamental belief that there are only the two binary genders; b) heterosexuality, with transgender people identifying as homosexual before the recognition that they are actually the other (binary) gender than assigned at birth; c) the need to 'fully' transition (i.e. undergo hormone treatment and surgeries) in order to pass; and d) the idea that people who are transgender have always known they identified as the other gender. Raun (2014: 371) calls this the "archetypal story of transsexuality". Hausman (2006) stresses this narrative's importance for the transgender community, as it can help others to navigate "the strict protocols of the gender clinics" (2006: 337). However, these normative ideals can also be harmful, as they are based largely on heteronormativity and other features of cisnormativity, such as the consistency of gender identity (cf. Vergueiro 2015, as discussed in Borba and Milani 2017: 9), especially since they not only exist as ideals in society in general, but are also partially assumed within the transgender community (cf. Jones 2019), and continue to exist today.

Counter-discourse (on YouTube)

The data in this study are approached with Positive Discourse Analysis (PDA). PDA is related to the field of Critical Discourse Analysis, with which it shares the assumption that language can assert power. However, PDA shifts the focus from how inequalities are reproduced to texts that "seek […] possibilities for transformations which can overcome or mitigate limits on human well-being" (Fairclough 2013: 14).
The driving factor behind PDA, then, is to analyse texts that are positively connotated (Martin and Rose 2007: 315). It assumes that the way the world is represented in discourse can positively impact and change the way people see the world by providing a COUNTER-DISCOURSE (cf. Macgilchrist 2007: 75) to the mainstream representations. "These stories, or counter-stories […] help to document, and perhaps even validate, a 'counter-reality'" (Andrews 2004: 2). Counter-narratives are said to relate back to master narratives, i.e., the prevalent stereotypical narratives, and are sometimes created by first presenting the master narrative in order to subvert it later on (2004: 1-2). DeFrancisco and Palczewski refer to this act of counter-discourse as "talking back", which "is not mere talk but talk with a political consciousness" (2014: 118).

Many young transgender people use the internet to gain information and knowledge on trans issues (Jones 2019: 86). These online resources have become increasingly used to discuss relevant issues such as "gaining access to medical care, 'passing' guides, the nature of trans as an identity category, as well as performing advocacy on these issues" (Dame 2013: 45). Not only do these kinds of videos offer trans individuals a way to simply express their identity, but they also offer them the opportunity to challenge and complement typical media representation of transgender people, i.e., to provide counter-discourse. Horak (2014: 574) therefore also views such videos (vlogs) as "a form of political action" in that they provide a self-authoritative media outlet to represent transgender experience and gather communal support. Especially YouTube gives everyone the chance to 'talk back' (Raun 2012: 11) as well as find a community of support.

Nevertheless, some of these YouTube videos, Jones (2019: 86) argues, still show "clichéd and homogenized representations, with young amateur broadcasters following a seemingly fixed approach to both creating and editing a transition diary". Jones addresses this prevalence of normative ideals within transgender discourse by focussing on two transgender YouTubers who make claims of authenticity for their own identity by abiding to transnormative representations. Her subjects, for instance, claim authenticity "by drawing an essential link between heterosexual desire and gender identity" (Jones 2019: 98) and make reference to passing as cisgender being desirable, normal, and typical (2019: 91). This shows the subject's stance towards "what is acceptable or legitimate in terms of transgender identity for her viewers" (Jones 2019: 92). In these videos, creators often position themselves as EXPERTS, a concept often at the core of transgender identity discourse (Dame 2013; Meyerowitz 2006). Expertness can be achieved through personal expression, i.e., through story-telling or in the form of advice to the viewers (Dame 2013: 42). People who give advice online often tend to establish their qualification as advice givers by referencing personal stories and experience (Morrow 2006: 542). While many trans people will identify strongly with the normative script (i.e., the master narrative), "those who do not align with it may feel marginalised or inauthentic due to its prominence" (Jones 2019: 88; see also Garrison 2018 on not feeling "trans enough"). The videos analysed in this study represent a particular part of the counter-discourse.
The vlogs considered in this study actively challenge the normative ideals existent within the typical counter-discourse itself, therefore representing an even more marginalised part of the community. Other studies have considered this specific part of counter-discourse before. For example, Dame (2013: 58-59) discusses extracts from a YouTuber who clearly states that not every transgender person identifies as binary, as well as the fact that some people might not want to transition and/or pass as cisgender. Garrison (2018) presents interviews with trans people who report on the feeling of not being trans enough based on "their atypical life-history and transition narratives" (2018: 626). Crowley (2022) looks at legitimising discourses of non-binary YouTubers and finds that in the videos analysed, 'personal feeling', 'lexical definition' and 'historical fact' were used as legitimising strategies for non-binary identity, which supports the idea of the trans person as expert on the lived experience. What sets this study apart from those mentioned above is the specific focus on the discursive strategies used by the YouTubers to achieve their argumentative goal. While this kind of discourse can indeed be found online, we should also not forget that this deviation from the master narrative "may be available only to those who are privileged by other identity vectors (being white, middle class, partnered, and able-bodied, for example)" (Rondot 2016: 547), and not to the majority of trans people.

Subjects and data

Two YouTubers are the subject of this paper: Ryan indexes himself as a 'trans guy' and 'trans-masculine' who is not binary. Ryan uses he/him/his pronouns and identifies his sexuality as queer. Jess (a pseudonym) is non-binary, pronoun-indifferent and identifies as bisexual; in this paper, I will refer to Jess with the singular use of they/them/their to distinguish clearly between Jess and Ryan. The two creators are friends and collaborate on videos from time to time. Both subjects are middle-class, white North-Americans and therefore represent the most widely acknowledged demographic group on YouTube (cf. Horak 2014: 576). Nevertheless, it should also be noted that these two individuals hardly represent the entire community that engages in counter-discourse to normative ideals within transgender discourse. Rather, they shall be seen as one example of the kind of counter-discourse that people may come across.

The data for this study are 13 YouTube videos collected from the YouTube channels of the two creators, of which three are collaborations between the two. The videos are roughly between four and 20 minutes in length (with most of them between nine and 15 minutes). In their videos, Jess also experiments with including pictures and writing and has other guest appearances. All videos chosen were uploaded in the year 2018 and selected based on the criteria that they address issues relating to the four main topics relevant for the analysis. The four overarching topics are as follows: a) Gender, b) Sexuality, c) Transitioning and passing as cisgender, and d) Questioning one's identity. The videos were transcribed by first downloading the subtitles automatically generated by YouTube, and then checking those transcriptions for correctness (including the addition of pauses and stress markers); the transcription conventions were adapted from Jefferson (2004).

Coding and analysis

The approach to data coding is adapted from the framework of PDA, as described by Macgilchrist (2007). She makes use of the concept of framing, derived from cognitive linguistics, where a FRAME is defined as the "background knowledge 'activated' by one particular word (concept)" (2007: 75).
However, in her paper, she uses the idea of REFRAMING only as part of her strategies for counter-discourse. I propose to regard reframing as the general aim of counter-discourse. Using Macgilchrist's definition of frame, the aim of counter-discourse is to allow for more possibilities of "background knowledge" (2007: 75) than represented by the master narrative, and therefore to reframe a concept either partially or in its entirety. In order to achieve this goal, counter-discourse then makes use of several strategies (cf. Table 1). All these discursive strategies may occur on their own, but more often occur in combination with each other.

The transcribed data was first sorted according to topic (whereas some topics, naturally, overlapped). After the topics were assigned, the data was coded according to the five counter-discursive strategies. Cases in which the strategy was not immediately apparent were discussed with another researcher. While discourse on gender and sexuality cannot always be categorised as counter-discursive and other dimensions of discourse might appear more often in other contexts, the data analysed here mostly presented the identities spoken of as conflicting with normative representations, making the counter-discursive nature of this particular data explicit. Furthermore, the two YouTubers approach the topic mainly as an issue of identity, often with a focus on their own experience, and discuss other sociopolitical dimensions of impact only rarely, unless it is part of their own experience.

For the analysis, mainly linguistic strategies were considered. However, as the data is video data, other modalities than speech, e.g., gestures and writing, could not be entirely disregarded. The different modes of communication are often understood "as intimately connected, enmeshed through the complexity of interaction, representation and communication" (Jewitt 2009: 1; see also Bucholtz and Hall 2016) and therefore aid in meaning-making. The gestures considered here present merely a small part of those that could be considered in a fully multimodal study; explained in detail are those needed to understand the linguistic utterances. These are METAPHORICS, which are often used to express abstract concepts, as well as DEICTIC GESTURES, which are used to "point to a location in gesture space that stands for an abstract concept" (McNeill and Pedelty 1995: 65).

4 Results and discussion

Gender

The normative representation of binary gender has several facets to be considered here: one of those is the fact that some people deny the existence of non-binary people (JESS: "they don't believe what I'm saying. […] it's just that like (.) non-binary people don't exist"). Another is the idea that non-binary people should not be considered under the transgender label. Ryan discusses this point in Example 1:

(1) RYAN: […] there's a lot of things that happen in this community. A lot people that like (.) like gatekeep or like give attitude to people who are non-binary and say that non-binary people shouldn't be under the trans umbrella and you should just be non-binary or trans, like you can't (.) be […] together.
In order to counter these stereotypes, the two subjects first allude to them in order to make people aware of these norms, creating their counter-narrative in relation to the master narrative (cf. Andrews 2004: 1-2). The stereotypes are countered by way of a very open and broad gender conceptualisation and by Jess and Ryan explaining their own gender experience. The subjects explicitly state that there are more gendered variants than just the two binary genders usually recognised. To conceptualise these different genders, both YouTubers make use of COMPLEXIFICATION, as will be shown in Example 2 in combination with Figure 1. The excerpt starts with an instance of PARODY, where Ryan mocks the fact that people assume gender to be only binary. He indicates the parodic meaning by laughing ("gender is binary. @"), and saying "l o l". Then, he goes on to conceptualise gender as a spectrum with the two binary genders (male and female) on both ends and other identities existing anywhere on the spectrum. This is a COMPLEXIFICATION of the issue, as it makes room for other possibilities of gender identities than just the two binary genders (cf. Borba and Milani 2017: 15; Richards et al. 2016: 96). This statement is underlined with PERSONAL EXPERIENCE when Ryan locates his identity "in the middle, but more on the man side". This representation is aided by his gestures (cf. Figure 1): First, Ryan indicates the two binary end points of the spectrum. He then moves one of his hands to indicate the middle, and moves it again into the direction of his left hand (which is indicating the male point). This shows a combination of metaphoric and deictic gestures: He uses his hands to indicate how he conceptualises gender metaphorically (an abstract concept) and combines this with deictic gestures and language to indicate a point on the conceptualised spectrum (e.g., "on this side"). The conceptualisation of genders, first by Ryan and then also by Jess, makes use of orientational metaphors (cf. Lakoff and Johnson 2003: 14-21), both in gestures and speech, as a way to anchor the abstract concept of gender categories in space and also to contrast different categories (e.g., the two binary genders as two poles). Even though his own conceptualisation of gender is quite specific, Ryan also leaves other options of conceptualisation open by saying "that's how I see it". Jess then uses COMPLEXIFICATION even further by stating that other conceptualisations of gender could also be possible and that different identities could exist in different locations than those indicated by Ryan. In order to do so, they make use of Ryan's gesture space, and construct their conceptualisations on top of Ryan's by moving their hand above the spectrum indicated by Ryan and then moving their hand below the spectrum. Again, these gestures are both metaphoric and deictic in nature. Jess also uses deictic language that accompanies the gestures by stating that people could "exist like up he:re […] and down] there". They then COMPLEXIFY the issue even more and state that the "gender map" could exist in one place, indicating the map in Ryan's gesture-space which includes Ryan's spectrum, and "somebody is not even on it". Ryan agrees with Jess's conceptualisation ("a:bsolute[ly]"), and even goes a step further by saying "this doesn't even exist". This indicates Ryan's awareness that these conceptualisations are just an imagination and do not exist as an actual space.
These different kinds of conceptualisations exemplify the idea that there is a vast amount of gender categories and identities that could exist (cf. Borba and Milani 2017: 15-16). Ryan furthermore mentions that his identity shifted over time: While he now identifies more towards the middle of the spectrum, a few years ago he identified more towards the male pole of the spectrum. This questions the unchanging nature of gender (cf. Vergueiro 2015, as discussed in Borba and Milani 2017: 9). This COMPLEXIFICATION of his own identity is also presented from the perspective of PERSONAL EXPERIENCE. Ryan not only argues against the stereotypical representations as an objective observer, but also authenticates his point of view by infusing it with his own experience. This is related to the relationality principle by Bucholtz and Hall (2005). However, as opposed to what Bucholtz and Hall discuss, in this case the authority is not granted by way of "institutionalized power and ideology" (2005: 603), but by the idea that transgender people are to be regarded as experts on the lived trans experience (cf. Dame 2013; Meyerowitz 2006). This authority then gives credit to Ryan's arguments.

Ryan indexes his identity with several labels ("trans-masculine", "trans guy"), and openly refuses to take on other specific labels ("binary", "non-binary"). This relates to the indexicality principle, as mentioned by Bucholtz and Hall (2005: 594). In doing so, Ryan positions himself as a non-normative person. In a similar vein, Jess indexes their identity by using several labels, which they then also evaluate: "I recently came out a:s genderqueer as trans as non-binary as a whole bunch of squiggly cool words." This self-description is an instance of PERSONAL EXPERIENCE. After Jess uses the labels above to describe their identity, the conversation continues as follows:

Here, Ryan uses PARODY to talk about Jess's identity by invoking the norm that non-binary people should not identify under the trans umbrella (i.e., trans people are always binary), and turns it around by mocking this idea. Since Ryan and Jess are friends, Ryan is fully aware of Jess's identity. The question "Did you just say trans (.) and non-binary?" therefore serves as an exemplification of what people might stereotypically think when they hear Jess describing themselves with those two identity markers. Ryan uses polyphony to convey irony in his parodic performance, imitating a person who is of the opinion that a non-binary person should not be considered trans. This process is described as denaturalisation by Bucholtz and Hall (2005: 601-602), in which speakers may use parodic performance to create a false image of their own identity (or ideological stance). Ryan uses this strategy to position himself as a person who does not condone such normative thinking (see positionality principle, Bucholtz and Hall 2005). However, afterwards, he also seemingly feels the need to clarify his position on the issue, using INVERSION: "today we're gonna talk about how you ca:n identify as nonbinary and as trans at the same time? And how that's okay". On another occasion (Example 5), Jess again uses PERSONAL EXPERIENCE to express their own identity. In this segment, they make their claim to authority on the issue of their non-binary identity based solely on their own experience. This therefore exemplifies a rare occurrence of PERSONAL EXPERIENCE without the co-occurrence of another strategy.
Sexuality

Sexuality is not discussed as much in the videos chosen for the analysis as the topic of gender. However, the two YouTubers still show an awareness of the fact that sexuality is often represented heteronormatively in discourse, even if this is sometimes not explicitly named. Especially Ryan talks about his own sexuality and his journey to finding his sexual identity. He was confronted with stereotypes by his family, who assumed that (after coming out as trans) he "still only wanted to be with women, and […] wanted to be with lesbian women", as well as the fact that they "expect you to like women forever". This seems to be related to the ideal of an unchanging identity, similar to what Vergueiro (2015, as discussed in Borba and Milani 2017: 9) describes for cisnormative ideology. It is important to notice, however, that this norm was reproduced by Ryan's family members, who are not part of the trans community. In fact, the videos analysed do not contain any mention of these stereotypes being reproduced within the community. It is possible that this is indicative of normative heterosexuality not being as prevalent anymore within the trans community. In any case, the stereotypes reproduced outside the community still have an impact on the perceived norms within the trans community.

In his videos, Ryan talks about his sexuality very openly and introduces his own story on finding and accepting his sexuality. His own journey starts off with a very stereotypical narrative representation:

[…] I did not know that it was okay for girls to like girls

This narrative, as Zimman (2012: 12) describes it, is very normative: trans people identify as homosexual before their realisation that they are trans, and identify as heterosexual once they come out. However, after this initial part of his story, Ryan quickly diverts from the master narrative:

(7) RYAN: […] when I came out as trans, I was like 'I still like women, so I guess I don't know how to identify myself, I'm a lesbian'. But here's the thing, okay, I:-(.) I feel like I have (.) repressed so much of: who I was, because I wasn't (.) happy with how I was presenting, and (.) I wasn't out as trans, and people did not see me as male, that I felt like I was repressing so much that, when I came out and I was finally able to be seen as who I am inside, and I felt (.) so comfortable with my body, I was able to say 'Yo, you know you know what, I actually also like men'. So I like men, and I like women. And I also like people who are in between and people who have no gender als-it l:i:terally doesn't matter to me. Because (.) people are people.

In Example 7, three different strategies are represented: PERSONAL EXPERIENCE is used to describe Ryan's own identity. By relaying his personal story, Ryan can give credibility to the arguments he makes, since his own experience in this area makes him an expert on this issue (cf. Dame 2013: 44). It is furthermore a case of COMPLEXIFICATION, since he is stating that he is not only attracted to women and men but also to other genders, e.g., non-binary people, and that to him "people are people". It is therefore not only a COMPLEXIFICATION of the topic of sexual orientation, but also of gender identities. Finally, this example can also be seen as a SHIFT: Ryan proposes that the way he identified his sexuality when he was not out as trans was not actually his real sexual identity; instead, the way he felt and presented came about because of his repressed feelings of actually being male.
He SHIFTS the issue from his identity as being attracted to women (i.e., being heterosexual, or, before being out as trans, indexing himself as lesbian) to the topic of his own comfort level with his gender identity and the way he is perceived by society. Once he was seen as the person (and gender) he actually is, he finally had the opportunity to express his sexuality truthfully. Similar to the conceptualisation of gender, sexuality is represented very openly in the videos analysed, often with the metaphor of a spectrum. This spectrum, however, does not only include sexual orientation, but also represents other dimensions of a person's sexual identity, e.g., concerning asexuality or polyamory. This leads to a multifaceted representation in terms of the "structure of identities" (cf. Schneider 2001: 35-36), for which even this one part of identity, sexuality, is made up of several entities and group belongings. The spectrum that Jess and their video guest describe for sexual orientation in another video is conceptualised as the "range between things", which can have "multiple points". This is a case of COMPLEXIFICATION, since they describe sexual identity as consisting of several layers, as opposed to just one layer that would indicate only two identities, hetero- and homosexuality. Jess does not make clear reference to their own sexual identity in the videos chosen for analysis. They only mention it implicitly once, while using INVERSION to state that "bisexuals are wonderful". While Jess does not explicitly refer to themselves here, they use both their hands to caress their face while saying this, signalling that they are talking about their own identity. Therefore, it also counts as a PERSONAL EXPERIENCE, even if it is just implicit. By stating this, they not only index their own identity but also position themselves as someone who thinks positively about atypical identities.

Transitioning and passing as cisgender

The stereotypical representation of transgender people includes the idea that everyone wants to transition and pass as cisgender. This normative ideal is referenced in the videos several times. For instance, Ryan recalls that transitioning used to be represented in a very specific way. This normative narrative was reproduced online as well as offline, and other forms of transitioning, or the option of not transitioning, were not accepted. Jess explicitly mentions that a person's transition status does not determine their trans identity. Jess furthermore explicitly states that there are several people who do not want to, or cannot afford to, undergo certain steps in the transition process. Example 9 therefore also represents the strategy of COMPLEXIFICATION, and Jess especially pays attention to people's self-determination in the transition process, since the individual transition process might include certain steps in transition, or it might not. It is also a PERSONAL EXPERIENCE, though, since Jess talks about their own transition journey and is able to validate their own trans identity with the definition they provide. They therefore clearly position themselves as a member of the trans community. Agency over one's own body is another topic in both Jess's and Ryan's videos.
In Example 10, which overall represents a PERSONAL EXPERIENCE, Ryan describes why he does not want to have a cis-looking body and how he feels in his own body:

(10) RYAN: So kind of like going off of tha:t, it's really important that I make the distinction that I had top surgery 'cause I wanted a flat chest and it wasn't, I don't think, related to I'm a boy I need a flat chest. […] I would love to have phalloplasty in the future and it's not because I wanna be a man. It's because it's something that would make me more comfortable and it doesn't make me more of a man to have phalloplasty or not, doesn't make me more of a man to wear a packer or stand to pee, it doesn't. What makes me more of a man is that I don't say it makes me more of a man. That's not a thing that I say. I'm more me. I'm more Ryan. […] I've always just wanted the body that I felt comfortable with, and not the body that someone else was telling me (.) that I should feel comfortable in.

Ryan begins by describing that he used to feel like he had no agency over his body. Because of the intense dysphoria that he experienced due to this lack of agency, he wanted to change his body to appear more masculine. However, he did not do it because he wanted to appear cis. Instead, he chose to undergo surgery (and take testosterone) in order to feel comfortable in his own body. He therefore SHIFTS the issue away from the ideal of transitioning in order to pass as cis to taking steps to ensure that he feels comfortable in his body, whatever those steps are. While Ryan does admit that the body he wishes to have is a masculine body, he explicitly states that his reason is not a belief that his body's appearance determines his gender (also representing an INVERSION). He thereby clearly rejects the norm that a cis body is the "ideal body" and explicitly states that he does not wish to follow the path that is represented by the master narrative. Nevertheless, Jess and Ryan also realise that some trans people might want to pass as cisgender and might want to transition in a normative way. This can be seen as a further COMPLEXIFICATION, since both Jess and Ryan accept parts of the two frames presented, i.e., they accept that there are people whose identity does align with the typically represented norms as well as people whose identity does not. They furthermore recognise all these identities as valid, and also specifically pay attention to assuring every individual that they should transition the way they wish by emphasising "follow YOUR path" in writing, thereby aligning themselves with the view that self-identification and self-determination are important (cf. Borba and Milani 2017: 16).

Questioning one's identity

The last normative representation to discuss in this study is the idea that every transgender person is expected to have known from a young age that they are not the gender they were assigned at birth. The stereotype also includes the view that everyone who does not fit this ideal, everyone who is questioning their gender, and everyone who is hesitant regarding transitioning is not really transgender. This is recognised by the two YouTubers in several ways. Ryan, for instance, explains:

(12) RYAN: […] when I was younger, I (.) questioned myself constantly. […] and because I questioned myself so much, I was like 'Ah, I can't be trans. Trans people don't question themselves this much.'

This clearly shows that Ryan was influenced by the representation of transgender people not being allowed to question their identity.
Because of this representation, Ryan believed that he was not transgender for a long time. Jess makes explicit reference to representation on YouTube: (13) JESS: […] when folks do come out as not cis on YouTube, it's usually an assured proclamation, followed by their plan of action. Something like this 'I need to tell you all that I am a trans guy, and I've already been on hormones for almost two weeks now. This is who I am and you need to know because changes are coming quick'. Or 'I am a trans woman, I'm totally sure, I've always known from a very young age and I'm starting my medical transition as soon as I can. It's going to happen, welcome to it'. These instances above clearly show what people perceive to be the norm. Jess talks about this very assured representation again in Example 14, addressing the fact that it is actually common for people to question their identity as well as their transition process: (14) JESS: […] And it turns out, and I didn't know this, that a lot o:f people, transguys, trans-masculine people, non-binary who-people who want surgery, a lot of people who: think about having surgery are not totally sure at first. And I didn't think that was the case. I used to think that all trans guys, like, always knew they wanted top surgery and they never questioned it, and they did it as fast as they could and as soon as the surgery was done they felt ama:zing. It was like euphoric and fantastic. But so-so much of that is (.) inaccurate. Surpri:se, I learned that the trans narrative isn't the same for everyone. Jess mentions how the representation online influenced the way they perceived the reality of being trans, namely thinking that trans people were never wondering, questioning, or doubting themselves and what they wanted. However, Jess counters this view by using INVERSION, stating explicitly that there are a lot of people who "are not totally sure at first" and addressing their impression of the stereotypical narrative often being "inaccurate". They also use a combination of INVERSION and PARODY when they say "Surpri:se, I learned that the trans narrative isn't the same for everyone". The parodic effect is created by referencing the stereotypes in the master narrative which give people the impression that every trans person has the same story. Their mention of "Surpri:se" indicates that they actually were surprised to find out some trans people do question themselves (because they also had had the impression that they did not). However, it also shows that they now feel foolish for having believed that the trans experience actually was the same for everyone. This indicates the power that the master narrative can have over people. Ryan's story of how he found out that he is trans starts out similar to the master narrative, but he also says he questioned himself a lot: (15) RYAN: […] And I knew that I was somehow somewhat trans but I didn't even know what that meant. And I kept going back and forth there so: long 'yes, I'm trans; no, I'm not; yes, I am; no I'm not.' […] So I think that the reason why I know I'm trans is because of the back and forth. And because of the repressing my trans identity so much that it kept coming back so hard. Ryan uses this personal story to INVERT the master narrative (the questioning being the reason for his self-discovery). He does so by referring to PERSONAL EXPERIENCE, stating that it is not only normal but that it is actually good to question oneself, since it can help someone come to the right conclusion. 
Furthermore, both Jess and Ryan use SHIFTS to represent the issue of questioning and doubting. Jess uses an instance of PERSONAL EXPERIENCE and recalls that they did not identify as trans for a long time, and were also not able to accept their identity as non-binary. This was, however, not due to the fact that they truly doubted their own identity, but rather because the stereotypical representations found in the master narrative made them feel that their identity was not valid. That is the reason why they were not able to accept their own identity and come out for a long time. Jess explains that the reason why they felt like their identity was not valid was internalised transphobia, and that they struggled with accepting their identity: "for a lo:ng time I convinced myself that I just wasn't comfortable (.) using the word trans […] I didn't feel trans enough". Jess describes how the normative ideals represented in transgender discourse influenced the way they perceived themselves. They not only feared indexing their identity as trans when talking to others, but also admitting it to themselves. Even though they had, in a way, accepted their gender identity, they did not feel comfortable labelling their identity in this specific way. This therefore SHIFTS the issue from Jess actually doubting their identity to a fear of accepting and expressing this identity because of the representations they were surrounded by. Jess uses this PERSONAL EXPERIENCE to also lend their story credibility. Since they experienced it this way, they can directly oppose the people who criticise their identity. The feeling of not being trans enough is a topic that is mentioned quite often in the videos analysed and seems to be a recurring feature of non-binary trans discourse in general (cf. Garrison 2018). Ryan experienced this feeling with his doctor, who told him that he is "not trans enough (.) to get into the program to transition". Jess and Ryan posit this representation of trans people's identity being only valid when they adhere to the normative ideals as the reason that someone might doubt their identity. Ryan explains this SHIFT: because trans identities are often denied by cisnormative ideology (cf. Borba and Milani 2017: 9), people might assume trans identities to not actually exist. Ryan uses this point to argue for the opposite: there is no denying an identity. The reason for feelings of doubt is a different one: people are influenced by society to think that being trans might not be a valid option. Ryan actually sees people who question themselves not as being in doubt about their identity but believes that they instead "doubt the process of figuring out who they are".

General discussion and conclusion

Generally, both Jess and Ryan provide counter-narratives to the mainstream discourse by addressing the master narrative directly. They show awareness of the topics in focus and of how they are typically represented. In 2018, normative ideals surrounding transgender people were still reproduced online, as can be clearly seen in some of the examples presented in Subsections 4.1 through 4.4. This explicit mention of the normative representations supports the idea that counter-stories are usually produced with an awareness of the master narrative and may be created by referring to it in order to subvert it (cf. Andrews 2004: 1-2). In fact, the counter-discourse created by the two YouTubers heavily relies on their own personal experience with the normative stereotypes and how these were used to marginalise them.
The research question regarding the strategies used for creating counter-discourse was addressed by focussing on four topics of stereotypical representations of trans identities: identification with the binary gender other than the one assigned at birth, heterosexuality, the wish to transition in order to pass as cisgender, and the belief that people who are transgender have always known they identified as the "other" gender (cf. Cromwell 2006: 511, 514; Zimman 2012: 12-13). The concepts of gender and sexuality are represented very openly by the two YouTubers. For both identity dimensions, Ryan and Jess present their own conceptualisations, but also leave room for ideas by other people. This is shown not only in their speech, but also in visual representations such as metaphorical and deictic gestures. Ryan and Jess pay particular attention to people's self-identification (cf. Borba and Milani 2017: 16). The two subjects also subvert the master narrative by arguing against stereotypical representations concerning both the topic of transitioning in order to pass as cis and the idea of questioning one's identity. Both Jess and Ryan specifically mention self-determination and agency (cf. Zimman 2017: 92) as valuable factors in a person's transition. Furthermore, they present the process of questioning one's identity as positive, since it can help people make the right decision. In this way, the two YouTubers reframe the discourse surrounding gender and sexuality to offer up more possibilities for background knowledge than can be found in the master narrative, and thereby create an open online space for people in the community who feel misrepresented by the stereotypical story. Their wish to provide a platform for transgender people who might feel marginalised by the mainstream discourse is voiced by them explicitly. The counter-discourse described above is accomplished by a combination of five strategies. The YouTubers use INVERSION to explicitly state that the norms usually represented do not constitute the full truth. By using PARODY, Jess and Ryan activate knowledge of the master narrative by imitating and mocking the views represented in it, often making use of polyphony in doing so. COMPLEXIFICATION is used to uncover aspects of trans identity which are usually omitted from mainstream discourse. In SHIFTING the issue, the two subjects address the fact that the stereotype usually represented is not actually the issue of importance, but that other explanations take precedence. PERSONAL EXPERIENCE is used to express the two YouTubers' personal stories, which serve as an authentication strategy, similar to what Dame (2013) describes as expertness. All of these strategies might be used independently, but they most commonly occur in combination with each other. They serve to oppose and supplement the mainstream representation, but also to validate non-normative identities, which is achieved especially in relation to the value of self-identification and self-determination (Borba and Milani 2017: 16; Zimman 2017: 92). This can be seen again explicitly in instances in which the YouTubers make general statements on this topic, such as "[there] is no right way to trans" (JESS). However, both YouTubers also acknowledge that, over time, stereotypes concerning transgender people have changed. For instance, Jess and Ryan emphasise the fact that there are more options for transitioning available now for people who do not identify with the normative script than there were a few years ago.
In Example 18, both are discussing Jess's top surgery, which they had without taking testosterone. In this respect, both recognise that there are no longer as many "strict protocols" (cf. Hausman 2006: 337) concerning transitioning. However, they also agree that the master narrative is still prevalent within the transgender online community. Despite this prevalence, Jess and Ryan manage to authenticate their own as well as various other trans identities without following the normative discourse. Furthermore, both Jess and Ryan do not make any reference to normative representations of sexuality within the trans community. While they both recognise that stereotypes still exist in society in general (or at least did when they grew up), this stigma seems to be less prevalent in 2018. This could well be an indication of changing social norms, such that sexuality is no longer seen as an issue to consider in being transgender. Since the topic addressed in the study at hand had not been considered from this perspective before, the current analysis can only act as a case study that aims at exploring counter-discourse existent in the transgender community on YouTube. Both the time frame (2018) and the focus on just two individuals (both white middle-class North-American) make it relatively narrow in focus, and there can be no claim for the results to be representative of counter-discourse in the transgender community at large. Nevertheless, the analysis has proven fruitful in showcasing strategies occurring in this type of counter-discourse, and future studies could benefit from making use of the present framework. It would also be especially interesting to see how this kind of counter-discourse is received by viewers. This could be addressed by analysing the comment sections of such counter-discursive videos. Furthermore, the data of this study could be used for experimental research designs testing to what extent these strategies can be deemed successful in making people aware of other identities than those proclaimed by the master narrative. Finally, the present findings could be used as a point of departure for an analysis of counter-discourse on other topics and on other channels of communication.
Structural Relaxation of Oxide Compounds from the High-Pressure Phase

In this chapter, several types of structural relaxation of oxide compounds from the high-pressure phase are systematically introduced in terms of high-pressure comparative crystallography. Structural relaxation of various ABO3 compounds from the perovskite phase to the lithium niobate phase is explained in detail in terms of rotation of the BO6 octahedral frameworks. Depressurized amorphization of ASiO3 perovskites containing large divalent cations (A = Ba2+, Sr2+, and Ca2+) is elucidated by the characteristics of the hexagonal and cubic perovskite structures. The unquenchable Rh2O3(II) phases of group-13 sesquioxides, such as Ga2O3 and In2O3, are confirmed by both experimental and computational studies. Ab initio calculations of Y2O3 show that the unquenchable pressure-induced phase (A-type structure) is not the stable phase under high pressure. Knowledge about the unquenchable and/or metastable phases in recovered high-pressure products is beneficial for advanced computational materials design.

Introduction

The properties of most of the recovered structures can be investigated under ambient pressure. In this case, even the equilibrium phase boundary can be thermodynamically determined by measuring the enthalpy and heat capacity at ambient pressure [1,3]. However, the high-pressure phase is not always quenchable. Because high-pressure phases tend to undergo structural relaxation during decompression, the high-pressure structures cannot always be characterized from the recovered products. The structure can instead be elucidated by in situ X-ray observation under pressure. In particular, a synchrotron radiation X-ray source combined with a diamond anvil cell (DAC) can shed light on the real structure of the unquenchable phase under pressure. Some high-pressure perovskites among ABO3 compounds exhibit unquenchable behaviour during decompression to atmospheric pressure. There are two types of structural instability: conversion to perovskite-related structures and amorphization. In the former case, structural relaxation is accompanied by a symmetry change to a non-centrosymmetric structure, retaining ferroelectricity. The representative example is structural relaxation from the orthorhombic perovskite structure to the lithium niobate structure; many compounds with the lithium niobate structure have been found by high-pressure synthesis. Among other simple oxides, there are peculiar high-pressure phases of sesquioxides that revert to a lower-pressure phase at room temperature. In some cases, there are definite crystallographic relationships between these lower-pressure phases. Ab initio computational studies are indispensable for confirming whether a phase appearing via structural relaxation is metastable. Recent computational studies have predicted novel materials with high-performance functionalities. In particular, data-driven materials design approaches have identified many candidates for high-pressure synthetic materials. However, the predicted materials are not always realized in the recovered products because of structural relaxation during decompression. To enhance the capability of materials design by computational approaches, systematic information about structural relaxation would be highly beneficial. In this chapter, we focus on the relaxation structures and quenchability of high-pressure phases. By classifying the relaxation processes, we discuss the compounds recovered from high-pressure synthesis.
Phase Transition from the Perovskite Structure to the Lithium Niobate Structure

Crystal Structure Relationship Among the Lithium Niobate, Perovskite, and Ilmenite Phases

The typical lithium niobate phase of Li-bearing compounds, which is represented by LiNbO3 and LiTaO3, is only found in similar lithium-bearing compounds, such as LiUO3 [4] and LiReO3 [5], and all of these lithium niobate phases are stable under ambient conditions. In contrast, high-pressure synthesis makes it possible to crystallize lithium niobate phases of various Li-free compounds, such as A2+B4+O3-type [6-13] and A4+B2+O3-type [14] oxides. One of the lithium niobate structures is shown in Fig. 12.1. It is widely known that lithium niobate phases appear through a retrogressive transition from high-pressure perovskite phases. Such a hidden perovskite phase is difficult to confirm from the recovered high-pressure products alone, but it has been directly elucidated by in situ experiments under high pressure [6-9,12-14]. It should be noted that these lithium niobate phases convert from the perovskite structure by structural relaxation during decompression, which is closely related to the rotation of the BO6 octahedra. This is a first-order transformation accompanied by a 2-3% volume change. The typical structural relationship among the ilmenite, perovskite, and lithium niobate phases is shown in Fig. 12.2. As shown in Fig. 12.2, where a specific crystallographic orientation is chosen, the transformation from lithium niobate to perovskite appears to be much easier than that from the ilmenite structure to the perovskite structure. In other words, there must be a large displacement of the BO6 octahedra to trigger the ilmenite-perovskite transition, where the atomic rearrangement should be controlled by diffusion at high temperature. In fact, for many ABO3 compounds, the perovskite-to-ilmenite transition is not observed at room temperature throughout the pressure range, even though the density of ilmenite is smaller than that of lithium niobate.

Perovskite Tolerance Factor

It is believed that such instability is closely correlated with the ionic radii of the A- and B-site cations forming the perovskite structure. The Goldschmidt tolerance factor [15] indicates the distortion from ideal cubic perovskite, and it is also applicable to such instabilities during decompression: t = (r_A + r_O)/[√2 (r_B + r_O)], where r is the effective ionic radius of each element [16]. The tolerance factor is determined from the geometrical relationship of the ionic radii, as shown in Fig. 12.3. The right-hand-side figures show the polyhedral types of the A-site cations. Ideal cubic perovskite (t = 1) is composed of cubo-octahedrally coordinated A cations. Orthorhombic distortion (t < 1) incorporates A-site cations forming square-antiprism-type polyhedra. The Goldschmidt diagram is useful for understanding the degree of distortion from the ideal perovskite structure. The cation radius ratios of various ABO3 compounds are plotted in the Goldschmidt diagram in Fig. 12.4. In Fig. 12.4, the white arrow indicates compounds in the lower right region that tend to convert to the lithium niobate phase, whereas the black arrow indicates compounds in the upper left region that tend to retain the perovskite structure. This trend means that orthorhombic distortion induces conversion to the lithium niobate phase. Orthorhombic distortion is derived from rotation of the BO6 octahedra.
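As a quick numerical illustration of the tolerance factor defined above, the following minimal Python sketch (our own addition, not part of the original chapter) evaluates t for a few A2+B4+O3 compositions. The Shannon-type ionic radii entered below are illustrative values that should be checked against Ref. [16] before any quantitative use.

import math

R_O = 1.40  # assumed Shannon radius of O2- in angstroms

def tolerance_factor(r_a, r_b, r_o=R_O):
    """Goldschmidt tolerance factor t = (r_A + r_O) / [sqrt(2) * (r_B + r_O)]."""
    return (r_a + r_o) / (math.sqrt(2.0) * (r_b + r_o))

# Assumed radii (angstroms): eight-fold coordinated A2+, six-fold coordinated B4+.
examples = {
    "MgSiO3": (0.89, 0.400),
    "CaSiO3": (1.12, 0.400),
    "ZnTiO3": (0.90, 0.605),
}
for name, (r_a, r_b) in examples.items():
    print(f"{name}: t = {tolerance_factor(r_a, r_b):.3f}")

With these inputs, t ranges from about 0.81 (ZnTiO3) to about 0.99 (CaSiO3), illustrating the spread along the dashed t lines of the Goldschmidt diagram.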
Therefore, rotation of the BO6 octahedra can be used to understand the degree of rotation required for conversion to the lithium niobate structure. O'Keeffe et al. [17] suggested that a single rotation Φ about the triad [111] axis of a pseudocubic perovskite lattice (the direction is indicated in Fig. 12.3) can represent the rotation of the BO6 octahedra. The angle can be calculated from the atomic coordinates [18] or estimated from the cell dimensions: Φ = cos⁻¹[√2 c²/(ab)] [17,19]. According to the calculated Φ values of the various perovskite compounds listed in Table 12.1, the critical angle for conversion is estimated to be 15°-16°, except for MgSiO3 perovskite. This value is useful for exploring compositions that may adopt the lithium niobate structure.

[Fig. 12.4 caption: Goldschmidt diagram with the tolerance factor (t) of ABO3 compounds. The tolerance factors (dashed lines) were calculated from the ionic radii of the six-fold coordinated B cations (x axis) and eight-fold coordinated A cations (y axis). Open squares are compounds that convert to the lithium niobate structure under decompression. Solid squares are compounds that quench as the perovskite structure at ambient pressure.]

Structure Stability from a Computational Viewpoint

Ab initio calculations provide useful information about the phase stability under high pressure. Enthalpy calculations have revealed the structural stability of the perovskite, lithium niobate, and ilmenite phases of several compounds. All of the lithium niobate phases are metastable under pressure. As an example, the relative differences of the enthalpies of the three phases of ZnGeO3 are plotted as a function of pressure in Fig. 12.5. The lower-pressure phase (the ilmenite structure) changes directly to the perovskite structure. Therefore, we can conclude that the lithium niobate phase is a metastable structure of ZnGeO3 [22]. Similar trends have been found for MnTiO3 [31], MgGeO3 [32], and ZnTiO3 [10] by enthalpy calculations. A further transformation from the perovskite structure to the post-perovskite structure has been confirmed for ZnGeO3 [22] and MgGeO3 [32].

Phase Transition Sequence of Silicate Perovskites

For a tolerance factor less than one, as represented by MgSiO3 perovskite, the BO6 octahedra in the perovskite structure tilt to make an allowance for the small divalent cations in the BO6 octahedral corner-sharing framework. The tilting in perovskite has been discussed in detail by many researchers (e.g., Glazer [33]), where rotation does not disrupt the corner-sharing connectivity. As mentioned in Sect. 12.2, if the rotation of the BO6 octahedra reaches a limit, conversion to the lithium niobate phase occurs through a displacive-type phase transition. In contrast, perovskites bearing large divalent cations are formally expressed by a tolerance factor greater than one (as shown in Fig. 12.4).

Crystal Structures of Hexagonal Perovskite and Structural Relation with Cubic Perovskite

Perovskites containing large divalent cations tend to expand and form a BO6 face-sharing octahedral framework to accommodate the large cations, where the B4+ ions in the face-sharing octahedra cause the oxygen anions to move closer together. The 9R and 6H hexagonal perovskite structures of BaSiO3 [37] are shown in Fig. 12.7. In Fig. 12.7, both the SiO6 octahedra and the barium atoms are shown along the c-axis direction to clarify the relationships of the stacking sequences. The 9R phase (space group R-3m) resembles the 6H phase (space group P63/mmc) in that the SiO6 octahedra are periodically connected by face sharing.
The difference is the periodicity of the face- and corner-sharing of the SiO6 octahedra. In the c-axis direction, 9R perovskite exhibits a (chh)3 sequence, whereas 6H perovskite exhibits a (cch)2 sequence, where c and h correspond to corner- and face-sharing octahedra, respectively. For perovskites, it is known that such hexagonal polytypes lie in a sequence from 9R to 3C cubic perovskite [46]. In this hexagonal sequence, pressure increases the frequency of corner-sharing octahedra. This relation can be extended to cubic perovskite (3C), which consists only of corner-sharing octahedra, as shown in Fig. 12.8.

Phase Diagrams: Experiments and Ab Initio Calculations

The ionic radius can be controlled under high pressure. In particular, larger A-site cations in perovskites, such as Sr2+ and Ba2+, are sensitive to pressure. The A-site cations are compressed more strongly than the SiO6 octahedra, and the frequency of face-sharing octahedra then gradually decreases with increasing pressure. Furthermore, as shown in the phase diagram based on high-pressure experiments in Fig. 12.9, there is a systematic relation between the transition pressure to the cubic perovskite and the A2+ radius. For BaSiO3, the transition occurs above 130 GPa [46]. In contrast, the transitions to the cubic perovskites of CaSiO3 and SrSiO3 occur at the significantly lower pressures of 15 and 38 GPa, respectively. Note that SrSiO3 does not transform to a 9R-type hexagonal perovskite such as that of BaSiO3. Furthermore, no hexagonal perovskites are found for CaSiO3. These results can be simply explained by the difference of the cation radii on the A sites. Figure 12.10 shows the phase diagram of BaSiO3 at 0 K from ab initio calculations [46]. The phase transition sequence is consistent with that from the high-pressure experiments, although the calculated transition pressures are underestimated.

Amorphization Under Decompression at Room Temperature

In the cubic and hexagonal perovskites stabilized under high pressure, the A-site cations are compressed to retain the BO6 framework structure. In other words, the cations expand under decompression. Among the high-pressure phases of silicate perovskites, the first reported example was the amorphization of CaSiO3 perovskite, which was confirmed at a pressure very close to 1 atm. Because the ambient wollastonite phase is composed of a SiO4 tetrahedral chain structure, the cubic perovskite structure cannot revert to the ambient structure at room temperature. The corner-sharing BO6 framework can adjust to smaller cations, as suggested by the conversion to the lithium niobate structure. However, the framework is not as flexible for larger cations. Therefore, expansion of the A-site cations disrupts the framework and makes the structure amorphous. Amorphization of the cubic perovskite structure has also been observed for SrSiO3 [38] and BaSiO3 [46]. Considering the structural similarity, the hexagonal perovskite structures could also become amorphous during decompression. The pressure for amorphization is believed to be related to the A-site cation size in the hexagonal structure, because the BO6 face-sharing frequency of hexagonal perovskites is correlated with the cation size. The experimental results for BaSiO3 are shown in Fig. 12.11. The 6H phase begins to decompose at 21.9 GPa. In contrast, the 9R phase persists at 8.9 GPa and suddenly becomes amorphous at 4.8 GPa. At 1.8 GPa, both phases have become completely amorphous.
As a result, we can conclude that the stability of the 9R phase is higher than that of the 6H phase. However, this type of amorphization has not yet been elucidated by computational approaches. If the ionic radii could be determined under pressure, this type of structural instability related to amorphization could be clarified.

Rh2O3(II) Structure Reverting to the Corundum Structure in Group-13 Sesquioxides

Group-13 sesquioxides, such as aluminum oxide, gallium oxide, and indium oxide, have been widely investigated as attractive electroceramics. Their most stable phases under ambient conditions, corundum (Al2O3), monoclinic β-Ga2O3, and cubic In2O3 (bixbyite-type structure, the C-type rare-earth sesquioxide structure, hereafter denoted C-RES), are used for many applications, such as lasers and transparent electronic devices [47,48]. It is believed that their dense phase is the corundum structure [49]. However, in situ X-ray diffraction experiments have revealed that the Rh2O3(II) structure, which appears as a post-corundum phase under pressure, reverts to the corundum structure under decompression. In Al2O3, the corundum structure transforms to the Rh2O3(II) phase under very high pressure above 95 GPa, and this phase reverts to the corundum structure at ambient pressure after decompression [50,51]. Similarly, the Rh2O3(II) phase of Ga2O3 identified under pressure transforms to the corundum phase after decompression rather than changing back to β-Ga2O3 [52], as shown in Fig. 12.12. Figure 12.13 shows the crystal structures of the Rh2O3(II)-type and corundum phases of Ga2O3 viewed along a specific direction for comparison. A twin-like relation between the Rh2O3(II) and corundum phases can be seen in the vertical direction. Considering the structural resemblance between Rh2O3(II) and corundum, the relaxation of the unquenchable Rh2O3(II) phase to the corundum structure during decompression can be understood. The differences in the static enthalpies of β-Ga2O3 and Rh2O3(II)-type Ga2O3 relative to corundum-type Ga2O3, calculated by density functional theory (DFT) with the local density approximation (LDA), are shown in Fig. 12.14. The transitions from β-Ga2O3 to corundum-type Ga2O3 and from corundum-type Ga2O3 to Rh2O3(II)-type Ga2O3 occur at about 0 and 30 GPa, respectively [52]. According to further phase investigation, the stability field of the Rh2O3(II) phase continues to 130 GPa, where the CaIrO3-type structure appears [53]. For In2O3, in situ X-ray experiments reveal that the stability region of the corundum phase is very narrow, because a single corundum phase is not observed at any pressure [52]. This is consistent with the calculated results, which suggest the absence of a stability area for the corundum phase (Fig. 12.15) [52]. However, the recovered sample after decompression exhibits the corundum phase. Therefore, it can be concluded that the corundum phase appearing in the recovered sample is converted from the Rh2O3(II) phase. The volume change from the Rh2O3(II) phase to the corundum phase is estimated to be 2.1%, which is comparable with the changes of 3.1% for Al2O3 [51] and 2.3% for Ga2O3. The Rh2O3(II) phase of In2O3 does not transform to the CaIrO3 structure, which had been predicted by a computational study [54]. Instead, a denser and more highly coordinated phase with the Gd2S3-type structure has been confirmed at about 40 GPa by a combined experimental and computational study [55]. The enthalpy relations from the DFT calculations are shown in Fig. 12.15.
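Since the transition pressures discussed in this section are obtained from crossings of static enthalpies H = E + PV, a minimal sketch of how such a crossing can be located numerically is given below. This is our own illustration: the two E(V) curves are toy parabolic equations of state with invented parameters, standing in for DFT energy-volume data, and the resulting numbers are not results for Ga2O3, In2O3, or any other compound discussed here.

import numpy as np

def enthalpy_vs_pressure(e0, v0, k):
    """Toy equation of state E(V) = e0 + 0.5*k*(V - v0)**2.
    Returns P(V) = -dE/dV and H(V) = E + P*V, ordered by increasing P."""
    v = np.linspace(1.05 * v0, 0.70 * v0, 2000)  # descending V gives ascending P
    e = e0 + 0.5 * k * (v - v0) ** 2
    p = -k * (v - v0)
    return p, e + p * v

# Phase A: larger equilibrium volume, lower energy at P = 0 (stable at low P).
# Phase B: denser but higher in energy (stabilized by the P*V term at high P).
p_a, h_a = enthalpy_vs_pressure(e0=0.0, v0=30.0, k=1.0)  # invented parameters
p_b, h_b = enthalpy_vs_pressure(e0=2.0, v0=28.0, k=1.2)  # invented parameters

grid = np.linspace(0.0, 8.0, 4000)
dh = np.interp(grid, p_a, h_a) - np.interp(grid, p_b, h_b)
p_t = grid[np.argmin(np.abs(dh))]
print(f"enthalpies cross near P = {p_t:.2f} (toy units)")

The same crossing construction, applied to proper equation-of-state fits of actual DFT energies, is what underlies enthalpy diagrams such as those in Figs. 12.14 and 12.15.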
A-RES Structure of Y2O3 Reverting to the B-RES Structure

Yttrium has an ionic radius similar to those of the lanthanides, so lanthanide ions can be incorporated into yttria to make optical ceramics, such as the Eu3+:Y2O3 phosphor [56] and the Yb3+:Y2O3 laser [57]. Yttria crystallizes in the bixbyite structure (C-RES) under ambient conditions, similar to the lanthanide sesquioxides. The B-RES structure has been confirmed as the high-pressure phase in samples recovered from high-pressure experiments. The A-RES phase, which is expected to be part of the phase transformation sequence of the lanthanide sesquioxides [58], was not found in recovered samples. However, in situ X-ray diffraction experiments performed at room temperature using a DAC revealed the existence of the A-RES phase [59]. Back-transformation to the B-RES structure was also confirmed. The reversible transformation mechanism from B-RES to A-RES can be explained from a crystallographic viewpoint, as shown in Fig. 12.16. The B-RES structure of yttria contains three different yttrium sites. Among these sites, only the Y3 site can be considered to possess six-fold oxygen coordination, because the Y3-O2 distance is too long to be classified as seven-fold coordination, as shown in Fig. 12.16b. With increasing pressure, O2 moves closer to Y3, which results in the formation of seven-fold polyhedra. Upon further compression to 15-20 GPa, the Y3-O2 distance becomes shorter than the average Y3-O distance. The B-RES structure finally changes to the structure shown in Fig. 12.16c, which is equivalent to the A-RES structure. This means that the A-RES structure can be directly derived from the B-RES structure. The volume change from the B-RES structure to the A-RES structure (2.5%) is characteristic of a first-order phase transition. In contrast to the confirmation of the A-RES structure by room-temperature compression experiments, enthalpy calculations performed by DFT with the LDA indicate no stability region for the A-RES structure (Fig. 12.17) [59]. The transition to another highly coordinated structure (the Gd2S3-type structure, Fig. 12.16d) occurs before the appearance of the A-RES phase. In fact, laser-heating experiments under high pressure result in Y2O3 crystallizing in the Gd2S3 structure at about 10 GPa. Therefore, it can be concluded that the A-RES structure appearing under room-temperature compression is a metastable phase.

Concluding Remarks

Large-volume high-pressure apparatus (e.g., cubic, belt, and Kawai-type presses) is a fundamental tool for materials scientists, because high-pressure methods enable the synthesis of novel materials that can be recovered to ambient conditions. High-pressure synthesis provides the opportunity to obtain high-density and/or highly coordinated compounds. However, the recovered product does not always reflect the structure under pressure. If a new structure is found, its stability relation with the lower-pressure phase(s) should be evaluated using computational approaches, such as ab initio calculations. If the structure is a metastable phase, it should be examined for crystallographic similarity with the target high-pressure structure. Conversion to the metastable phase can then be explained by structural relaxation. A trace amount of a high-pressure phase is sometimes found in the recovered products as defects originating from twin structures. This can also serve as an indication of an unquenchable high-pressure phase. In situ X-ray diffraction is the most powerful approach to determine structures under pressure.
In some cases, recompression of the metastable phase recovers the high-pressure structure. During structural relaxation, a symmetry change is likely to occur, as exemplified by the transition from the perovskite phase to the lithium niobate phase described in Sect. 12.2. Relaxation from a centrosymmetric to a non-centrosymmetric structure is important in determining functionality such as ferroelectricity. As mentioned in Sect. 12.3, amorphization is a common phenomenon for high-pressure products under decompression. Therefore, if all or part of the X-ray diffraction profile of the recovered product shows an amorphous-like pattern, the amorphous structure is an indication of an unquenchable high-pressure phase. In situ X-ray experiments using a laser-heated DAC can reveal the structure of the unquenchable phase. Amorphization can be triggered by the expansion of specific cations during decompression. In particular, elucidation of the compression behavior of relatively large cations, such as K+, Ca2+, Sr2+, and Ba2+, would aid in understanding the quenchability of high-pressure structures containing such cations. Therefore, an approach to determine the ionic radii under pressure is required for prediction of the quenchability.
Fermi Gases with Synthetic Spin-Orbit Coupling

We briefly review recent progress on ultracold atomic Fermi gases with different types of synthetic spin-orbit coupling, including the one-dimensional (1D) equal-weight Rashba-Dresselhaus and two-dimensional (2D) Rashba spin-orbit couplings. Theoretically, we show how the single-body, two-body and many-body properties of Fermi gases are dramatically changed by spin-orbit coupling. In particular, the interplay between spin-orbit coupling and interatomic interaction may lead to several long-sought exotic superfluid phases at low temperatures, such as anisotropic superfluid, topological superfluid and inhomogeneous superfluid. Experimentally, only the first type, an equal-weight combination of Rashba and Dresselhaus spin-orbit couplings, has been realized very recently using a two-photon Raman process. We show how to characterize a normal spin-orbit coupled atomic Fermi gas in both non-interacting and strongly-interacting limits, using in particular momentum-resolved radio-frequency spectroscopy. The experimental demonstration of a strongly-interacting spin-orbit coupled Fermi gas opens a promising way to observe various exotic superfluid phases in the near future.

II. THEORY OF SPIN-ORBIT COUPLED FERMI GAS

We consider a spin-1/2 Fermi gas with SOC subject to attractive interactions between unlike spins. One great advantage of the atomic system is its unprecedented controllability. The interatomic interaction can be precisely tuned using the Feshbach resonance technique [18], which has already led to the discovery of the BEC-BCS crossover from a Bose-Einstein condensate (BEC) to a Bardeen-Cooper-Schrieffer (BCS) superfluid [19]. Different forms of SOC, many of which do not exist in natural materials, can also be engineered. The interplay between interatomic interactions and different forms of SOC may give rise to a number of intriguing physical phenomena. Here let us make some general remarks concerning the distinct features that can be brought out by SOC in a Fermi gas:
• SOC alters the single-particle dispersion, which may lead to a degenerate single-particle ground state and may render the topology of the Fermi surface non-trivial [20]. In the many-body setting, a spin-orbit coupled superfluid Fermi gas contains both singlet and triplet pairing correlations [20,22,24,27] and therefore may be regarded as an anisotropic superfluid [22].
• SOC may greatly enhance the pairing instability and hence dramatically increase the superfluid transition temperature [22,23,28].
In the remaining part of this section, we will discuss two particular types of SOC. The first is the equal-weight Rashba-Dresselhaus SOC [56], which is the only one that has been experimentally realized so far. The second is the Rashba SOC, which is of particular interest as it occurs naturally in certain semiconductor materials. However, before we do that, in the next subsection we first summarize the theoretical framework and explain the basics of momentum- or spatially-resolved radio-frequency (rf) spectroscopy, which turns out to be a very useful experimental tool for characterizing spin-orbit coupled interacting Fermi gases. For those readers who are interested in the physical consequences of a particular type of SOC, this technical part may be skipped on a first reading.
A. Theoretical framework

In current experimental setups of ultracold atomic Fermi gases, the interactions between atoms are often tuned to be as strong as possible, in order to have an experimentally accessible superfluid transition temperature. With such strong interactions, there is a significant portion of Cooper pairs formed by two fermionic atoms with unlike spins. Theoretically, therefore, it is crucial to treat atoms and Cooper pairs on an equal footing. Without SOC, a minimum theoretical framework for this purpose is the many-body T-matrix theory, or pair-fluctuation theory [57-62]. In this subsection, we briefly introduce the essential idea of the pair-fluctuation theory using the functional path-integral approach and generalize the theory to include SOC [24]. Within this theoretical framework, both two- and many-body physics can be discussed in a unified fashion [24]. We also discuss the mean-field Bogoliubov-de Gennes equation, which represents a powerful tool for the study of trapped, inhomogeneous Fermi superfluids at low temperatures [42,45,47,48,50,51].

1. Functional path-integral approach

Consider, for example, a three-dimensional (3D) spin-1/2 Fermi gas with mass m. The second-quantized Hamiltonian reads

H = ∫ dr ψ†(r) [ξ̂_k + V_SO(k̂)] ψ(r) + U0 ∫ dr ψ†↑(r) ψ†↓(r) ψ↓(r) ψ↑(r),

where ξ̂_k ≡ k̂²/(2m) − µ = −∇²/(2m) − µ with the chemical potential µ, ψ(r) = [ψ↑(r), ψ↓(r)]ᵀ collects the fermionic annihilation operators ψσ(r) for spin-σ atoms, and V_SO(k̂) represents the spin-orbit coupling, whose explicit form we do not specify here. The momentum k̂α ≡ −i∂α (α = x, y, z) should be regarded as an operator in real space. For notational simplicity, we take ℏ = 1 throughout this paper. The last term in the Hamiltonian represents the two-body contact s-wave interaction between unlike spins. The use of the contact interatomic interaction leads to an ultraviolet divergence at large momentum or high energy. To overcome this divergence, we express the interaction strength U0 in terms of the s-wave scattering length a_s,

1/U0 = m/(4π a_s) − (1/V) Σ_k m/k²,

where V is the volume of the system. The partition function of the system can be written as [61]

Z = ∫ D[ψ(r,τ), ψ̄(r,τ)] exp{−S[ψ, ψ̄]},

where the action S is written as an integral over the imaginary time τ,

S[ψ, ψ̄] = ∫₀^β dτ [ ∫ dr ψ̄(r,τ) ∂_τ ψ(r,τ) + H(ψ, ψ̄) ].

Here β = 1/(k_B T) is the inverse temperature and H(ψ, ψ̄) is obtained by replacing the field operators ψ† and ψ with the Grassmann variables ψ̄ and ψ, respectively. We can use the Hubbard-Stratonovich transformation to transform the quartic interaction term into a quadratic form, at the cost of introducing an auxiliary pairing field ∆(r,τ) that couples to ψ↓ψ↑. Let us now introduce the four-dimensional Nambu spinor Φ(r,τ) ≡ [ψ↑, ψ↓, ψ̄↑, ψ̄↓]ᵀ and rewrite the action in a form quadratic in Φ, involving the 4 × 4 inverse single-particle Green function 𝒢⁻¹, in which the Pauli matrices σᵢ (i = 0, x, y, z) describe the spin degrees of freedom. The Nambu spinor representation treats the particle and hole excitations on an equal footing. As a result, a zero-point energy appears in the last term of the action. Integrating out the original fermionic fields, we may rewrite the partition function as

Z = ∫ D[∆(r,τ), ∆̄(r,τ)] exp{−S_eff[∆, ∆̄]},

where the effective action S_eff consists of a term proportional to |∆(r,τ)|²/U0 together with the term −(1/2) Tr ln[−𝒢⁻¹], the trace being taken over all the spin, spatial, and temporal degrees of freedom. Expanding the pairing field around its static saddle-point value, ∆(r,τ) = ∆₀(r) + δ∆(r,τ), we may decompose the effective action as S_eff = S₀ + δS. Here 𝒢₀⁻¹ is the inverse mean-field Green function and has the same form as 𝒢⁻¹ with ∆(r,τ) replaced by ∆₀(r). We note that the static pairing field ∆₀(r) can be either homogeneous or inhomogeneous.
In the latter case, a typical form is ∆₀(r) = ∆₀ e^{iq·r}, referred to as the Fulde-Ferrell superfluid [63], in which the Cooper pairs condense into a state with nonzero center-of-mass momentum q. Let us now focus on a homogeneous system, where the momentum is a good quantum number, so that we may replace the operators ξ̂_k and V_SO(k̂) by the c-numbers ξ_k and V_SO(k). The fluctuating part of the effective action may be formally written in terms of the many-body particle-particle vertex function Γ(q, iν_n) [61], where Q ≡ (q, iν_n) and ν_n is a bosonic Matsubara frequency. By integrating out the quadratic term in δS, we obtain the contribution of the Gaussian pair fluctuations to the thermodynamic potential as [61]

δΩ = k_B T Σ_{q,iν_n} ln[−Γ⁻¹(q, iν_n)].

Within the Gaussian pair-fluctuation approximation, naïvely, the vertex function may be interpreted as the Green function of "Cooper pairs". This idea is supported by the expression above, as the thermodynamic potential Ω_B of a free bosonic Green function G_B is formally given by Ω_B = k_B T Σ_{q,iν_n} ln[−G_B⁻¹(q, iν_n)]. At this point, the advantage of using the pair-fluctuation theory becomes evident. For the fermionic degrees of freedom, we simply work out the single-particle Green function 𝒢₀ and the related mean-field thermodynamic potential Ω₀ = k_B T S₀. An example will be provided later on in the study of Fulde-Ferrell superfluidity. For the Cooper pairs, we calculate the vertex function and the fluctuating thermodynamic potential δΩ. In this way, we may obtain a satisfactory description of strongly-interacting Fermi systems [59,60,62]. In the normal state, where the pairing field vanishes (i.e., ∆₀ = 0), we may obtain an explicit expression for the vertex function. In this case, the inverse Green function 𝒢₀⁻¹ is block-diagonal and can easily be inverted to give [24] 𝒢₀(K) = diag[G₀(K), G̃₀(K)], where K ≡ (k, iω_m) and ω_m is a fermionic Matsubara frequency. Here we have introduced the 2 × 2 particle Green function G₀(K) and hole Green function G̃₀(K), which are related to each other by G̃₀(K) = −[G₀(−K)]ᵀ. It is straightforward to show that G₀(K) = [iω_m − ξ_k − V_SO(k)]⁻¹. The detailed expression of the vertex function depends on the type of SOC. In the study of the Rashba SOC, we will give an example that shows how to calculate the vertex function.

2. Two-particle physics from the particle-particle vertex function

The vertex function can describe the pairing instability of Cooper pairs both on the Fermi surface and in vacuum. In the latter case, it describes exactly the two-particle state. The corresponding two-body inverse vertex function Γ_2b⁻¹(Q) can be obtained from the many-body inverse vertex function by discarding the Fermi distribution functions and by setting the chemical potential µ = 0 [64]. One important question concerning the two-particle state is whether there exist bound states. For a given momentum q, the bound-state energy E(q) can be determined from the two-particle vertex function using the pole condition (iν_n → ω + i0⁺) [22,24]

Γ_2b⁻¹[q, ω = E(q)] = 0.    (17)

A true bound state must satisfy E(q) < 2E_min, where E_min is the single-particle ground-state energy. It is straightforward but lengthy to calculate the two-particle vertex function for any type of SOC. Here, we quote only the energy equation obtained using Eq. (17) for the most general form of SOC [34],

V_SO(k) = Σ_{i=x,y,z} (λᵢ kᵢ + hᵢ) σᵢ,    (18)

where λᵢ is the strength of the SOC in the direction i = (x, y, z) and hᵢ denotes the effective Zeeman field. The resulting equation for the eigenenergy E(q) of a two-body eigenstate with momentum q involves the quantity E_{k,q} ≡ E(q) − ε_{q/2+k} − ε_{q/2−k}, where ε_k = k²/(2m).
We note that, in general, the lowest-energy two-particle state may occur at a finite momentum q. That is, the two-particle bound state could have a nonzero center-of-mass momentum. Later, we shall see that this unusual property has nontrivial consequences in the many-body setting. Another peculiar feature of the two-particle bound state is that the pairs may have an effective mass larger than 2m. For example, the bound state with zero center-of-mass momentum q = 0 has a quadratic dispersion for small momentum p,

E(p) ≃ E(0) + Σ_{i=x,y,z} pᵢ²/(2Mᵢ).

The effective mass of the bound state Mᵢ (i = x, y, z) can then be determined directly from this dispersion relation. Another approach to studying the two-particle state with SOC, more familiar to most readers, is to use an ansatz for the two-particle wave function [21,23,65,66] of the form

|Φ_2B(q)⟩ = C Σ_k Σ_{σσ'} ψ_{σσ'}(k) c†_{q/2+k,σ} c†_{q/2−k,σ'} |vac⟩,

where c†_{k↑} and c†_{k↓} are the creation operators of spin-up and spin-down atoms with momentum k and C is the normalization factor. We note that, in the presence of SOC, the wave function of the two-particle state has both spin-singlet and spin-triplet components. Then, using the Schrödinger equation H|Φ_2B(q)⟩ = E(q)|Φ_2B(q)⟩, we can straightforwardly derive the coupled equations for the coefficients ψ_{σσ'}(k) appearing in the above two-body wave function, and from these the energy equation for E(q). For the general form of SOC, Eq. (18), this leads to exactly the same energy equation as the vertex-function approach [34]. Each of the two approaches mentioned above has its own advantages. The vertex-function approach is useful for understanding the relationship between the two-body physics and the many-body physics. For example, it can be used to obtain the two-particle bound state in the presence of a Fermi surface. The approach using the two-particle Schrödinger equation naturally yields the two-particle wave function. Both approaches have been used extensively in the literature.

3. Many-body T-matrix theory

The functional path-integral approach gives the simplest version of the many-body T-matrix theory, in which the bare Green function is used in the vertex function. Here, for completeness, we briefly mention another, partially self-consistent T-matrix scheme for a normal spin-orbit coupled Fermi gas, which takes one bare and one fully dressed Green function in the vertex function [13,28]. In this scheme, we have the Dyson equation G⁻¹(K) = G₀⁻¹(K) − Σ(K), where the self-energy Σ(K) is built from the (scalar) T-matrix t(Q), which in turn is determined by a two-particle propagator χ(Q) in which the trace is taken over the spin degree of freedom only. Note that a fully self-consistent T-matrix theory may also be obtained by replacing, in the equations for the self-energy and the two-particle propagator, the bare Green function G̃₀(K − Q) with the fully dressed Green function G̃(K − Q). We note also that these equations provide a natural generalization of the well-known many-body T-matrix theory [62] that includes the effect of SOC, where the particle and hole Green functions, G(K) and G̃(K), now become 2 × 2 matrices. In general, the partially self-consistent T-matrix equations are difficult to solve [62]. At a qualitative level, we may adopt the pseudogap decomposition advanced by the Chicago group [67] and approximate the T-matrix t(Q) = t_sc(Q) + t_pg(Q) as the sum of two parts.
Here t_sc(Q) = −(∆²_sc/T) δ(Q) is the contribution from the superfluid, with ∆_sc being the superfluid order parameter, and t_pg(Q) represents the contribution from un-condensed pairs, which give rise to a pseudogap ∆_pg. The full pairing order parameter is given by ∆² = ∆²_sc + ∆²_pg. We note that at zero temperature the pseudogap approximation is simply the standard mean-field BCS theory, in which Σ(K) = −∆₀² (iσ_y) G̃₀(K) (iσ_y). Above the superfluid transition, however, it captures the essential physics of fermionic pairing and therefore should be regarded as an improved theory beyond mean field. To calculate the pseudogap ∆_pg, we approximate t_pg(Q) near Q = 0 by its pole expansion, in which the residue Z and the effective dispersion of pairs Ω_q = q²/(2M*) are determined by expanding χ(Q) about Q = 0, in the case that the Cooper pairs condense into a zero-momentum state. The form of t_pg(Q) then leads to an expression for ∆²_pg involving the bosonic distribution function b(x) ≡ 1/[e^{x/(k_B T)} − 1]. We finally obtain two coupled equations, the gap equation 1/U₀ + χ(Q = 0) = Zµ_pair, where µ_pair is the chemical potential of the pairs, and the number equation n = k_B T Σ_K Tr G(K), from which the superfluid order parameter ∆_sc and the chemical potential µ can be determined. This pseudogap method has been used to study the thermodynamics and momentum-resolved rf spectroscopy of interacting Fermi gases with different types of SOC [13,28].

4. Bogoliubov-de Gennes equation for trapped Fermi systems

All cold-atom experiments are performed with some trapping potential, V_T(r). For such inhomogeneous systems, it is difficult to directly consider pair fluctuations. In most cases, we focus on the mean-field theory by using the saddle-point thermodynamic potential and minimizing it to determine the order parameter ∆₀(r). This amounts to diagonalizing the 4 × 4 inverse single-particle Green function 𝒢₀⁻¹(r, τ; r', τ') with the standard Bogoliubov transformation, in which α_η is the field operator of a Bogoliubov quasiparticle with energy E_η and Nambu spinor wave function Φ_η(r) ≡ [u↑η(r), u↓η(r), v↑η(r), v↓η(r)]ᵀ, satisfying the following Bogoliubov-de Gennes (BdG) equation:

H_BdG Φ_η(r) = E_η Φ_η(r).

The BdG Hamiltonian in the above equation includes the pairing gap function ∆₀(r), which should be determined self-consistently. For this purpose, we may take the inverse Bogoliubov transformation and express the field operators in terms of the quasiparticle operators α_η. The gap function ∆₀(r) = −U₀⟨ψ↓(r)ψ↑(r)⟩ is then given in terms of the quasiparticle amplitudes u_ση(r) and v_ση(r) and the Fermi distribution function f(E) ≡ 1/[e^{E/(k_B T)} + 1] at temperature T. Accordingly, the total density n(r) takes an analogous form. The chemical potential µ can be determined using the number equation N = ∫ dr n(r). This BdG approach has been used to investigate topological superfluids in harmonically trapped spin-orbit coupled Fermi gases in 1D and 2D [42,45,47,48,50,51]. It will be discussed in greater detail in later sections. It is important to note that the use of the Nambu spinor representation enlarges the Hilbert space of the system. As a result, there is an intrinsic particle-hole symmetry in the Bogoliubov solutions: for any "particle" solution with wave function Φ⁽ᵖ⁾_η(r) = [u↑η, u↓η, v↑η, v↓η]ᵀ and energy E_η, there is a "hole" solution with wave function Φ⁽ʰ⁾_η(r) = [v*↑η, v*↓η, u*↑η, u*↓η]ᵀ and energy −E_η. These two solutions correspond exactly to the same physical state. To remove this redundancy, we have added an extra factor of 1/2 in the expressions for the pairing gap function and the total density. As we shall see, this particle-hole symmetry is essential to understanding the appearance of exotic Majorana fermions, particles that are their own antiparticles, in topological superfluids.
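To make the particle-hole redundancy described above concrete, the short sketch below (our own illustration, not taken from the references) constructs the 4 × 4 momentum-space BdG matrix for a uniform gas with a 1D spin-orbit coupling of the form (λk_x + δ/2)σ_z + (Ω/2)σ_x and singlet s-wave pairing ∆, and verifies numerically that the spectrum at +k_x is the negative of the spectrum at −k_x. All parameter values are arbitrary, and we set ℏ = m = 1.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h0(kx, mu=1.0, lam=1.0, omega=0.8, delta=0.0):
    """Single-particle block: xi_k + (lam*kx + delta/2)*sigma_z + (omega/2)*sigma_x."""
    xi = 0.5 * kx**2 - mu
    return xi * s0 + (lam * kx + 0.5 * delta) * sz + 0.5 * omega * sx

def bdg(kx, gap=0.5, **kw):
    """4x4 BdG matrix in the Nambu basis (psi_up, psi_dn, psi_up^dag, psi_dn^dag)."""
    pair = gap * (1j * sy)  # singlet pairing block Delta*(i sigma_y)
    return np.block([[h0(kx, **kw), pair],
                     [pair.conj().T, -h0(-kx, **kw).conj()]])

kx = 0.7
e_plus = np.sort(np.linalg.eigvalsh(bdg(+kx)))
e_minus = np.sort(np.linalg.eigvalsh(bdg(-kx)))
# Particle-hole symmetry: each solution at +kx has a partner at -kx with energy -E.
print(np.allclose(e_plus, -e_minus[::-1]))  # expected output: True

This redundancy is exactly what the factor of 1/2 in the gap and density expressions removes: the "hole" copy of each solution carries no new physical information.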
Momentum- or spatially-resolved radio-frequency spectrum

Radio-frequency (rf) spectroscopy, including both momentum-resolved and spatially-resolved rf spectroscopy, is a powerful tool to characterize interacting many-body systems. It has been widely used to study fermionic pairing in a two-component atomic Fermi gas near Feshbach resonances in the BEC-BCS crossover [68-72]. Most recently, it has also been used to detect new quasiparticles known as repulsive polarons [73,74], which occur when "impurity" fermionic particles interact repulsively with a fermionic environment. The underlying mechanism of rf spectroscopy is rather simple. The rf field drives transitions between one of the hyperfine states (say, |↓⟩) and an empty hyperfine state |3⟩ which lies above it by an energy ω_3↓. The Hamiltonian describing this rf coupling may be written as

V_rf = V₀ Σ_k (c†_{k3} c_{k↓} + h.c.),

where V₀ is the strength of the rf drive. For a weak rf field, the number of transferred atoms may be calculated using linear response theory. At this point, it is important to note that a final-state effect might be present, which is caused by the interaction between atoms in the final third state and those in the initial spin-up or spin-down state. This final-state effect is significant for ⁶Li atoms, while for ⁴⁰K atoms it is not important [19]. For momentum-resolved rf spectroscopy [71], the momentum distribution of the transferred atoms can be obtained by absorption imaging after a time-of-flight. This gives access to the single-particle spectral function of spin-down atoms of the original Fermi system, A_↓↓(k, ω). In the absence of the final-state effect, the rf transfer strength Γ(k, ω) at a given momentum is given by

Γ(k, ω) = A_↓↓(k, ϵ_k − µ − ω) f(ϵ_k − µ − ω).

Here, we have assumed that the atoms in the third state have the dispersion relation ϵ_k = k²/(2m) in free space and have taken the coupling strength V₀ = 1. Experimentally, we can either measure the momentum-resolved rf spectrum along a particular direction, say the x-direction, by integrating over the two perpendicular directions, or, after integrating over the remaining direction as well, obtain the fully integrated rf spectrum Γ(ω) ≡ Σ_k Γ(k, ω). We note that, in the extremely weakly interacting BCS and BEC regimes, where the physics is dominated by single-particle or two-particle physics, respectively, we may use Fermi's golden rule to calculate the momentum-resolved rf spectroscopy. This will be discussed in greater detail in the relevant subsections. We note also that momentum-resolved rf spectroscopy is precisely an ultracold atomic analogue of the well-known angle-resolved photoemission spectroscopy (ARPES) widely used in solid-state experiments. Alternatively, we may use rf spectroscopy to probe the local information about the original Fermi system. This was first demonstrated in measuring the pairing gap by using phase-contrast imaging within the local density approximation for a trapped Fermi gas [69]. A more general idea is to use a specifically designed third state, which has a very flat dispersion relation [75]. This leads to a spatially-resolved rf spectroscopy, which measures precisely the local density of states of the Fermi system,

ρ_↓(r, ω) = Σ_η [|u_↓η(r)|² δ(ω − E_η) + |v_↓η(r)|² δ(ω + E_η)].

It could be regarded as a cold-atom analogue of scanning tunneling microscopy (STM). As we shall see, spatially-resolved rf spectroscopy will provide a useful although indirect measurement of the long-sought Majorana fermion in atomic topological superfluids. Let us now discuss the two specific types of SOC.
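Before turning to the specific SOC schemes, the relation between Γ(k, ω) and the spectral function can be illustrated with a minimal Python sketch. It uses the plain BCS spectral function rather than the spin-orbit coupled one, model parameters are assumed, and the exact sign and prefactor conventions vary between references; the point is only the qualitative occupied-band map, the cold-atom analogue of ARPES.

import numpy as np

# Minimal sketch: momentum-resolved rf response for a plain BCS spectral
# function (not the spin-orbit coupled one); all parameters illustrative.
m, mu, Delta, T, gamma = 1.0, 1.0, 0.3, 0.05, 0.02

def lorentz(x):
    # finite-width replacement of the delta function
    return (gamma/np.pi)/(x**2 + gamma**2)

def A_down(k, w):
    # BCS spectral function: u_k^2 delta(w - E_k) + v_k^2 delta(w + E_k)
    xi = k**2/(2*m) - mu
    E = np.sqrt(xi**2 + Delta**2)
    u2 = 0.5*(1 + xi/E)
    return u2*lorentz(w - E) + (1 - u2)*lorentz(w + E)

def Gamma(k, w):
    # transfer to a free final state: Gamma ~ A(k, xi_k - w) f(xi_k - w)
    xi = k**2/(2*m) - mu
    x = xi - w
    return A_down(k, x)/(np.exp(x/T) + 1.0)

ks = np.linspace(0.0, 2.5, 120)
ws = np.linspace(-2.0, 2.0, 200)
spec = np.array([[Gamma(k, w) for w in ws] for k in ks])  # contour-plot this

Occupied quasiparticle branches show up as ridges in this map, just as in the measured spectra discussed later.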
One simple scheme to create SOC in cold atoms is through a Raman transition that couples two hyperfine ground states of the atom, as schematically shown in Fig. 1. The Raman process is described by the following single-particle Hamiltonian in the first-quantization representation,

H₀ = p̂²/(2m) + (δ/2)σ_z + (Ω/2)[e^{i2k_r x}|↑⟩⟨↓| + e^{−i2k_r x}|↓⟩⟨↑|],

where p̂ is the momentum operator of the atom, 2k_r x̂ is the photon recoil momentum taken to be along the x-axis, and δ and Ω are the two-photon detuning and the coupling strength of the Raman beams, respectively. The Hamiltonian acts on the Hilbert space spanned by the spin-up and spin-down basis states, |↑⟩ and |↓⟩. By applying the unitary transformation

U = e^{ik_r x σ_z},

the Hamiltonian H₀ can be recast into the following form:

H₀ = k̂²/(2m) + E_r + (k_r/m) k̂_x σ_z + (δ/2)σ_z + (Ω/2)σ_x.

Here, k̂ = (k̂_x, k̂_y, k̂_z) denotes the quasi-momentum operator of the atom: when k̂ is applied to the transformed wave function, it gives the atomic quasi-momentum k, which is related to the real momentum p by p̂ = (k̂ ± k_r x̂), with ± for spin-up and down, respectively. From this expression, it is sometimes convenient to regard both Ω and δ as the strengths of effective Zeeman fields. We note that, after a pseudo-spin rotation (σ_z → σ_x, σ_x → −σ_z), Hamiltonian (39) can be cast into the general form of SOC in Eq. (18) with λ = (k_r²/m, 0, 0) and h = (δ/2, 0, −Ω/2). It is clear that the SOC is along a specific direction. Actually, it is an equal-weight combination of the well-known Rashba and Dresselhaus SOCs in solid-state physics [56]. For this reason, hereafter we refer to it as 1D equal-weight Rashba-Dresselhaus SOC. We may also refer to the detuning δ as the in-plane Zeeman field, since it is aligned along the same direction as the SOC. Accordingly, we call the coupling strength Ω the out-of-plane Zeeman field. As we shall see, depending on δ and Ω, the spin-orbit coupled Fermi system can display distinct quantum superfluid phases at low temperatures.

Single-particle spectrum

The single-particle spectrum can be easily obtained by diagonalizing the Hamiltonian (39), which gives

E_±(k) = k²/(2m) + E_r ± √[(λk_x + δ/2)² + Ω²/4],

where we have defined a recoil energy E_r ≡ k_r²/(2m) and an SOC strength λ ≡ k_r/m. The spectrum contains two branches, as shown in Fig. 2, where in each panel the coupling strength of the Raman beams increases from E_r to 5E_r in steps of E_r, as indicated by the arrows. For small Ω, the lower branch exhibits a double-well structure. The double wells are symmetric (asymmetric) for δ = 0 (δ ≠ 0). For large Ω, the two wells in the lower branch merge into a single one. It is important to emphasize that in each branch atoms stay in a mixed spin state with both spin-up and down components. The single-particle spectrum can be easily measured by using momentum-resolved rf spectroscopy, as already shown at Shanxi University and MIT [11,12]. In this case, the number of transferred atoms can be calculated by using Fermi's golden rule [76]:

Γ(k_x, ω) ∝ Σ_{i,f} |⟨Φ_f|V_rf|Φ_i⟩|² δ(E_f − E_i − ω),

where the summation is over all possible initial single-particle states Φ_i (with energy E_i and a given wavevector k_x) and final states Φ_f (with energy E_f), and the Dirac δ-function ensures energy conservation during the rf transition. In practice, the δ-function is replaced by a function with finite width (e.g., δ(x) → (γ/π)(x² + γ²)⁻¹, where γ accounts for the energy resolution of the measurement). (Fig. 3 caption: the intensity of the contour plot shows the number of transferred atoms, increasing linearly from 0 (blue) to its maximum value (red); we have set ω_3↓ = 0 and used a Lorentzian distribution to replace the delta function. Figure taken from Ref. [76] with modification.)
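The golden-rule simulation just described can be sketched in a few lines of Python. The following minimal example uses illustrative parameters, works in units of E_r and k_r, and ignores the recoil-momentum bookkeeping of the transformation Eq. (38) for simplicity; it diagonalizes Hamiltonian (39) and accumulates the Lorentzian-broadened transfer strength.

import numpy as np

# Minimal sketch of the golden-rule simulation (units: E_r for energy,
# k_r for momentum; parameters illustrative; the recoil-momentum shift
# of Eq. (38) is ignored for simplicity).
Omega, delta, mu, T, gamma = 2.0, 0.0, 5.0, 0.2, 0.05

def branches(kx):
    # dressed states of Hamiltonian (39): energies and spin-down weights
    h = np.array([[kx**2 + 1 + (2*kx + delta/2), Omega/2],
                  [Omega/2, kx**2 + 1 - (2*kx + delta/2)]])
    E, V = np.linalg.eigh(h)
    return E, np.abs(V[1, :])**2      # |<down|Phi_eta(kx)>|^2

def Gamma(kx, w):
    # occupied dressed states transferred to a free final state at kx**2
    E, wdn = branches(kx)
    occ = 1.0/(np.exp((E - mu)/T) + 1.0)
    lor = (gamma/np.pi)/((w - (kx**2 - E))**2 + gamma**2)
    return float(np.sum(wdn*occ*lor))

ks = np.linspace(-3.0, 3.0, 200)
ws = np.linspace(-8.0, 8.0, 300)
spec = np.array([[Gamma(k, w) for w in ws] for k in ks])  # contour-plot this

Replotted as a function of ω + E_i(k_x), the two dressed branches appear directly, which is the motivation for the shifted spectrum introduced next.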
The single-particle wave function Φ_i is known from the diagonalization of the Hamiltonian (39), and the transfer element ⟨Φ_f|V_rf|Φ_i⟩ is then easy to determine. The left panel of Fig. 3 shows the predicted momentum-resolved spectroscopy Γ(k_x, ω) at δ = 0 and Ω = 2E_r. The chemical potential is tuned (µ = 5E_r) in such a way that there are significant populations in both energy branches. The simulated spectrum is not straightforward to understand, because of the free-particle dispersion relation of the final state entering the energy conservation in Eq. (41), and also because of the recoil momentum shift (k_r) arising from the unitary transformation Eq. (38). Therefore, it is useful to define a shifted spectrum Γ̃(k_x, ω), for which the energy conservation takes the form δ[ω + E_i(k_x)]. As shown on the right panel of Fig. 3, the single-particle spectrum is now clearly visible. Experimentally, the single-particle properties of the Fermi gas can also be easily tuned, for example, by using an additional rf field to couple the spin-up and down states [12]. After the gauge transformation, this introduces a term (Ω/2)[cos(2k_r x)σ_x + sin(2k_r x)σ_y] in the spin-orbit Hamiltonian Eq. (39), which behaves like a spin-orbit lattice and leads to the formation of energy bands. In Fig. 4, we show the simulation of momentum-resolved rf spectroscopy under such an rf spin-orbit lattice. The energy band structure is apparent. We refer to Ref. [76] for more details on the theoretical simulations, in particular the simulations in a harmonic trap. The relevant measurements will be discussed in greater detail later in the section on experiments.

Two-body physics

We now turn to consider the interatomic interaction. The interplay between interatomic interaction and SOC can lead to a number of intriguing phenomena, even at the two-particle level. Let us first solve numerically the energy E(q) of the two-particle states by using the general eigenenergy equation (19). A true bound state must satisfy E(q) < 2E_min, where E_min is the single-particle ground-state energy. At zero detuning δ = 0, the two-particle ground state has zero center-of-mass momentum q = 0 [66]. In Fig. 5(a), we show its energy as a function of the dimensionless interaction parameter 1/(k_r a_s). In the presence of 1D equal-weight Rashba-Dresselhaus SOC, a two-particle bound state occurs on the BEC side with a positive s-wave scattering length a_s > 0. The effective out-of-plane Zeeman field Ω acts as a pair-breaker and pushes the threshold scattering length towards the BEC limit. In other words, the position of the Feshbach resonance, originally located at a_s = ±∞, now shifts to the BEC side, at lower magnetic field strengths [14]. By calculating the dispersion relation E(q) around q = 0, we are able to determine the effective mass, as shown in Fig. 5(b). It is interesting that the effective mass along the direction of the SOC is greatly altered. It becomes much larger than 2m towards the threshold scattering length. In the deep BEC limit, 1/(k_r a_s) → ∞, where two atoms form a tightly bound molecule, the mass is less affected by the SOC or the effective Zeeman fields, as we may anticipate. At nonzero detuning δ ≠ 0, the result shows that the two-particle bound state has its lowest energy at a finite center-of-mass momentum q₀ = (q₀, 0, 0) [26,30]. Fig. 6 shows the binding energy and the magnitude of q₀ of the lowest-energy bound state.
That the two-particle ground state possesses a finite momentum implies that the Cooper pairs, the many-body counterpart of the two-particle bound state, may acquire a finite center-of-mass momentum and therefore condense into an inhomogeneous superfluid state. This possibility will be addressed in greater detail later. We note that with the typical parameters, i.e., Ω ∼ E_r and δ ∼ E_r, q₀ is small and less than 1% of the recoil momentum k_r, as shown in Fig. 6(b). However, its magnitude can be significantly enhanced by many-body effects. For Cooper pairs in the ground state, q₀ can be tuned to be comparable with k_r or the Fermi wavevector k_F [33]. (Fig. 7 caption: the lineshape follows √(ω − E_B)/ω²; the inset highlights the different contributions from the two final states, as described in the text. Figure taken from Ref. [65] with modification.)

Ideally, momentum-resolved rf spectroscopy can be used to probe the two-particle bound state discussed above. We can perform a numerical simulation of the spectroscopy by using again Fermi's golden rule. Let us assume that a bound molecule is initially at rest in the state |Φ_2B⟩ with energy E_i. An rf photon with energy ω will break the molecule and transfer the spin-down atom to the third state |3⟩. In the case that there is no final-state effect, the final state |Φ_f⟩ consists of a free atom in |3⟩ and a remaining atom in the spin-orbit system. According to Fermi's golden rule, the rf strength Γ(ω) of breaking molecules and transferring atoms is proportional to the Franck-Condon factor F(ω) [77]. The integrated Franck-Condon factor satisfies the sum rule ∫F(ω)dω = 1, and F(ω) has been calculated in Refs. [65] and [66] by carefully analyzing the initial two-particle bound state |Φ_2B⟩ and the final state |Φ_f⟩. Furthermore, by resolving the momentum of the transferred atoms, we are able to obtain the momentum-resolved Franck-Condon factor F(k_x, ω). Figs. 7(a) and 7(b) illustrate respectively the momentum-resolved and the integrated rf spectrum of the two-particle ground state at zero detuning δ = 0. One can easily resolve two different responses in the spectrum due to two different final states, as the remaining spin-up atom in the original spin-orbit system can occupy either the upper or the lower energy branch. Indeed, in the integrated rf spectrum, we can separate clearly the different contributions from the two final states, as highlighted in the inset. This gives rise to two peaks in the integrated spectrum. We note that the lower peak exhibits a red shift as the SOC strength increases, due to the decrease of the binding energy. It is also straightforward to calculate the rf spectrum of the two-particle bound state at nonzero detuning δ ≠ 0 (not shown in the figure). However, the spectrum remains essentially unchanged, due to the fact that the center-of-mass momentum q₀ is quite small for typical experimental parameters.

Momentum-resolved radio-frequency spectrum of the superfluid phase

Consider now the many-body state. As we mentioned earlier, since the two-particle wave function contains both spin singlet and triplet components, we anticipate that the superfluid phase at low temperatures would involve both s-wave pairing and higher-partial-wave pairing. Therefore, in general it is an anisotropic superfluid. This will be discussed later in detail for 2D Rashba SOC. Here, we are interested in the phase diagram and the experimental probe of a 3D Fermi gas with 1D equal-weight Rashba-Dresselhaus SOC.
First, let us concentrate on the case with zero detuning δ = 0, by using the many-body T-matrix theory within the pseudogap approximation [13]. Focusing on the vicinity of the Feshbach resonance where a_s → ±∞, in Fig. 8 we show the superfluid transition temperature T_c and the pair-breaking (pseudogap) temperature T* of the spin-orbit coupled Fermi gas at Ω = 2E_r and k_F = k_r. The pseudogap temperature is calculated using the standard BCS mean-field theory without taking into account the preformed pairs (i.e., ∆_pg = 0) [57,67]. We find that the region of the superfluid phase is strongly suppressed by SOC. In particular, at resonance the superfluid transition temperature is about T_c ≈ 0.08T_F, which is significantly smaller than the experimentally determined T_c ≈ 0.167(13)T_F for a unitary Fermi gas [78]. Thus, it seems to be a challenge to observe a novel spin-orbit coupled fermionic superfluid in the present experimental scheme. In Figs. 9(a)-9(c), we show the zero-temperature momentum-resolved rf spectrum across the resonance. On the BCS side (1/k_F a_s = −0.5), the spectrum is dominated by the response from atoms and shows a characteristic high-frequency tail at k_x < 0 [11,12,76]; see, for example, the left panel of Fig. 3. We note that the density of the Fermi cloud, chosen here following the real experimental parameters [11], is low, and therefore only the lower energy branch is occupied at low temperatures. Towards the BEC limit (1/k_F a_s = +0.5), the spectrum may be understood from the picture of well-defined bound pairs and shows a clear two-fold anisotropic distribution, as we already mentioned in Fig. 7(a) [65]. The spectrum at the resonance is complicated and might be attributed to many-body fermionic pairs. It is interesting that the response from many-body pairs has a similar tail at high frequency as that from atoms. The change of the rf spectrum across the resonance is continuous, in accordance with a smooth BEC-BCS crossover.

Fulde-Ferrell superfluidity

The nature of the superfluidity can be greatly changed by a nonzero detuning δ ≠ 0. As we discussed earlier in the two-body part, in this case the Cooper pairs may carry a nonzero center-of-mass momentum and therefore condense into an inhomogeneous superfluid state, characterized by the order parameter ∆₀(r) = ∆₀e^{iq·r}. This exotic superfluid was proposed by Fulde and Ferrell [63] soon after the discovery of the seminal BCS theory. Its existence has attracted tremendous theoretical and experimental efforts over the past five decades [79]. Remarkably, to date there is still no conclusive experimental evidence for FF superfluidity. Here, we show that the superfluid phase of a 3D Fermi gas with 1D equal-weight Rashba-Dresselhaus SOC and a finite in-plane effective Zeeman field δ is precisely the long-sought FF superfluid [33]. The same issue has also been addressed very recently by Vijay Shenoy [30]. We note that the FF superfluid can appear in other settings with different types of SOC and dimensionality [29,31,32,34-36,80]. Theoretically, to determine the FF superfluid state, we solve the BdG equation (29) with V_T(r) = 0 by using the following ansatz for the quasiparticle wave functions:

Φ_kη(r) = e^{ik·r} [u_↑kη e^{iq·r/2}, u_↓kη e^{iq·r/2}, v_↑kη e^{−iq·r/2}, v_↓kη e^{−iq·r/2}]^T.

The center-of-mass momentum q is assumed to be along the x-direction, inspired by the two-body solution [26]. The mean-field thermodynamic potential Ω₀ at temperature T in Eq. (10) can then be written down explicitly in terms of the quasiparticle energies E_kη (η = 1, 2, 3, 4).
Here, the summation over the quasiparticle energies must be restricted to E_kη ≥ 0 because of an inherent particle-hole symmetry in the Nambu spinor representation. For a given set of parameters (i.e., the temperature T, interaction strength 1/k_F a_s, etc.), the different mean-field phases can be determined using the self-consistent stationary conditions ∂Ω/∂∆ = 0 and ∂Ω/∂q = 0, as well as the conservation of the total atom number, N = −∂Ω/∂µ. At finite temperatures, the ground state has the lowest free energy F = Ω + µN. In the following, we consider the resonance case with a divergent scattering length, 1/k_F a_s = 0, and set T = 0.05T_F, where T_F is the Fermi temperature. According to the typical number of atoms in experiments [11,12], we take the Fermi wavevector k_F = k_r. In general, for any set of parameters there are three competing ground states that are stable against phase separation (i.e., ∂²Ω₀/∂∆₀² ≥ 0), as shown in Fig. 10(a): normal gas (∆₀ = 0), BCS superfluid (∆₀ ≠ 0 and q = 0), and FF superfluid (∆₀ ≠ 0 and q ≠ 0). Remarkably, in the presence of spin-orbit coupling the FF superfluid is always more favorable in energy than the standard BCS pairing state at finite detuning (Fig. 10(b)). It is easy to check that the superfluid density of the BCS pairing state in the SOC direction becomes negative (i.e., ∂Ω₀/∂q < 0), signaling the instability towards an FF superfluid. Therefore, experimentally the Fermi gas would always condense into an FF superfluid at finite two-photon detuning. In Fig. 11, we report a low-temperature phase diagram that could be directly observed in current experiments. The FF superfluid occupies the major part of the phase diagram. The experimental probe of an FF superfluid is a long-standing challenge. Here, unique to cold atoms, momentum-resolved rf spectroscopy may provide a smoking-gun signal of FF superfluidity. The basic idea is that, since the Cooper pairs carry a finite center-of-mass momentum q, the transferred atoms in the rf transition acquire an overall momentum q/2. As a result, there would be a q/2 shift in the measured spectrum. In Fig. 12, we show the momentum-resolved rf spectrum Γ(k_x, ω) on a logarithmic scale. As we discussed earlier in the two-body part, there are two contributions to the spectrum, corresponding to two different final states [65]. These two contributions are well separated in the frequency domain, with peak positions indicated by the symbols "+" and "×", respectively. Interestingly, at finite detuning with a sizable FF momentum q, the peak positions of the two contributions are shifted roughly in opposite directions by an amount q/2. This provides clear evidence for observing the FF superfluid.

1D topological superfluidity

Arguably, the most remarkable aspect of SOC is that it provides a feasible route to realize topological superfluids [38], which have attracted tremendous interest over the past few years [81]. In addition to providing a new quantum phase of matter, topological superfluids can host exotic quasiparticles at their boundaries, known as Majorana fermions - particles that are their own antiparticles [82,83]. Due to their non-Abelian exchange statistics, Majorana fermions are believed to be the essential quantum bits for topological quantum computation [84]. Therefore, the pursuit of topological superfluids and Majorana fermions represents one of the most important challenges in fundamental science.
A number of settings have been proposed for the realization of topological superfluids, including the fractional quantum Hall state at filling ν = 5/2 [85], vortex states of p_x + ip_y superconductors [86,87], surfaces of three-dimensional (3D) topological insulators in proximity to an s-wave superconductor [88], and one-dimensional (1D) nanowires with strong spin-orbit coupling, also in contact with an s-wave superconductor [89]. In the latter setting, indirect evidence of topological superfluidity and Majorana fermions has been reported [90]. Here, we review briefly the possible realizations of topological superfluids in the context of a 1D spin-orbit coupled atomic Fermi gas [45,47,51,55], which can be prepared straightforwardly by loading a 3D spin-orbit coupled Fermi gas into deep 2D optical lattices. Later, we will discuss 2D topological superfluids with Rashba SOC. Consider first a homogeneous 1D Fermi gas with a nonzero detuning δ ≠ 0 [55]. In this case, we actually anticipate a topological inhomogeneous superfluid, where the order parameter also varies in real space. Using the same theoretical technique as in the previous subsection, we solve the BdG equation (29) in 1D and then minimize the mean-field thermodynamic potential Eq. (45) to determine the pairing gap ∆₀ and the FF momentum q. In Fig. 13, we show the energy gap as a function of Ω at δ = 0.6E_F and T = 0. For this result, we use a Fermi wavevector k_F = 0.8k_r and take a dimensionless interaction parameter γ ≡ −mg_1D/n = 3, where g_1D is the strength of the 1D contact interaction and n = 2k_F/π is the 1D linear density. A topological phase transition is associated with a change of the topology of the underlying Fermi surface and is therefore accompanied by a closing of the excitation gap at the transition point. In the main figure this feature is clearly evident. To better characterize the change of topology, we may calculate the Berry phase γ_B defined in Ref. [47]. Here W_η(k) ≡ [u_kη↑ e^{iqz/2}, u_kη↓ e^{iqz/2}, v_kη↑ e^{−iqz/2}, v_kη↓ e^{−iqz/2}]^T denotes the wave function of the upper (η = +) and lower (η = −) branch, respectively. In Fig. 13, the Berry phase is shown by circles. It jumps from π to 0 right across the topological phase transition. It is somewhat counter-intuitive that the γ_B = 0 sector corresponds to the topologically non-trivial superfluid state. It is important to emphasize the inhomogeneous nature of the superfluid. Indeed, as shown in the inset, the FF momentum q increases rapidly across the topological superfluid transition and reaches about 0.3k_F at Ω = 4E_F. In Fig. 14, we present the zero-temperature phase diagram for the topological phase transition. The critical coupling strength Ω_c decreases with increasing detuning δ. At zero detuning, Ω_c can be determined analytically, since the expression for the BdG eigenenergies of single-particle excitations (after dropping a constant energy shift E_r) is known [24,47],

E_{k,±} = {ξ_k² + λ²k² + Ω²/4 + ∆² ± 2√[(Ω²/4)(ξ_k² + ∆²) + λ²k²ξ_k²]}^{1/2},

where ξ_k = k²/(2m) − µ and λ = k_r/m. It is easy to see that the excitation gap closes at k = 0 for the lower branch (i.e., η = −), leading to the well-known result [89]

Ω_c/2 = √(µ² + ∆²).

This criterion for topological superfluids is equivalent to the condition that there are only two Fermi points on the Fermi surface [39], under which the Fermi system behaves essentially like a 1D weak-coupling p-wave superfluid. Let us now turn to the experimentally realistic situation with a 1D harmonic trap V_T(x) = mω²x²/2 and focus on the case with δ = 0 [45,47,51].
The BdG equation (29) can be solved self-consistently by expanding the Nambu spinor wave function Φ_η(x) in the eigenfunction basis of the harmonic oscillator. In this trapped environment, Majorana fermions with zero energy are anticipated to emerge at the boundary if the Fermi gas stays in a topological superfluid state. The appearance of Majorana fermions can be easily understood from the particle-hole symmetry obeyed by the BdG equation, which states that every physical state can be described either by a particle state with a positive energy E or by a hole state with a negative energy −E. The Bogoliubov quasiparticle operators associated with these two states therefore satisfy Γ_E = Γ†_{−E}. At the boundary, Eq. (48) could be fulfilled at some points and locally give states with E = 0. These states are Majorana fermions, as the associated operators satisfy Γ₀ = Γ†₀ - precisely the defining feature of a Majorana fermion [82,83]. In Fig. 15(a), we present the zero-temperature phase diagram of a trapped 1D Fermi gas at k_F = 2k_r and γ = π [51]. The transition from the BCS superfluid to the topological superfluid is now characterized by the appearance of Majorana fermions, whose energy is precisely zero, so that the minimum of the quasiparticle spectrum touches zero, min{|E_η|} = 0. In the topological superfluid phase, as shown in Fig. 15(b) with Ω = 2.4E_F, the Majorana fermions may be clearly identified by using spatially-resolved rf spectroscopy. We note that for a trapped Fermi gas with weak interatomic interaction and/or high density, the upper branch of the single-particle spectrum may be populated at the trap center, leading to four Fermi points on the Fermi surface. This violates Eq. (48). As a result, we may find a phase-separated state in which the topological superfluid occurs only at the two wings of the Fermi cloud. This situation has been discussed in Ref. [45].

C. 2D Rashba spin-orbit coupling

Let us now discuss Rashba SOC, which takes the standard form V_SO = λ(k̂_y σ̂_x − k̂_x σ̂_y) [91]. The coupling between spin and orbital motion occurs along two spatial directions, and therefore we shall refer to it as 2D Rashba SOC. This type of SOC has not yet been realized experimentally, although there are several theoretical proposals for its realization [92,93]. The superfluid phase with 2D Rashba SOC at low temperatures shares many common features with its 1D counterpart reviewed in the previous subsection. Here we focus on some specific features, for example, the two-particle bound state at sufficiently strong SOC strength - the rashbon [21,25] - and the related crossover to a BEC of rashbons. We will also discuss in greater detail the 2D topological superfluid with Rashba SOC in the presence of an out-of-plane Zeeman field, since it provides an interesting platform for performing topological quantum computation. We note that experimentally it is also possible to create a 3D isotropic SOC, V_SO = λ(k̂_x σ̂_x + k̂_y σ̂_y + k̂_z σ̂_z), where the spin and orbital degrees of freedom are coupled in all three dimensions [94]. We note also that early theoretical works on a Rashba spin-orbit coupled Fermi gas were reviewed very briefly by Hui Zhai in Ref. [95].

Single-particle spectrum

In the presence of an out-of-plane Zeeman field hσ_z, the single-particle spectrum is given by

E_±(k) = k²/(2m) ± √(λ²k_⊥² + h²),

where k_⊥ ≡ √(k_x² + k_y²). The spectrum with a nonzero h is illustrated on the left panel of Fig. 16. Compared with the single-particle spectrum with 1D equal-weight Rashba-Dresselhaus SOC in Fig.
2, it is interesting that the two minima in the lower energy branch now extend to form a ring structure. At low energy, therefore, we may anticipate that in momentum space the particles will be confined along the ring. The effective dimensionality of the system is thereby reduced. Indeed, it is not difficult to obtain the density of states ρ(ω) at h = 0 [96], where one defines a dimensionless SOC coupling strength λ_eff ≡ mλ/k_F. As can be seen from the right panel of Fig. 16, ρ(ω) with Rashba SOC becomes a constant at low energy, which is characteristic of a 2D system. This reduction in the effective dimensionality has interesting consequences when the interatomic interaction comes into play, as we now discuss in greater detail.

Two-body physics

We solve the two-body problem by calculating the two-particle vertex function, following the general procedure outlined in the theoretical framework (Sec. II A). Focusing on the case without Zeeman fields, the bare Green function G₀(K) can be written down explicitly. By substituting it into Eq. (16), it is straightforward to obtain the two-particle propagator in terms of the single-particle energies. By performing explicitly the summation over iω_m, replacing k by q/2 + k and re-arranging the terms, we find an explicit expression for the inverse vertex function. This expression provides a starting point to investigate the fluctuation effects due to interatomic interactions. Here, for the two-body problem of interest, we discard the Fermi distribution functions and set q = 0, as the ground bound state has zero center-of-mass momentum in the absence of Zeeman fields. The two-body vertex function Γ_2b then follows. The energy of the two-particle bound state E can be obtained by solving Re{Γ⁻¹_2b[q = 0; ω = E]} = 0 with µ = 0, as we already discussed in the theoretical framework. More physically, we may calculate the phase shift of the two-body vertex function. Recall that the vertex function represents the Green function of Cooper pairs; the phase shift defined in this way is simply related to ∫dω A(q, ω), where A(q, ω) is the spectral function of the pairs. As a result, a true bound state, corresponding to a delta peak in the spectral function, will cause a π jump in the phase shift at the critical frequency ω_c = E, from which we determine the energy of the bound state. In the main figure and inset of Fig. 17(a), we show the two-body phase shift and the energy of the bound state of a Rashba spin-orbit coupled Fermi gas, respectively. Interestingly, the bound state exists even in the BCS limit, where the s-wave scattering length is small and negative [20]. This is because at low energy the effective dimensionality of the Rashba system reduces to two, as we mentioned earlier from the nature of the low-energy density of states. In 2D, we know that any weak attraction can lead to a bound state. We can calculate the effective mass of the bound state [22,23], which is strongly renormalized by the SOC, by determining the dispersion relation of the two-body bound state E(q) at small momentum q ∼ 0. The result is shown in Fig. 17(b). It is important to note that all the properties of the two-body bound state, including its energy and effective mass, depend on a single parameter 1/(mλa_s), which is the ratio of the only two length scales 1/(mλ) and a_s in the problem. Thus, in the limit of sufficiently large SOC, the bound state becomes universal and is identical to the one obtained at 1/(mλa_s) = 0. This new kind of universal bound state has been referred to as the rashbon [21,25]. The mass of rashbons (i.e., γ ≈ 1.2 from Fig. 17(b)) is notably heavier than the conventional molecular mass 2m in the BEC limit.
This causes a decrease in the condensation temperature of rashbons in such a way that

T_BEC = [2m/(M_x M_y M_z)^{1/3}] T⁽⁰⁾_BEC,

where T⁽⁰⁾_BEC ≈ 0.218T_F is the BEC temperature of conventional molecules. In the presence of an out-of-plane Zeeman field h, the two-body problem has been discussed in detail in Ref. [24].

Crossover to rashbon BEC and anisotropic superfluidity

Let us now discuss the crossover to a rashbon BEC. We focus on the unitary limit with a_s → ∞ and increase the 2D Rashba SOC. At the mean-field saddle-point level, the single-particle Green function Eq. (7) takes an explicit 4 × 4 matrix form at h = 0 [22]. The inversion of this matrix can be worked out explicitly, leading to two single-particle Bogoliubov dispersions whose degeneracy is lifted by the SOC, E_{k,±} = [(ξ_k ± λk_⊥)² + ∆₀²]^{1/2}, together with the normal and anomalous Green functions, from which we can immediately obtain the momentum distribution n(k) = 1 − Σ_α [1/2 − f(E_{k,α})]γ_{k,α} and the single-particle spectral function, where γ_{k,±} = (ξ_k ± λk_⊥)/E_{k,±}. The chemical potential and the order parameter are to be determined by the number and gap equations, n = Σ_k n(k) and ∆₀ = −U₀∆₀ Σ_{k,α} [1/2 − f(E_{k,α})]/(2E_{k,α}), respectively. Fig. 18(a) displays the chemical potential µ and the order parameter as functions of the SOC strength. The increase of the SOC strength leads to a deeper bound state. As a consequence, in analogy with the BEC-BCS crossover, the order parameter and the critical transition temperature are greatly enhanced at λk_F ∼ E_F. In the large SOC limit, we have µ = (µ_B + E)/2, where E is the energy of the two-body bound state, and µ_B is positive due to the repulsion between rashbons and decreases with increasing coupling, as shown in the inset of Fig. 18(a). By assuming an s-wave repulsion with scattering length a_B between rashbons, where µ_B ≈ (n/2)4πa_B/M, we estimate within mean-field theory that in the unitarity limit a_B ≈ 3/(mλ), comparable to the size of rashbons. Figs. 18(b) and 18(c) illustrate the momentum distribution and the single-particle spectral function, respectively. These quantities exhibit an anisotropic distribution in momentum space due to the SOC and can be readily measured in experiments. Another interesting feature of the crossover to a rashbon BEC is that the pairing field contains both a singlet and a triplet component [97]. For the system under study, it is straightforward to work out the triplet and singlet pairing fields explicitly. Their magnitudes are shown in Figs. 19(a) and 19(b). The weight of the triplet component increases and approaches that of the singlet component as the SOC strength increases. In Figs. 19(c) and 19(d), we plot the zero-momentum dynamic and static spin structure factors, respectively. In the absence of the SOC, both of these quantities vanish identically. Hence a nonzero spin structure factor is a direct consequence of triplet pairing [97]. Note that the spin structure factor can be measured using the Bragg spectroscopy method, as demonstrated in recent experiments [98]. The condensate fraction and superfluid density of the rashbon system have also been studied [99,100] and have been found to exhibit unusual behaviors: the condensate fraction is generally enhanced by the SOC due to the increase of the pair binding, while the superfluid density is suppressed because of the nontrivial effective mass of rashbons. To understand the finite-temperature properties of rashbons, the mean-field approach becomes less reliable.
So far, a careful analysis based on the pair-fluctuation theory outlined in the theoretical framework is yet to be performed. In Fig. 20, we show the superfluid transition temperature as a function of the Rashba SOC strength, predicted by the approximate many-body T-matrix theory - the pseudogap theory [28]. At sufficiently large SOC strength, T_c tends to the critical temperature of a rashbon BEC given by Eq. (59) - T_c ≈ 0.193T_F - regardless of the dimensionless interaction parameter 1/(k_F a_s), as we may anticipate. For a more detailed discussion of the crossover from BCS to rashbon BEC, we refer to Ref. [25]. Here we consider 2D topological superfluidity with 2D Rashba SOC, in the presence of an out-of-plane Zeeman field h. It is of particular interest, considering the possibility of performing topological quantum computation. This is because each vortex core in a 2D topological superfluid can host a Majorana fermion. Thus, by properly interchanging two vortices and thereby braiding Majorana fermions, fault-tolerant quantum information stored non-locally in Majorana fermions may be processed [84,101]. In the context of ultracold atoms, the use of 2D Rashba SOC to create a 2D topological superfluid was first proposed by Zhang and co-workers [38], and later considered by a number of researchers [41-44,46,48-50]. In free space, the criterion to enter the topological superfluid phase is given by h > √(µ² + ∆²), above which the system behaves like a 2D weak-coupling p-wave superfluid, as we already discussed in the previous subsection (see, for example, Eq. (48)). Here, we are interested in the nature of 2D topological superfluids in the experimentally relevant situation with harmonic traps [42]. Theoretically, we solve numerically the BdG equation (29). In the presence of a single vortex at the trap center, we take ∆₀(r) = ∆₀(r)e^{−iϕ} and decouple the BdG equation into different angular momentum channels indexed by an integer m. The quasiparticle wave functions take the form [u_↑η(r)e^{−iϕ}, u_↓η(r), v_↑η(r)e^{iϕ}, v_↓η(r)]^T e^{i(m+1)ϕ}/√(2π). We have solved the BdG equations self-consistently using the basis expansion method. For the results presented below, we have taken N = 400 and T = 0. We have used E_a = 0.2E_F and λk_F/E_F = 1, where the binding energy E_a is a useful parameter to characterize the interatomic interaction in 2D. These are typical parameters that can be readily realized in a 2D ⁴⁰K Fermi gas. (Fig. 21 caption: the color of the symbols changes from blue, when the excited state is localized at the trap center, to red, when its mean radius approaches the Thomas-Fermi radius.) In the presence of a single vortex, by increasing the Zeeman field the system evolves from a non-topological state (NS) to a topological state (TS), through an intermediate mixed phase in which NS and TS coexist. The topological phase transition into the TS is well characterized by the low-lying quasiparticle spectrum, which has the particle-hole symmetry E_{m+1} = −E_{−(m+1)}. As shown in Fig. 21(a), the spectrum of the NS is gapped, while in the TS two branches of mid-gap states with small energy spacing appear: one is labeled "Outer edge" and the other "CdGM", which refers to localized states at the vortex core, i.e., the so-called Caroli-de Gennes-Matricon (CdGM) states [102]. The eigenstates with nearly zero energy at m = −1 could be identified as the zero-energy Majorana fermions in the thermodynamic limit.
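Although the calculation above concerns a 2D vortex, the emergence of zero-energy Majorana modes at a boundary can be illustrated with a much smaller calculation. The following minimal Python sketch uses a lattice-discretized 1D spin-orbit coupled wire with open ends; all parameter values are illustrative assumptions chosen so that h > √(∆² + µ²), i.e., the topological criterion of the 1D discussion is met, and two eigenvalues exponentially close to zero appear.

import numpy as np

# Minimal sketch: real-space BdG matrix of an open-ended 1D spin-orbit
# coupled wire with s-wave pairing; illustrative parameters chosen so
# that h > sqrt(Delta^2 + mu^2) and the wire is topological.
N, t, alpha, mu, h, Delta = 80, 1.0, 0.5, 0.0, 0.5, 0.3
s0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

# single-particle block (2N x 2N): hopping, Rashba SOC, Zeeman, chemical potential
H0 = np.kron(np.eye(N), (2*t - mu)*s0 + h*sz)
hop = np.eye(N, k=1)                       # open boundary: no wrap-around
H0 += np.kron(hop, -t*s0 - 1j*alpha*sy)
H0 += np.kron(hop, -t*s0 - 1j*alpha*sy).conj().T

# Nambu doubling with s-wave pairing Delta * (i sigma_y)
isy = 1j*sy
D = np.kron(np.eye(N), Delta*isy)
HBdG = np.block([[H0, D], [D.conj().T, -H0.conj()]])

E = np.linalg.eigvalsh(HBdG)
print(np.sort(np.abs(E))[:4])   # two near-zero Majorana end modes, then the bulk gap

In the trapped gas the hard wall is replaced by the harmonic confinement, but the same particle-hole argument, Γ₀ = Γ†₀, pins the boundary modes to zero energy.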
In the TS, the occupation of the Majorana vortex-core state significantly affects the atomic density and the local density of states (LDOS) of the Fermi gas near the trap center, which in turn gives a strong experimental signature for observing Majorana fermions. Fig. 22 presents the spin-up and -down densities at the trap center, n_↑(0) and n_↓(0), as functions of the Zeeman field. In general, n_↑(0) and n_↓(0) respectively increase and decrease with increasing field. However, we find a sharp increase of n_↓(0) when the system evolves from the mixed phase to the full TS. Accordingly, a change of slope, or kink, appears in n_↑(0). The increase of n_↓(0) is associated with the gradual formation of the Majorana vortex-core mode, whose occupation contributes notably to the atomic density due to the large amplitude of its localized wave function. We plot in the inset of Fig. 22(b) n_↓(0) at h = 0.6E_F, with and without the contribution of the Majorana mode, which is highlighted by the shaded area. This contribution is apparently absent in the NS. Thus, a sharp increase of n_↓(0), detectable in in situ absorption imaging, signals the topological phase transition and the appearance of the Majorana vortex-core mode. This feature persists at typical experimental temperatures, e.g., T = 0.1T_F. In the presence of impurity scattering, a topological superfluid can also host a universal impurity-induced bound state [50,51]. That is, regardless of the type of impurity, magnetic or non-magnetic, the impurity will always cause the same bound state within the pairing gap, provided the scattering strength is strong enough. The observation of such a universal impurity-induced bound state would give clear evidence for the existence of topological superfluids.

III. EXPERIMENTS

We now review the experimental work, focusing on the experiments carried out at Shanxi University. The apparatus and cooling scheme have been described in previous papers [103-107] and are briefly introduced here (see Fig. 1). An atomic mixture of ⁸⁷Rb and ⁴⁰K atoms in the hyperfine states |F = 2, m_F = 2⟩ and |F = 9/2, m_F = 9/2⟩, respectively, is first precooled to 1.5 µK by radio-frequency evaporative cooling in a quadrupole-Ioffe configuration (QUIC) trap. The QUIC trap consists of a pair of anti-Helmholtz coils and a third coil in perpendicular orientation. To gain larger optical access, the atoms are first transported from the QUIC trap to the center of the quadrupole coils (glass cell) by lowering the current passing through the quadrupole coils and increasing the current in the Ioffe coil, and are then transferred into a crossed optical trap in the horizontal plane, created by two off-resonance laser beams at a wavelength of 1064 nm. A degenerate Fermi gas of about N ≈ 2 × 10⁶ ⁴⁰K atoms in the |9/2, 9/2⟩ internal state at T/T_F ≈ 0.3 is created inside the crossed optical trap. Here T is the temperature and T_F is the Fermi temperature defined by T_F = E_F/k_B = (6N)^{1/3}ℏω̄/k_B, with a geometric mean trapping frequency ω̄ ≈ 2π × 130 Hz. A 780 nm laser pulse of 0.03 ms is used to remove all the ⁸⁷Rb atoms in the mixture without heating the ⁴⁰K atoms. To create SOC, a pair of Raman laser beams are derived from a continuous-wave Ti:sapphire single-frequency laser. The two Raman beams are frequency-shifted by two single-pass acousto-optic modulators (AOMs), respectively. In this way the relative frequency difference ∆ω between the two laser beams is precisely controlled.
At the output of the optical fibers, each of the two Raman beams has a maximum intensity of I = 130 mW; they counter-propagate along the x-axis with a 1/e² radius of 200 µm and are linearly polarized along the z- and y-axes, respectively, corresponding to π (σ) and σ (π) polarization with respect to the quantization axis ẑ (ŷ). The momentum transferred to the atoms during the Raman process is 2k₀ = 2k_r sin(θ/2), where k_r = 2π/λ is the single-photon recoil momentum, λ is the wavelength of the Raman beams, and θ is the intersection angle of the two Raman beams. Here, k_r and E_r = k_r²/2m are used as the units of momentum and energy. The optical transition wavelengths of the D1 and D2 lines are 770.1 nm and 766.7 nm, respectively. The wavelengths of the Raman lasers are about 772 ∼ 773 nm. The two internal states involved in the SOC are chosen as follows. In the case of the noninteracting system, the two states are the magnetic sublevels |↑⟩ = |9/2, 9/2⟩ and |↓⟩ = |9/2, 7/2⟩. These two spin states are stable and are weakly interacting, with a background s-wave scattering length a_s = 169a₀. We use a pair of Helmholtz coils along the y-axis (as shown in Fig. 1) to provide a homogeneous bias magnetic field of 31 G, which produces a Zeeman shift of ω_Z = 2π × 10.27 MHz between these two magnetic sublevels. When the Raman coupling is at resonance (at ∆ω = 2π × 10.27 MHz and two-photon Raman detuning δ = ∆ω − ω_Z ≈ 0), the detuning between |9/2, 7/2⟩ and other magnetic sublevels such as |9/2, 5/2⟩ is about 2π × 170 kHz, which is one order of magnitude larger than the Fermi energy. Hence all the other states can be safely neglected. In the case of the strongly interacting spin-orbit coupled Fermi gas, the two magnetic sublevels |↓⟩ = |9/2, −9/2⟩ and |↑⟩ = |9/2, −7/2⟩ are chosen. To create strong interactions, the bias field is ramped from 204 G to a value near the B₀ = 202.1 G Feshbach resonance at a rate of about 0.08 G/ms. We remark that, due to a decoupling of the nuclear and electronic spins, the Raman coupling strength decreases with increasing bias field [108]. When working at a large bias magnetic field, we therefore have to use a smaller detuning of the Raman beams with respect to the atomic D1 transition in order to increase the Raman coupling strength. In order to control the magnetic field precisely and reduce the magnetic field noise, the power supply (Delta SM70-45D) is operated in remote voltage programming mode, with its voltage set by an analog output of the experimental control system. The current through the coils is controlled by an external regulator relying on a precision current transducer (Danfysik ultastable 867-60I). The current is detected with the precision current transducer, and the regulator then compares the measured current value to a set voltage value from the computer. The output error signal from the regulator actively stabilizes the current via a PID (proportional-integral-derivative) controller acting on a MOSFET (metal-oxide-semiconductor field-effect transistor). In order to reduce the current noise and decouple the control circuit from the main current, a conventional battery is used to power the circuit. We use the standard time-of-flight technique to perform our measurements. To this end, the Raman beams, the optical dipole trap and the homogeneous bias magnetic field are switched off abruptly at the same time, and a magnetic field gradient along the y-axis, provided by the Ioffe coil, is switched on.
The two spin states are separated along the y-direction, and imaging of the atoms along the z-direction after 12 ms of expansion gives the momentum distribution of each spin component.

A. The noninteracting spin-orbit coupled Fermi gas

In this section, we review the experiments on the noninteracting system.

Rabi oscillation

We first study the Rabi oscillation between the two spin states induced by the Raman coupling. All atoms are initially prepared in the |↑⟩ state. The homogeneous bias magnetic field is ramped to a certain value so that δ = −4E_r, that is, the k = 0 component of the |↑⟩ state is at resonance with the k = 2k_r x̂ state of the |↓⟩ component, as shown in Fig. 23(a). Then we apply a Raman pulse to the system and measure the spin population for different durations of the Raman pulse. A similar experiment in a bosonic system yields an undamped and completely periodic oscillation, which can be well described by a sinusoidal function with frequency Ω [1]. This is because for bosons a macroscopic number of atoms occupy the resonant k = 0 mode, and therefore there is a single Rabi frequency, determined by the Raman coupling only. For fermions, in contrast, atoms occupy different momentum states. Due to the effect of SOC, the coupling between the two spin states and the resulting energy splitting are momentum dependent, and atoms in different momentum states oscillate with different frequencies. Hence, dephasing naturally occurs and the oscillation is inevitably damped after several oscillation periods. In our case, the spin-dependent momentum distribution shown in Fig. 23(b) clearly shows the out-of-phase oscillation of different momentum states. For a non-interacting system, the population of the |↓⟩ component is given by

n_↓(k, r, t) = n_↑(k − 2k_r x̂, r, 0) [Ω²/(Ω² + δ²(k))] sin²[√(Ω² + δ²(k)) t/2],

where t is the duration of the Raman pulse, n_↑(k, r, 0) is the equilibrium distribution of the initial state in the local density approximation, and δ(k) is the momentum-dependent two-photon detuning, which vanishes at k_x = 2k_r for the parameters chosen here. From Eq. (62) one can see that the momentum distribution along the x-axis of the |↓⟩ component is always symmetric with respect to 2k_r at any time, which is clearly confirmed by the experimental data shown in Fig. 23(b). The total population in the |↓⟩ component is given by N_↓(t) = ∫dk dr n_↓(k, r, t), and in Fig. 23(c) one can see that there is excellent agreement between the experimental data and theory, from which we determine Ω = 1.52(5)E_r.

Momentum distribution

We focus on the case with δ = 0 and study the momentum distribution in the equilibrium state. We first transfer half of the ⁴⁰K atoms from |↓⟩ to |↑⟩ using a radio-frequency sweep within 100 ms. Then the Raman coupling strength is ramped up adiabatically in 100 ms from zero to its final value, and the system is held for another 50 ms before the time-of-flight measurement. Since SOC breaks the spatial reflection symmetry (x → −x and k_x → −k_x), the momentum distribution of each spin component will be asymmetric, i.e., n_σ(k) ≠ n_σ(−k), with σ = ↑, ↓. On the other hand, when δ = 0 the system still satisfies n_↑(k) = n_↓(−k). The asymmetry can be clearly seen in the spin-resolved time-of-flight images and integrated distributions displayed in Figs. 24(a) and (b), where the fermion density is relatively low. It becomes less significant when the fermion density is higher, as shown in Fig. 24(c), because the strength of the SOC is then relatively weak compared to the Fermi energy. Although the presence of the Raman lasers causes additional heating of the cloud, we find that the temperature is within the range of 0.5 − 0.8T_F, which is still below the degeneracy temperature. In Fig.
24(d-f), we also show n_σ(k_x) − n_σ(−k_x) to reveal the momentum-distribution asymmetry more clearly.

Lifshitz transition

With SOC, the single-particle spectrum of Eq. (39) is dramatically changed from two parabolic dispersions into two helicity branches, as shown in Fig. 25(b). Here, the two branches are eigenstates of the "helicity" operator ŝ, which describes whether the spin σ is parallel or anti-parallel to the "effective Zeeman field" h = (−Ω, 0, k_r p_x/m + δ) at each momentum, i.e., ŝ = σ·h/|σ·h|; s = 1 for the upper branch and s = −1 for the lower branch. The topology of the Fermi surface exhibits two transitions as the atomic density varies. At sufficiently low density, it consists of two disjoint Fermi surfaces with s = −1, which gradually merge into a single Fermi surface as the density increases past n_c1. Finally, a new small Fermi surface appears at the center of the large Fermi surface when the density increases further and fermions begin to occupy the s = 1 helicity branch at n_c2. A theoretical ground-state phase diagram for the uniform system is shown in Fig. 25(a), and an illustration of the Fermi surfaces at different densities is shown in Fig. 25(b). Across the phase boundaries, the system undergoes Lifshitz transitions as the density increases [109], which is a unique property of a Fermi gas due to the Pauli principle. We fix the Raman coupling and vary the atomic density at the center of the trap, as indicated by the red arrow in Fig. 25(a). In Fig. 25(c1-c5), we plot the quasi-momentum distribution in the helicity basis for different atomic densities. At the lowest density, the s = 1 helicity branch is nearly unoccupied, which is consistent with the Fermi surface lying below the s = 1 helicity branch. The quasi-momentum distribution of the s = −1 helicity branch clearly exhibits a double-peak structure, which reveals that the system is close to the boundary of having two disjoint Fermi surfaces in the s = −1 helicity branch. As the density increases, the double-peak feature gradually disappears, indicating that the Fermi surface of the s = −1 helicity branch finally becomes a single elongated one, like the top one in Fig. 25(b). Here we define the visibility v = (n_A − n_B)/(n_A + n_B), where n_A is the density of the s = −1 branch at the peak and n_B is the density at the dip between the two peaks. Theoretically, one expects v to approach unity in the low-density regime and zero in the high-density regime. In Fig. 25(d) we show that the measured visibility decreases as the density increases and agrees very well with a theoretical curve at a fixed temperature of T/T_F = 0.65. Moreover, across the phase boundary between SFS and DFS-1, one expects a significant increase of the population in the s = 1 helicity branch. In Fig. 25(e), the fraction of the atom population in the s = 1 helicity branch is plotted as a function of the Fermi momentum k_F; it grows near the critical point predicted by the zero-temperature phase diagram. The blue solid line is a theoretical calculation of N₊/N at T/T_F = 0.65, and the small deviation between the data and this line is due to the temperature variation between different measurements. Because the temperature is too high, the transition is smeared out. For both v and N₊/N we observe only a smooth decrease or growth across the regime where a sharp transition is expected; however, the agreement with theory suggests that with better cooling a sharper transition should be observable.
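The washing-out of the double-peak structure can be sketched numerically. The following minimal Python example uses the δ = 0 lower-branch dispersion E_−(k) = k²/(2m) − √((λk_x)² + Ω²/4) and assumed illustrative parameters (not the experimental values) to compute the visibility from a finite-temperature occupation.

import numpy as np

# Minimal sketch: visibility v = (nA - nB)/(nA + nB) of the double-peak
# structure in the lower (s = -1) helicity branch at finite temperature.
# All parameter values are illustrative.
m, lam, Omega, T, mu = 1.0, 1.0, 0.8, 0.15, -0.2

def E_minus(kx, kp):
    return (kx**2 + kp**2)/(2*m) - np.sqrt((lam*kx)**2 + (Omega/2)**2)

def n_minus(kx):
    # column density: integrate the Fermi function over transverse momenta
    kp = np.linspace(0.0, 6.0, 400)
    x = (E_minus(kx, kp) - mu)/T
    f = 1.0/(np.exp(np.clip(x, -50, 50)) + 1.0)
    return np.trapz(2*np.pi*kp*f, kp)

kxs = np.linspace(-3.0, 3.0, 301)
n = np.array([n_minus(k) for k in kxs])
nA = n.max()                          # peak density
nB = n[len(kxs)//2]                   # dip at kx = 0
print("visibility v =", (nA - nB)/(nA + nB))

Raising µ (i.e., the density) fills in the dip and drives v toward zero, mirroring the trend in Fig. 25(d).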
Momentum-resolved rf spectrum

The effect of SOC is further studied with momentum-resolved rf spectroscopy [71], which maps out the single-particle dispersion relation. A Gaussian-shaped rf pulse is applied for 200 µs to transfer atoms from the |9/2, 7/2⟩ (|↓⟩) state to the final state |9/2, 5/2⟩, as shown in Fig. 26(a), and the spin population in |9/2, 5/2⟩ is then measured with time-of-flight at different rf frequencies. In Fig. 26(b) we plot an example of the final-state population as a function of the momentum p_x and the frequency of the rf field ν_RF, from which one can clearly see the back-bending feature and the gap opening at the Dirac point. Both are clear evidence of SOC. For an occupied state, the initial-state dispersion ε_i(k) can be mapped out by

ε_i(k) = ε_f(k) + E_Z − hν_RF,

where ε_f(k) = k²/2m is the dispersion of the final |9/2, 5/2⟩ state, and E_Z is the energy difference between the |9/2, 7/2⟩ and |9/2, 5/2⟩ states. Here, the momentum of the rf photon is neglected; thus the rf pulse does not impart momentum to the atom in the final state. In Fig. 26(c) we show three measurements corresponding to (c1), (c3) and (c5) in Fig. 25. For (c1), clearly only the s = −1 branch is populated. For (c3), the population is slightly above the s = 1 helicity branch. And for (c5), there is already a significant population in the s = 1 helicity branch. In (c5) one can also identify the chiral nature of the two helicity branches: for the s = −1 branch, most left-moving states are dominated by the |↓⟩ state, while for the s = 1 branch, right-moving states are mostly dominated by the |↓⟩ state. The theoretical simulation of momentum-resolved rf spectroscopy has been performed and discussed in Sec. II B 1 (see, in particular, Fig. 3). We note that the definitions of momentum and rf frequency there are different; they are related by k_x = −p_x − k_r and ω = −ν_RF. The single-particle spectrum has also been measured using the technique of spin-injection spectroscopy in a spin-orbit coupled Fermi gas of ⁶Li by the MIT group [12]. In that work, the four lowest hyperfine states, |3/2, −1/2⟩, |3/2, −3/2⟩, |1/2, −1/2⟩, |1/2, 1/2⟩, are chosen and labelled as |↑i⟩, |↑f⟩, |↓f⟩, |↓i⟩. The Raman process couples |↑f⟩ to |↓f⟩ to induce SOC between these two states. For momentum-resolved rf spectroscopy, the state |↓i⟩ is coupled via an rf field to the state |↓f⟩, as this connects the first and second lowest hyperfine states. Similarly, an atom in the state |↑i⟩ is coupled to |↑f⟩. Since the dispersions of the initial states |↑i⟩ and |↓i⟩ (ε_i(k) = k²/2m) are known, the spectra of the final states, which are subject to the SOC, are obtained. The dispersion investigated above is the simplest case for a spin-orbit coupled system. An even richer band structure, involving multiple spinful bands separated by fully insulating gaps, can arise in the presence of a periodic lattice potential. This has been realized for Bose-Einstein condensates by adding an rf coupling between the Raman-coupled states |↑f⟩ and |↓f⟩ [110]. Using a similar method, a spinful lattice for ultracold fermions has been created, and one can use spin-injection spectroscopy to probe the resulting spinful band structure [12]; see, for example, Fig. 4.

B. The strongly interacting spin-orbit coupled Fermi gas

We now consider the Fermi gas where interactions cannot be neglected. In particular, we focus on the effect of SOC on fermionic pairing.
Integrated radio-frequency spectrum

To create a strongly interacting Fermi gas with spin-orbit coupling, the bias magnetic field is first tuned from a high magnetic field above the Feshbach resonance to a final value B (which is varied) below the Feshbach resonance; Feshbach molecules are created in this process. Then, we adiabatically ramp up the Raman coupling strength in 15 ms from zero to its final value Ω = 1.5E_r, with Raman detuning δ = 0. The temperature of the Fermi cloud after switching on the Raman beams is about 0.6T_F [11]. The Fermi energy is E_F ≈ 2.5E_r and the corresponding Fermi wavevector is k_F ≈ 1.6k_r. To characterize the strongly interacting spin-orbit coupled Fermi system, we apply a Gaussian-shaped rf pulse with a duration of about 400 µs and frequency ω to transfer the spin-up fermions to an unoccupied third hyperfine state |3⟩ = |F = 9/2, m_F = −5/2⟩. In Fig. 27(b), we show the integrated rf spectrum of an interacting Fermi gas below the Feshbach resonance, with and without spin-orbit coupling. Here, we carefully choose the one-photon detuning of the Raman lasers to avoid shifting the Feshbach resonance through the bound-to-bound transition, driven by the Raman laser, between the ground Feshbach molecular state and the electronically excited molecular state. We also make sure that the single-photon process does not affect the rf spectrum. The narrow and broad peaks in the spectrum should be interpreted respectively as the rf response from free atoms and from fermionic pairs. With spin-orbit coupling, we find a systematic blue shift in the atomic response and a red shift in the pair response. The latter is an unambiguous indication that the properties of fermionic pairs are strongly affected by spin-orbit coupling [13]. The red shift of the response from the pairs may be understood from the binding energy of pairs in the two-body limit. As mentioned below Eq. (39), the Raman coupling may be regarded as an effective Zeeman field. The stronger the effective Zeeman field, the smaller the binding energy of the two-particle bound state [24,26]. In Fig. 28, we compare the experimentally measured rf spectrum with the many-body T-matrix prediction, which is obtained within the pseudogap approximation [13] (see the discussion in Secs. II A 3 and II A 5). In the calculation, at a qualitative level, we do not consider the trap effect and take the relevant experimental parameters at the trap center. Otherwise, there are no adjustable free parameters in the theoretical calculations. As shown in Fig. 28, we find a qualitative agreement between theory and experiment, both of which show the red shift of the response from fermionic pairs. Note that, near Feshbach resonances, our many-body pseudogap theory is only qualitatively reliable. It cannot explain well the separation of the atomic and pair peaks in the observed integrated rf spectrum. More seriously, it fails to take into account properly the strong interactions between atoms and pairs.

Coherent formation of Feshbach molecules by spin-orbit coupling

In a recent experiment, we studied the formation of Feshbach molecules from an initially spin-polarized Fermi gas [15]. For simplicity, let us consider two atoms both prepared in the |↓⟩ state. We label this state as |↓⟩₁|↓⟩₂, which is obviously a spin-symmetric state. Under the s-wave interaction, the Feshbach molecule is a spin-antisymmetric singlet state. Hence, to form a Feshbach molecule from this initial state, a spin-antisymmetric coupling is required.
To this end, we apply two Raman laser beams that effectively couple the hyperfine states |↑⟩ and |↓⟩. The effective Hamiltonian arising from the Raman beams can be written as H_R = H_R^(1) + H_R^(2), with

H_R^(j) = (Ω/2) [ e^{2ik₀x_j} σ₊^(j) + e^{−2ik₀x_j} σ₋^(j) ]

for j = 1, 2. Here σ₊^(j) and σ₋^(j) are the spin raising and lowering operators of the j-th atom (cf. Eq. (64)), Ω is the Raman coupling intensity, x_j is the position of the j-th atom in the x-direction, and k₀ = k_r sin(θ/2), with k_r the single-photon recoil momentum and θ the angle between the two Raman beams. It is apparent that H_R can be written as H_R = H_R^(+) + H_R^(−), where

H_R^(±) = (Ω/4) [ e^{2ik₀x₁} ± e^{2ik₀x₂} ] [ σ₊^(1) ± σ₊^(2) ] + h.c.

Obviously, H_R^(−) can create a spin-antisymmetric state out of the initially polarized state |↓⟩₁|↓⟩₂ and, as a consequence, makes the formation of a Feshbach molecule possible. When the two Raman beams propagate along the same direction, i.e., θ = 0, we have k₀ = 0 and thus H_R^(−) = 0. Then the Feshbach molecule cannot be produced from the polarized atoms. In contrast, when the angle θ between the two Raman beams is non-zero, we have H_R^(−) ≠ 0 and Feshbach molecules can thus be created. This picture is exactly confirmed by our data.

Our experiment is performed with a spin-polarized ⁴⁰K gas in the |F, m_F⟩ = |9/2, −9/2⟩ state at 201.4 G, below the Feshbach resonance located at 202.1 G, which corresponds to a binding energy of E_b = 2π × 30 kHz (corresponding to 3.59E_r) for the Feshbach molecules and 1/(k_F a_s) ≈ 0.92 for our typical density. After applying the Raman lasers for a certain duration, we turn them off and measure the populations of Feshbach molecules and of atoms in the |9/2, −7/2⟩ state with an rf pulse. This rf field drives a transition from |9/2, −7/2⟩ to |9/2, −5/2⟩. For a mixture of |9/2, −7/2⟩ atoms and Feshbach molecules, we find two peaks in the population of |9/2, −5/2⟩ as a function of the rf frequency ν_RF, as shown in Fig. 29(b). The first peak (blue curve) is attributed to the free atom-atom transition and the second peak (red curve) to the molecule-atom transition. Thus, in the following, we set ν_RF/2π to 47.14 MHz to measure Feshbach molecules. Setting the two-photon Raman detuning to δ = −E_b = −3.59E_r, as shown in Fig. 29(a), we measure the Feshbach-molecule population as a function of the pulse duration for three different angles, θ = 180°, θ = 90° and θ = 0°, as shown in Fig. 29(c), (d) and (e). For θ = 180°, Feshbach molecules are created by the Raman process and the coherent atom-molecule Rabi oscillation can be seen clearly. For θ = 90°, the production of Feshbach molecules is somewhat reduced and the atom-molecule Rabi oscillation becomes invisible. For θ = 0°, no Feshbach molecules are created even up to 40 ms, which means that the transition between a fully polarized state and Feshbach molecules is prohibited if the Raman process imparts no momentum transfer, i.e., if there is no SOC.
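As a consistency check of the H_R^(±) decomposition written above, the following short NumPy sketch (our own illustration; the positions x₁, x₂ and couplings are arbitrary) verifies on the two-atom spin space that H_R = H_R^(+) + H_R^(−), that only H_R^(−) connects the polarized state |↓⟩₁|↓⟩₂ to the spin singlet, and that this coupling vanishes at θ = 0:

```python
import numpy as np

# Two-atom spin space, basis |s1 s2> with index 0 = up, 1 = down per spin.
sp = np.array([[0, 1], [0, 0]], dtype=complex)        # sigma_+ of one spin
I2 = np.eye(2, dtype=complex)
sp1, sp2 = np.kron(sp, I2), np.kron(I2, sp)           # sigma_+^(1), sigma_+^(2)

Omega, kr = 1.0, 1.0
x1, x2 = 0.37, -0.81                                  # arbitrary positions

def hamiltonians(theta):
    k0 = kr * np.sin(theta / 2)
    p1, p2 = np.exp(2j * k0 * x1), np.exp(2j * k0 * x2)
    HR = Omega / 2 * (p1 * sp1 + p2 * sp2); HR += HR.conj().T
    Hp = Omega / 4 * (p1 + p2) * (sp1 + sp2); Hp += Hp.conj().T
    Hm = Omega / 4 * (p1 - p2) * (sp1 - sp2); Hm += Hm.conj().T
    return HR, Hp, Hm

dn_dn   = np.array([0, 0, 0, 1], dtype=complex)               # |dn,dn>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2) # (|ud>-|du>)/sqrt2

for theta in (np.pi, np.pi / 2, 0.0):                 # angle in radians
    HR, Hp, Hm = hamiltonians(theta)
    assert np.allclose(HR, Hp + Hm)                   # decomposition holds
    print(theta, abs(singlet @ Hm @ dn_dn), abs(singlet @ Hp @ dn_dn))
# Only H^(-) has a nonzero singlet matrix element, and it vanishes at theta = 0,
# mirroring the observed suppression of molecule formation for copropagating beams.
```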
In a related work, the NIST group recently carried out an experiment in which they swept a magnetic field on the BEC side of the Feshbach resonance [14]. It is shown that the number of remaining atoms exhibits a dip as a function of the magnetic field strength. This dip represents the loss of atoms due to the formation of Feshbach molecules. The position of the dip moves towards lower field (towards the BEC limit) as the Raman detuning δ is increased. The phenomenon can also be explained by the fact that the effective Zeeman field (in this case, the detuning δ) disfavors the formation of bound molecules. Hence, at larger δ, a larger a_s⁻¹ (i.e., a stronger attraction between unlike spins) is required to form molecules [26]. This behavior is in full agreement with the theoretical discussion of the two-body physics for the equal-weight Rashba-Dresselhaus SOC presented in Sec. II B 2.

IV. CONCLUSION

In this chapter, we described the properties of a spin-orbit coupled Fermi gas. Recent progress, both theoretical and experimental, was reviewed. As we have shown, spin-orbit coupled Fermi gases possess a variety of intriguing properties. The diverse configurations of the synthetic gauge field and the extraordinary controllability of atomic systems provide new opportunities to explore quantum many-body systems and quantum topological matter. We note that this article is by no means a comprehensive review. For example, we focused only on continuum systems and neglected many interesting theoretical works on lattice systems.

So far only one particular scheme of SOC (equal-weight Rashba-Dresselhaus) has been realized in experiment, based on the Raman transition between two hyperfine ground states of the atom. One drawback of the laser-based SOC generating scheme is that the application of the laser fields inevitably induces additional heating. For certain atoms, this heating may be severe enough to prevent the system from becoming quantum degenerate. Furthermore, many interesting phenomena require a strong interaction strength, which is induced by applying a fairly strong magnetic field via the Feshbach resonance. Due to the decoupling of the nuclear and electronic spins in large magnetic fields, the Raman coupling efficiency decreases quickly with increasing magnetic field [108]. This poses another severe experimental challenge. For these reasons, no superfluid spin-orbit coupled Fermi gas has been realized yet. As a result, many interesting theoretical proposals (e.g., topological superfluids, Majorana fermions, etc.) are still waiting to be experimentally realized. Nevertheless, we remark that despite the relatively high temperature of the experimental systems, the effects of SOC have been clearly revealed in single-particle properties as well as in the two- and many-body properties on the BEC side of the resonance, as such properties are not easily washed out by finite-temperature effects. Very recently, a scheme to synthesize a general SOC has been proposed that is based on purely magnetic field pulses and involves no laser fields [111,112]. Whether this scheme will overcome the problems mentioned above remains to be seen.
Reduction of the Incidence of Retained Placenta in Cows Treated with a New Chinese Herbal Medicine Dang Hong Fu used as Aqua-acupuncture at GV-1

Chao-ying Luo DVM, Jin-yu Li DVM, Jian-hua Wang DVM, Ji-fang Zheng DVM, Lan-ying Hua DVM, Zhenying Hu DVM, Dong-sheng Wang DVM, Yong-jiang Luo DVM, Hai-yan Kou DVM, Rui Wang DVM

ABSTRACT

The objective of this study was to develop a new Chinese herbal aqua-acupuncture formulation, Dang Hong Fu, test its safety in rabbits and evaluate its efficacy in preventing retained placenta in cows when injected into acupoint GV-1. Dang Hong Fu was administered intraocularly, subdermally, intraperitoneally and intravenously to rabbits and no adverse reactions were observed, except transient swelling of the ear flap after subdermal injection. One hundred and twenty-four pregnant cows were selected as an untreated control group and observed after calving to determine the retained placenta rate for the farm. Fifty-two pregnant cows from the same farm were selected for the study and randomly assigned to two groups: 30 cows in the Dang Hong Fu group and 22 cows in a saline control group. Immediately after calving, 40 ml of Dang Hong Fu (40 grams of dried herbs) was injected into GV-1 in the herbal group and 40 ml of physiological saline was injected at the same site in the saline control group. Both groups were observed for retained placentas, and the time until placental expulsion was recorded for the cows that expelled the placenta. The retained placenta rate in the untreated control group was 35.5% (44/124). The incidence of retained placenta was 16.7% in the Dang Hong Fu group and 31.8% in the saline control group. The time to expulsion of the placental membranes was a mean of 9 hours (range 3.5-24 hours) in the Dang Hong Fu group and a mean of 14.7 hours (range 3.0-24 hours) in the saline control group. Compared to the untreated control group, Dang Hong Fu aqua-acupuncture significantly reduced the incidence of retained placentas (p = 0.047; < 0.05), but saline aqua-acupuncture did not (p = 0.740; > 0.05). Herbal aqua-acupuncture may offer an easy treatment method to reduce the incidence of retained placenta with no adverse side effects.

When Chinese herbal medicine is injected into acupoints, the herbal dosage may be reduced by 20-50%, the clinical results may be improved by 10-20%, the treatment time may be shortened and the duration of its effectiveness may be extended.1-3 The injection of Chinese herbal medicine into acupoints provides a new direction for veterinary acupuncture application and has the potential to reduce the side effects of some Chinese herbal medicines and the chance of medical abuse in clinical practice.

The acupoint GV-1 (Hou Hai or Jiao Cao) is located in the depression halfway between the anus and the ventral aspect of the caudal (coccygeal) vertebrae. Since GV-1 is one of the standard acupoints used for the treatment of retained placenta, it has been selected as one of the injection sites for Chinese herbal injectable formulations and vaccinations.1
Using different ingredients and processing methods, seven Chinese herbal injectable preparations were initially developed by the authors for aqua-acupuncture treatment of retained placenta in cows, but only four had acceptable diaphaneity (transparency) and precipitation qualities. When 20 ml of the four preparations was injected into GV-1 of test cows with retained placenta, there was no effect. When 40 ml of the preparations was injected into GV-1, the preparation named Dang Hong Fu had the best effect, and it was chosen to be tested for safety and then used in a clinical trial in cows to prevent retained placenta.

Dang Hong Fu is a modification of the classical formula Sheng Hua Tang, which was recorded in Fu Qing Zhu Nü Ke (Women's Diseases According to Fu Qingzhu) written in 1827 and is a very popular postpartum uterine cleansing formula for both women and domestic animals in China.4,5 Dang represents Dang Gui (Radix Angelicae Sinensis), Hong represents Hong Hua (Flos Carthami), and Fu means blessing or good fortune. Dang Gui can nourish the Blood, stimulate circulation, and regulate menstruation to alleviate pain. Hong Hua (Flos Carthami) also has the function of boosting Blood circulation and regulating menstruation. These two primary ingredients are assisted by two secondary ingredients, Yi Mu Cao (Herba Leonuri Japonici) and Huang Qi (Radix Astragali Mongolici). Yi Mu Cao (Herba Leonuri Japonici) promotes Blood circulation to restore menstrual flow, and Huang Qi (Radix Astragali Mongolici) invigorates Qi, strengthens the body, expels pus and releases toxins. These effects are further enhanced by two peripheral herbs: Pao Jiang (Dried Ginger), which warms the Channels, and Che Qian Zi (Semen Plantaginis), which controls bleeding. Che Qian Zi (Semen Plantaginis) also expels Dampness and relieves fever. All these herbs work together to resolve Blood deficiency, Qi Stagnation, Blood Stasis and Cold and Damp coagulation, and thus provide an effective treatment for retained placenta. Dang Hong Fu showed promise as a treatment to prevent retained placenta in cows, with no evidence of toxicity, but a systematic study was needed.

The objective of this study was to evaluate the safety of the injectable form of Dang Hong Fu in rabbits and then inject it into GV-1 in an experimental test group of cows after calving and compare the incidence of retained placentas and the time of placental expulsion with a control group receiving physiological saline and a control group of cows from the same farm receiving no treatment.
MATERIALS AND METHODS

The ingredients of the Chinese herbal formula Dang Hong Fu are: Dang Gui (Radix Angelicae Sinensis), 30%; Hong Hua (Flos Carthami), 15%; Yi Mu Cao (Herba Leonuri Japonici), 20%; Huang Qi (Radix Astragali Mongolici), 15%; Pao Jiang (Prepared Dried Ginger), 10%; and Che Qian Zi (Semen Plantaginis), 10% a. These ingredients were mixed and decocted with water for 30 minutes the first time and 20 minutes the second time, and then distilled b,c with alcohol d. A small amount of Tween-80 d was added to increase solubility. The herbal liquid was bottled in 20 ml ampoules (each containing 20 grams of the herbal mixture), sterilized, numbered and labeled (Figure 1). Diaphaneity (transparency) and precipitation of the prepared liquid were examined visually without magnification, under sunlight or lamp light, against a black background and while the ampoules were rotated slowly. This examination was performed once per week for 6 months and then once a month for two years to ensure the solution was stable.

Figure 1: Flow chart of the herbal preparation

Twelve (6 male and 6 female) pure-bred Japanese White Rabbits, weighing 2-3 kg, were used for the toxicity studies. They were clinically healthy and vaccinated against rabbit hemorrhagic disease and Pasteurella multocida. They were housed at 15-18 degrees centigrade with both natural and lamp lighting for illumination. Before the experiments, the rabbits were observed for one week to ensure they were being maintained under healthy conditions.

Local sensitivity to the Dang Hong Fu injectable solution was evaluated by placing drops on the rabbits' eyes and injecting the solution subdermally in the ear flap and intramuscularly in the quadriceps femoris muscle. Physiological saline was placed onto the right eyes of three rabbits and liquid Dang Hong Fu solution was dropped onto the left eyes of the same rabbits, and the reactions in the left and right eyes were compared for two days. Three rabbits received 1 ml of saline subdermally in the left ear flap and 1 ml of the injectable solution of Dang Hong Fu subdermally in the right ear flap, and the reactions on the two sides were compared for two days. One ml of physiological saline and one ml of the injectable solution of Dang Hong Fu were injected into the left and right quadriceps femoris muscles, respectively, and the local muscle reactions were compared for each site for 2 days.

The other six rabbits were administered 2 ml of the Dang Hong Fu injectable solution into the peritoneal cavity every other day, 3 times. These rabbits were then divided into two groups of three. The three rabbits in the first group received 5 ml of the Dang Hong Fu injectable solution via jugular vein injection on the 14th day after the intraperitoneal injections. The second group of 3 rabbits received 5 ml of the Dang Hong Fu injectable solution via jugular vein injection on the 21st day after the intraperitoneal injections. The rabbits were observed for evidence of anaphylaxis. The rectal temperature of the rabbits was taken 3 hours after the injection to monitor for a pyrogenic effect.
Fifty-two Black-and-White Dairy cows of China, 2-4 years old, 400-800 kg body weight and with fairly complete clinical records, were selected for the clinical trial. The cows used in this study were clinically healthy and had previously calved, with an incidence rate of retained placenta of approximately 35%. They were brought in from the Second, Fourth and Fifth Farms of the Pasturage Company, Xian Modern Agricultural Corporation. Sequenced according to the time they last calved, odd-numbered and even-numbered cows were divided into two groups: 30 cows in the experimental Dang Hong Fu group and 22 cows in the saline control group. One hundred and twenty-four pregnant cows of similar type and ages, from the same farms, were used to determine the number of retained placentas and served as a non-treatment control group for comparison.

Eighteen-gauge, 10 cm hypodermic needles were used to inject either the Dang Hong Fu solution or physiological saline into GV-1. Individual autoclave-sterilized needles were used for each cow, and separate sterile syringes were used for the Dang Hong Fu and physiological saline solutions, so no contamination was possible. The needle was inserted into the depression between the anus and ventral caudal vertebrae and was directed upward and forward 5-7 cm, parallel to the dorsal keel. The Dang Hong Fu or physiological saline solution was slowly injected while withdrawing the needle.

In the experimental (test) group, 40 ml of Dang Hong Fu solution (40 grams of dried medicinal herbs) was injected into GV-1 immediately after calving. In the control group, 40 ml of physiological saline was injected into GV-1 immediately after calving. All of the cows were observed for 24 hours. The placenta was considered retained if it was not expelled, or was only partially expelled, within 24 hours. The incidence rate of retained placenta was recorded for each group. The time from immediately after calving until the placenta was completely expelled was also observed and recorded, in hours. Cows that failed to expel the placenta within 24 hours after calving were treated with other therapies and their time of expulsion was measured in days.

The incidence of retained placenta for the Dang Hong Fu treated, saline treated and untreated groups was compared using Chi-square tests of crosstabs d, with p < 0.05 taken as a significant difference between groups. The mean times until placental expulsion for the Dang Hong Fu and saline GV-1 aqua-acupuncture groups were compared using the t-test for independent samples e, with p < 0.01 taken as a significant difference between groups.
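As a cross-check of the statistics described above, the following minimal Python sketch (assuming the counts reported in the Results: 5/30, 7/22 and 44/124, and a chi-square test without continuity correction) reproduces the quoted p-values:

```python
from scipy.stats import chi2_contingency

# [retained, expelled] counts for each group
groups = {
    "Dang Hong Fu": [5, 25],    # 5/30 retained
    "saline":       [7, 15],    # 7/22 retained
    "untreated":    [44, 80],   # 44/124 retained
}

for a, b in [("Dang Hong Fu", "untreated"),
             ("saline", "untreated"),
             ("Dang Hong Fu", "saline")]:
    chi2, p, dof, expected = chi2_contingency([groups[a], groups[b]],
                                              correction=False)
    print(f"{a} vs {b}: chi2 = {chi2:.2f}, p = {p:.3f}")

# -> p = 0.047, 0.740 and 0.200, matching the values reported in the paper.
```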
RESULTS

Physiological saline resulted in no tearing, irritation or hyperemia of the conjunctiva in the rabbits. The Dang Hong Fu injection solution caused mild tearing (1 drop of tears), but no conjunctival irritation or hyperemia. There was no swelling when saline was injected subdermally in the ear flap, but the Dang Hong Fu injection solution caused mild local swelling, which may have been due to the high infiltration pressure, because it was gone in 2 hours (Tables 1 and 2). When injected into the quadriceps femoris muscles of the rabbits, neither the saline nor the Dang Hong Fu injection solution caused any local inflammatory reaction within 24 hours. Within three hours after intravenous injection of the Dang Hong Fu injection solution, the maximum increases in body temperature in the three test rabbits were 0.3°C, 0.4°C and 0.5°C, all below 0.6°C. These results indicate that the Dang Hong Fu injectable solution complies with the pyrogen test standards of safety f. No anaphylactic reactions were observed in any of the rabbits following intravenous saline or Dang Hong Fu injections. No toxicity or reactions at the injection site were observed in the cows in this study.

The results of the study are outlined in Tables 3 and 4. The incidence of retained placenta in the 124 untreated calved cows was 44/124 (35.5%). In the group of 30 cows injected with Dang Hong Fu solution at GV-1, 5/30 (16.7%) had a retained placenta, and in the group of 22 cows injected with physiological saline at GV-1, 7/22 (31.8%) had a retained placenta. There was a significant reduction in the incidence of retained placentas in cows receiving Dang Hong Fu aqua-acupuncture at GV-1 as compared to cows receiving no treatment (p = 0.047; < 0.05) (Figure 2). There was no significant reduction in the incidence of retained placentas in cows receiving saline aqua-acupuncture at GV-1 as compared to cows receiving no treatment (p = 0.740; > 0.05) (Figure 2). There was also no significant difference in the incidence of retained placenta in cows receiving saline aqua-acupuncture at GV-1 as compared to cows receiving Dang Hong Fu aqua-acupuncture at GV-1 (p = 0.200; > 0.05) (Figure 2). In the group of 30 cows injected with Dang Hong Fu solution at GV-1, the placenta was expelled in a mean time of 9 ± 7.06 hours (range 3.5-24 hours). In the group of 22 cows injected with physiological saline at GV-1, the placenta was expelled in a mean time of 14.7 ± 6.97 hours (range 3.0-24 hours). Compared to saline injected at GV-1, Dang Hong Fu solution injected at GV-1 significantly reduced the placental expulsion time (p = 0.005; < 0.01).

DISCUSSION

Intervet and Schering-Plough Animal Health, Netherlands, defines retained placenta as the disease in which a cow cannot expel the fetal membranes within 12-24 hours after calving.6
In this study the placenta was considered retained if it did not completely expel within 24 hours. The reported incidence of retained placenta ranges from 3 to 39%, partly because of variations in the definition of this disease. In 1987, an incidence rate of 17.8% (1517/8521) was reported in Israel.7 In 1996, in a United States Department of Agriculture study, the reported incidence of retained placenta was 7.8 ± 0.2%.8 In 2002, the United States National Animal Disease Center in Ames, Iowa reported an incidence rate of 17.9%, and the dairy farm at Iowa State University reported an incidence rate of 12.6%.8 In 2008, an incidence rate of 16.55% (240/1450) was reported in Croatia.9-12 Since the incidence of retained placenta varies by region and also by farm, the incidence of retained placenta from the same farm (35.5%) was used for comparison with the Dang Hong Fu experimental and saline control aqua-acupuncture groups.

It has been well recognized that placental retention can be associated with abortion, stillbirth, twins, dystocia, induction of parturition with PGF2alpha, metabolic disorders such as milk fever, and infections such as brucellosis, leptospirosis, vibriosis, listeriosis and infectious bovine rhinotracheitis.6,10-12 Retained placenta also decreases milk production.13 In the United States, the financial loss has been estimated at $328 per case.14 Based on the incidence reports, it is clear that retained placenta is one of the major postpartum problems that threatens the health of dairy cows and negatively impacts the milk production industry. Therefore, effective prevention and treatment are essential for both conventional and Traditional Chinese Veterinary Medicine (TCVM).

In this study, it was found that when 40 ml of Dang Hong Fu solution (40 grams of dried herbs) was injected once into the acupoint GV-1 immediately after calving, the incidence of retained placenta was significantly reduced as compared to cows receiving no treatment. Since there was no reduction in the incidence rate of retained placenta in the saline aqua-acupuncture group compared to the non-treated group, it can be concluded that the decrease in the incidence of retained placenta was due to the herbal aqua-acupuncture rather than mere stimulation of the GV-1 acupoint. These clinical experiences of the authors indicate that herbal aqua-acupuncture is superior to saline aqua-acupuncture or no treatment, not only in reducing the incidence of retained placentas in a herd but also in decreasing the placental expulsion times. The reduction in the rate of retained placenta was clearly shown between the herbal aqua-acupuncture and no-treatment groups, but was not significant between the herbal aqua-acupuncture and saline aqua-acupuncture groups.

Historically, saline aqua-acupuncture has been used for a variety of conditions with therapeutic effects.18
Since there was a relatively small number of cases in the saline group (22 cows) as compared to the untreated control group (124 cows), this may have negatively skewed the data. If a larger number of cows had been treated with GV-1 saline aqua-acupuncture or GV-1 Dang Hong Fu aqua-acupuncture and evaluated for retained placenta, there might have been a greater difference between all 3 groups, or the GV-1 Dang Hong Fu aqua-acupuncture group might have been shown to be superior to saline aqua-acupuncture. However, even with only 30 cows in the GV-1 Dang Hong Fu aqua-acupuncture group as compared to 124 in the untreated control group, there was a significant difference between the 2 groups. In addition, there was a significant reduction in placental expulsion time as compared to the GV-1 saline aqua-acupuncture group, and both of these findings support the use of GV-1 Dang Hong Fu aqua-acupuncture in the clinical setting.

Data were not available regarding the time of placental expulsion after calving for the non-treated control group, so this could not be compared with the treatment groups. The average duration until complete placental expulsion was significantly less in cows receiving GV-1 Dang Hong Fu aqua-acupuncture as compared to GV-1 saline aqua-acupuncture, suggesting that there is a definite advantage to herbal aqua-acupuncture in ensuring earlier placental expulsion. Although there are reports concerning treatment of many diseases with physiological saline aqua-acupuncture, Chinese herbal aqua-acupuncture may prove to be more effective for the treatment of diseases as well. Herbal aqua-acupuncture combines acupoint stimulation with the pharmacologic actions of the herbs.15-18 It is impossible to prevent and treat all retained placentas with one formula or one therapy, as the causes of retained placenta vary and are complex and the underlying TCVM patterns differ.15
Variations of the Dang Hong Fu acupoint injection formula may be needed for different disease patterns to improve clinical outcomes. Combining Dang Hong Fu acupoint injections with conventional treatments may provide even better results than either treatment alone. Further studies on the effects of Chinese herbal injections used as aqua-acupuncture are needed.

Figure 2: A comparison of the percentage of retained placentas (vertical axis) in the untreated control group, the saline GV-1 aqua-acupuncture control group and the Dang Hong Fu GV-1 aqua-acupuncture group. The percentage of retained placentas was significantly less in the Dang Hong Fu herbal injection group as compared to the untreated control group (p = 0.047; < 0.05), but no significant difference was found between the saline treated control group and the untreated control group (p = 0.740; > 0.05) or between the saline treated control group and the Dang Hong Fu herbal injection group (p = 0.200; > 0.05).

Table 3: The number and incidence of retained placentas for untreated cows and cows treated with aqua-acupuncture at GV-1 with either 40 ml Dang Hong Fu solution or physiological saline. (a) Significantly less in the Dang Hong Fu group as compared to the untreated control group (p = 0.047; < 0.05); (b) no significant difference between the saline treated control group and the untreated control group (p = 0.740; > 0.05); (c) no significant difference between the saline treated control group and the Dang Hong Fu herbal injection group (p = 0.200; > 0.05).

Table 4: The time of placental expulsion after injection of GV-1 with 40 ml of Dang Hong Fu solution versus physiological saline. (a) Significantly less placental expulsion time in the Dang Hong Fu group compared to the saline control group (p = 0.005; < 0.01).
Matching of gauge invariant dimension 6 operators for $b\to s$ and $b\to c$ transitions

New physics realized above the electroweak scale can be encoded in a model independent way in the Wilson coefficients of higher dimensional operators which are invariant under the Standard Model gauge group. In this article, we study the matching of the $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge invariant dim-6 operators on the standard $B$ physics Hamiltonian relevant for $b \to s$ and $b\to c$ transitions. The matching is performed at the electroweak scale (after spontaneous symmetry breaking) by integrating out the top quark, $W$, $Z$ and the Higgs particle. We first carry out the matching of the dim-6 operators that give a contribution at tree level to the low energy Hamiltonian. In a second step, we identify those gauge invariant operators that do not enter $b \to s$ transitions already at tree level, but can give relevant one-loop matching effects.

Introduction

The Standard Model (SM) of particle physics, as the gauge theory of strong and electroweak (EW) interactions, has been tested and confirmed to high precision over many years [1]. Furthermore, the observation of a Higgs boson at the LHC [2,3] and the first measurements of its production and decay channels are consistent with the SM Higgs mechanism of EW symmetry breaking. Nevertheless, the SM is expected to constitute only an effective theory, valid up to a new physics (NP) scale Λ where additional dynamical degrees of freedom enter. A renormalizable quantum field theory of NP, realized at a scale higher than the EW one, satisfies in general the following requirements: (i) Its gauge group must contain the SM gauge group $SU(3)_C \times SU(2)_L \times U(1)_Y$ as a subgroup. (ii) All SM degrees of freedom should be contained, either as fundamental or as composite fields. (iii) At low energies the SM should be reproduced, provided no undiscovered weakly coupled light particles exist (like axions or sterile neutrinos).

In most theories of physics beyond the SM that have been considered, the SM is recovered via the decoupling of heavy particles with masses $\Lambda \gg M_Z$, guaranteed, at the perturbative level, by the Appelquist-Carazzone decoupling theorem [4]. Therefore, NP can be encoded in higher-dimensional operators which are suppressed by powers of the NP scale Λ:

$$\mathcal{L}_{\rm eff} = \mathcal{L}^{(4)}_{\rm SM} + \frac{1}{\Lambda}\, C^{(5)}_{\nu\nu} Q^{(5)}_{\nu\nu} + \frac{1}{\Lambda^2} \sum_k C^{(6)}_k Q^{(6)}_k + \mathcal{O}\!\left(\frac{1}{\Lambda^3}\right). \tag{1.1}$$

Here $\mathcal{L}^{(4)}_{\rm SM}$ is the usual renormalizable SM Lagrangian, which contains only dimension-two and dimension-four operators, $Q^{(5)}_{\nu\nu}$ is the Weinberg operator giving rise to neutrino masses [5], and $Q^{(6)}_k$ and $C^{(6)}_k$ denote the dimension-six operators and their corresponding Wilson coefficients, respectively [6,7]. Even if the ultimate theory of NP were not a quantum field theory, at low energies it would be described by an effective non-renormalizable Lagrangian [8], and it would be possible to parametrize its effects at the EW scale in terms of the Wilson coefficients associated to these operators. Thus, one can search for NP in a model independent way by studying the SM extended with higher-dimensional gauge-invariant operators. Once a specific NP model is chosen, the Wilson coefficients can be expressed in terms of the NP parameters by matching the beyond-the-SM theory under consideration onto the SM enlarged with such higher dimensional operators.
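As a rough illustration of the power counting implied by eq. (1.1), the following back-of-the-envelope Python sketch (our own, assuming a Wilson coefficient C = 1 and v = 246 GeV) shows the naive relative size $v^2/\Lambda^2$ of a dimension-six contribution for a few NP scales:

```python
# Relative to a SM tree-level amplitude, a dimension-six contribution scales
# naively like C * v^2 / Lambda^2 after electroweak symmetry breaking.
v = 246.0  # GeV, Higgs vacuum expectation value
for Lambda_TeV in (1, 5, 10):
    Lambda = 1e3 * Lambda_TeV  # GeV
    print(f"Lambda = {Lambda_TeV:2d} TeV: v^2/Lambda^2 = {v**2 / Lambda**2:.1e}")
# SM FCNC amplitudes are themselves loop- and CKM-suppressed, which is why
# b -> s observables remain sensitive to corrections of this size.
```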
Flavor observables, especially flavor changing neutral current processes, are excellent probes of physics beyond the SM: since in the SM they are suppressed by the Fermi constant $G_F$ as well as by small CKM elements and loop factors, they are very sensitive to even small NP contributions. Therefore, on the one hand, flavor processes can stringently constrain the Wilson coefficients of the dimension-six operators induced by NP. On the other hand, if deviations from the SM were uncovered, flavor physics could be used as a guideline towards the construction of a theory of physics beyond the SM.

[Figure 1: the effective Lagrangian $\mathcal{L}^{(4)}_{\rm SM} + \frac{1}{\Lambda} C^{(5)}_{\nu\nu} Q^{(5)}_{\nu\nu} + \frac{1}{\Lambda^2}\sum_k C^{(6)}_k Q^{(6)}_k$ shown at the NP scale Λ and at the EW scale, connected along the energy-scale axis by the RGE evolution.]

The second point is especially interesting nowadays in light of the discrepancies between the SM predictions and the measurements of $b \to s\mu^+\mu^-$ and $b \to c\tau\nu$ processes: the combination of $B \to D^*\tau\nu$ and $B \to D\tau\nu$ branching fractions disagrees with the SM prediction [9] at the level of 3.9 standard deviations (σ) [10]. Furthermore, $b \to s\ell^+\ell^-$ global fits even show deviations between 4σ and 5σ [11-13]. These deviations have been extensively studied recently, and many NP models have been proposed to explain the anomalies (see for example refs. [34-47] for $b \to c\tau\nu$, and the corresponding literature for $b \to s\mu^+\mu^-$). Therefore, at the moment, B physics is probably our best guideline towards NP.

The effective field theory approach is an essential ingredient of all B physics calculations within and beyond the SM. However, the Hamiltonian governing $b \to s$ and $b \to c$ transitions is not invariant under the full SM gauge group, but only under $SU(3)_C \times U(1)_{\rm EM}$, since it is defined below the EW scale where $SU(2)_L \times U(1)_Y$ is broken (see for example [48,49] for a review of the use of effective Hamiltonians in B physics). As a consequence, the SM extended with gauge invariant dimension-six operators must be matched onto the low energy effective Hamiltonian governing B physics (see figure 1). In the flavor sector only partial analyses exist: the matching effects in the lepton sector were calculated in refs. [50-52],1 while in the quark sector $b \to s\mu^+\mu^-$ transitions and their correlations with $B \to K^{(*)}\nu\nu$ and $B \to D^{(*)}\tau\nu$ were studied in refs. [54-58]. However, a systematic and complete phenomenological study of the gauge invariant dimension-six operators in B physics is still missing. Such an analysis proceeds, in a bottom-up approach, in the following three steps.

(i) The matching, at the EW scale $\mu_W$ (of the order of $M_W$), of the gauge invariant operators onto the low-energy B physics Hamiltonian, by integrating out the heavy degrees of freedom represented by the top quark, the Higgs and the Z and W bosons. It is the aim of this article to perform this systematic matching of the gauge invariant operators.

(ii) The evolution of the effective Hamiltonian's Wilson coefficients from the scale $\mu_W$ down to the B meson scale $\mu_b$, where $\mu_b$ is of the order of $m_b$. This is obtained by solving the appropriate renormalization group equation (RGE); a schematic sketch is given after this list. We note that after the matching procedure the set of operators in the B physics Hamiltonian is larger than the SM one, since new Lorentz structures must be taken into account; therefore the anomalous dimension matrices also become bigger compared to the SM.2

(iii) The assessment of the constraints on the dimension-six operators' Wilson coefficients (defined at the EW scale $\mu_W$) stemming from the available flavor observables.
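Step (ii) can be illustrated schematically: at leading-logarithmic order the RGE $dC/d\ln\mu = (\gamma^T/16\pi^2)\, C$ is solved by $C(\mu_b) \approx [\,1 + \gamma^T \ln(\mu_b/\mu_W)/(16\pi^2)\,]\, C(\mu_W)$. The toy Python sketch below is our own illustration; the 2×2 anomalous dimension matrix is invented purely for demonstration, while the true matrices and operator bases are those cited in the text:

```python
import numpy as np

def run_leading_log(C_muW, gamma, mu_W=80.4, mu_b=4.8):
    """Leading-log RGE running of Wilson coefficients from mu_W to mu_b (GeV)."""
    L = np.log(mu_b / mu_W)   # negative logarithm: running downwards in scale
    return (np.eye(len(C_muW)) + gamma.T / (16 * np.pi**2) * L) @ C_muW

gamma_toy = np.array([[4.0, 8.0 / 3.0],
                      [0.0, -2.0]])      # illustrative numbers only
C_muW = np.array([1.0, 0.0])             # only the first operator at mu_W
print(run_leading_log(C_muW, gamma_toy))
# Operator mixing generates a nonzero second coefficient at mu_b even though
# it vanishes at the matching scale, which is the effect described in the text.
```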
An example of such an analysis can be found in section 5, while the complete numerical analysis will be given in a subsequent publication. The purpose of the outlined study is to depict the general pattern of deviations observed in B physics employing dimension-six operators. It is worth noting, however, that in the framework of higher dimensional operators, in order to correctly interpret any deviations from the SM in terms of a specific NP model, it is necessary to map the pattern of deviations observed at the EW scale back to the scale Λ where the BSM physics was supposedly integrated out (see figure 1). Indeed, due to operator mixing, the pattern of deviations at the EW scale differs from the pattern of Wilson coefficients at the matching scale Λ. The connection between these two mass scales is given by the RGE evolution of the dimension-six operators [63-65].

The outline of this article is as follows: in section 2 we list the operators relevant for our analysis and discuss the EW symmetry breaking. Then, in section 3, we establish our conventions for the B physics Hamiltonian and we perform the complete matching of the dimension-six operators that give contributions to $b \to s$ or $b \to c$ transitions at tree level. In section 4 we identify and calculate the leading one-loop EW matching corrections for $b \to s$ processes for those operators which do not enter $b \to s$ transitions already at tree level. A phenomenological application of the computed matching conditions will be given in section 5. Finally, we conclude.

Gauge invariant operators

In this section we list the gauge invariant operators, following the conventions of ref. [7], that contribute to $b \to s$ or $b \to c$ transitions at tree level. Here we only consider operators involving quark fields. The importance of flavor physics in constraining operators which modify triple gauge couplings has been studied in ref. [66]. Recall that the gauge invariant dimension-six operators are defined before EW symmetry breaking, implying that they are given in the interaction basis (as the mass basis is not yet defined). After EW symmetry breaking, the fermions acquire their masses and the necessary diagonalizations of their mass matrices affect the Wilson coefficients. As we will see, all these rotations can be absorbed by a redefinition of the Wilson coefficients, except for the misalignment between the left-handed up-quark and down-quark rotations, i.e. the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which relates charged and neutral currents.

Operator formalism

In table 1 we list the operators contributing to $b \to s$ transitions at tree level (and possibly also to $b \to c$), while table 2 gives the operators that generate $b \to c$ but not $b \to s$ transitions at tree level. For the SM Lagrangian we adopt the standard definition

$$\mathcal{L}^{(4)}_{\rm SM} = -\frac{1}{4} G^A_{\mu\nu} G^{A\mu\nu} - \frac{1}{4} W^I_{\mu\nu} W^{I\mu\nu} - \frac{1}{4} B_{\mu\nu} B^{\mu\nu} + (D_\mu \varphi)^\dagger (D^\mu \varphi) + m^2\, \varphi^\dagger \varphi - \frac{\lambda}{2} \left(\varphi^\dagger \varphi\right)^2 + i \left( \bar{\ell}\, \gamma^\mu D_\mu\, \ell + \bar{e}\, \gamma^\mu D_\mu\, e + \bar{q}\, \gamma^\mu D_\mu\, q + \bar{u}\, \gamma^\mu D_\mu\, u + \bar{d}\, \gamma^\mu D_\mu\, d \right) - \left( \bar{\ell}\, Y_e\, e\, \varphi + \bar{q}\, Y_u\, u\, \widetilde{\varphi} + \bar{q}\, Y_d\, d\, \varphi + \text{h.c.} \right), \tag{2.1}$$

where ℓ, q and ϕ stand for the lepton, quark and Higgs $SU(2)_L$ doublets, respectively, while the right-handed isospin singlets are denoted by e, u and d. Here $\widetilde{\varphi}^i = \varepsilon^{ij} (\varphi^j)^*$, where $\varepsilon^{ij}$ is the totally antisymmetric tensor with $\varepsilon^{12} = +1$. Flavor indices i, j, k, l = 1, 2, 3 are implicitly assigned to each fermion field appearing in (2.1), and the Yukawa couplings $Y_{e,u,d}$ are matrices in generation space. Therefore, in table 1 the operator names in the left column of each block should be supplemented with generation indices of the fermion fields whenever necessary. Covariant derivatives are defined with the plus sign, i.e. for example

$$D_\mu q = \left( \partial_\mu + i g_s\, T^A G^A_\mu + i g\, \frac{\tau^I}{2} W^I_\mu + i g'\, Y B_\mu \right) q\,,$$

where Y is the hypercharge and $T^A = \frac{1}{2}\lambda^A$; $\lambda^A$ and $\tau^I$ are the Gell-Mann and Pauli matrices, respectively.
With the above definition of the covariant derivative, the gauge field strength tensors read

$$G^A_{\mu\nu} = \partial_\mu G^A_\nu - \partial_\nu G^A_\mu - g_s f^{ABC} G^B_\mu G^C_\nu\,, \qquad W^I_{\mu\nu} = \partial_\mu W^I_\nu - \partial_\nu W^I_\mu - g\, \varepsilon^{IJK} W^J_\mu W^K_\nu\,, \qquad B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu\,.$$

Moreover, the Hermitian derivative terms are defined as

$$\varphi^\dagger i \overleftrightarrow{D}_\mu \varphi = i \varphi^\dagger \left( D_\mu - \overleftarrow{D}_\mu \right) \varphi\,, \qquad \varphi^\dagger i \overleftrightarrow{D}{}^I_\mu \varphi = i \varphi^\dagger \left( \tau^I D_\mu - \overleftarrow{D}_\mu \tau^I \right) \varphi\,.$$

Table 1: Complete list of the dimension-six operators that contribute to $b \to s$ (and possibly also to $b \to c$) transitions at tree level.

For the operators in the classes (LL)(LL), (LL)(RR), (RR)(RR) and $\psi^2 \varphi^2 D$ (except for $Q_{\varphi ud}$), Hermitian conjugation is equivalent to the transposition of generation indices in each of the fermion currents. Moreover, the operators $Q_{qq}$, $Q_{uu}$ and $Q_{dd}$ are symmetric under the exchange of flavor indices $ij \leftrightarrow kl$. Therefore, we will restrict ourselves to the operators satisfying $[ij] < [kl]$, where $[ij]$ denotes the two-digit number $[ij] = 10i + j$.

EW symmetry breaking

Although the set of gauge invariant dimension-six operators we have just introduced is written in terms of the flavor basis, actual calculations that confront theory with experiment are performed using the mass eigenbasis, which is defined after EW symmetry breaking. In the broken phase, flavor and mass eigenstates are not identical and the $SU(2)_L$ doublet components are distinguishable. Therefore, we need to rotate the weak eigenstates into mass eigenstates via the following transformations:

$$u_L \to S^u_L\, u_L\,, \qquad u_R \to S^u_R\, u_R\,, \qquad d_L \to S^d_L\, d_L\,, \qquad d_R \to S^d_R\, d_R\,, \tag{2.6}$$

where $S^d_L$, $S^d_R$, $S^u_L$ and $S^u_R$ are the $3 \times 3$ unitary matrices in flavor space that diagonalize the mass matrices as

$$M^{\rm diag}_u = S^{u\dagger}_L\, M_u\, S^u_R\,, \qquad M^{\rm diag}_d = S^{d\dagger}_L\, M_d\, S^d_R\,.$$

With these definitions, the CKM matrix V is given by

$$V = S^{u\dagger}_L S^d_L\,.$$

After these necessary field redefinitions, there are no flavor changing neutral currents at tree level in the SM, due to the unitarity of the transformation matrices, and mixing between generations only occurs in the charged quark current. When dimension-six operators are included in the Lagrangian, the effect of the matrices $S^q_{L,R}$ on them cannot be eliminated by unitarity. However, these rotations can be absorbed into the Wilson coefficients. As a first example, we consider the operator $Q_{\varphi d}$, which takes the form

$$C^{ij}_{\varphi d}\, Q^{ij}_{\varphi d} = C^{ij}_{\varphi d} \left( \varphi^\dagger i \overleftrightarrow{D}_\mu \varphi \right) \left( \bar{d}^i_R\, \gamma^\mu d^j_R \right);$$

we can indeed absorb $S^d_R$ into the overall coefficient:

$$\check{C}_{\varphi d} = S^{d\dagger}_R\, C_{\varphi d}\, S^d_R\,.$$

In contrast to the SM, it is not possible anymore to avoid the appearance of flavor changing neutral currents for all operators. Moreover, the redefinitions of the Wilson coefficients are, in general, not unique. Let us consider as a second example the operator $Q_{\varphi q}$, which contains the left-handed quark doublets. In this case we cannot absorb the rotations for the up quarks and for the down quarks at the same time, so we can choose to define

$$\widetilde{C}_{\varphi q} = S^{u\dagger}_L\, C_{\varphi q}\, S^u_L \qquad \text{or} \qquad \check{C}_{\varphi q} = S^{d\dagger}_L\, C_{\varphi q}\, S^d_L\,,$$

obtaining two equivalent expressions. For both definitions, the mass diagonalization leads to flavor changing neutral currents, either in the up sector for the coefficient denoted with the tilde (∼) or in the down sector for the one with the check (∨). The two notations are related through the identity

$$\widetilde{C}_{\varphi q} = V\, \check{C}_{\varphi q}\, V^\dagger\,.$$

All operators reported in table 1 must be analogously expressed in the mass basis. We report in appendix A the explicit expressions for the Wilson coefficients $\widetilde{C}$ and $\check{C}$.
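The flavor-basis rotations and the identity relating the tilde and check conventions can be verified numerically. The following NumPy sketch (our own illustration with random unitaries) checks that $\widetilde{C} = V \check{C} V^\dagger$ follows from $V = S^{u\dagger}_L S^d_L$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n=3):
    # QR decomposition of a random complex matrix gives a unitary matrix;
    # the phase fix makes the distribution Haar-like.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

SuL, SdL = random_unitary(), random_unitary()
V = SuL.conj().T @ SdL                      # CKM matrix V = S_L^{u dagger} S_L^d

C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # C_phiq, flavor basis
C_tilde = SuL.conj().T @ C @ SuL            # up-sector ("tilde") convention
C_check = SdL.conj().T @ C @ SdL            # down-sector ("check") convention

assert np.allclose(C_tilde, V @ C_check @ V.conj().T)  # the identity in the text
```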
$Q_{d\varphi}$ and $Q_{u\varphi}$

The operators $Q_{d\varphi}$ and $Q_{u\varphi}$ play a special role, as they contribute to the quark mass matrices after EW symmetry breaking. For example, the down-quark mass matrix receives two contributions, one from the SM Yukawa interactions and one from the operator $Q_{d\varphi}$:

$$M_d = \frac{v}{\sqrt{2}} \left( Y_d - \frac{v^2}{2\Lambda^2}\, C_{d\varphi} \right),$$

where $Y_d$ is the Yukawa matrix of the SM and v = 246 GeV is the vacuum expectation value of the SM Higgs field. For the coupling of the Higgs to the down-type quarks, defined by the Lagrangian term $-\frac{h}{\sqrt{2}}\, \bar{d}_L\, g_{hdd}\, d_R + \text{h.c.}$, the extra contribution is enhanced by a combinatorial factor of three compared to the contribution to the mass term:

$$g_{hdd} = \frac{\sqrt{2}}{v}\, M_d - \frac{v^2}{\Lambda^2}\, C_{d\varphi}\,. \tag{2.20}$$

Unlike in the pure dimension-four SM, the mass matrix and the quark-Higgs coupling cannot be diagonalized simultaneously: a flavor changing interaction between the SM Higgs and the quarks appears [51,67,68]. Indeed, the first term in eq. (2.20) is rendered diagonal by a field redefinition as in (2.6), where the new $U^d_{L,R}$ matrices, necessary to diagonalize the mass matrix in the presence of the $Q_{d\varphi}$ operator, differ from $S^d_{L,R}$ by terms of order $1/\Lambda^2$. The quark-Higgs coupling matrix is then given by

$$g_{hdd} = \frac{\sqrt{2}}{v}\, M^{\rm diag}_d - \frac{v^2}{\Lambda^2}\, \check{C}_{d\varphi}\,, \qquad \check{C}_{d\varphi} \equiv U^{d\dagger}_L\, C_{d\varphi}\, U^d_R\,.$$

Note that in this approximation all Wilson coefficients of the operators discussed above remain unchanged, since the extra rotation induced by the $Q_{d\varphi}$ operator would lead to a $1/\Lambda^4$ effect. Similar considerations apply to the operator $Q_{u\varphi}$.

Tree level matching

In this section we perform the tree-level matching of the gauge invariant dimension-six operators relevant for $b \to s$ and $b \to c$ transitions. This matching is performed at the EW scale onto the effective Hamiltonian governing B physics, which is defined below the EW scale. Therefore, the effective B physics Hamiltonian contains the SM fields without the W, the Z, the Higgs and the top quark, while these are dynamical fields of the gauge invariant dimension-six operator basis. As we will see, the B physics Hamiltonian contains operators with additional Lorentz structures compared to the ones relevant in the SM.

∆B = ∆S = 2

In this section we consider $B_s$-$\bar{B}_s$ mixing. Here, following the conventions of refs. [59,69], the effective Hamiltonian is given by

$$\mathcal{H}^{\Delta B = \Delta S = 2}_{\rm eff} = \sum_{i=1}^{5} C_i\, O_i + \sum_{i=1}^{3} C'_i\, O'_i\,, \tag{3.1}$$

with the operators defined as

$$O_1 = \left( \bar{s}^\alpha \gamma_\mu P_L b^\alpha \right) \left( \bar{s}^\beta \gamma^\mu P_L b^\beta \right), \qquad O_2 = \left( \bar{s}^\alpha P_L b^\alpha \right) \left( \bar{s}^\beta P_L b^\beta \right), \qquad O_3 = \left( \bar{s}^\alpha P_L b^\beta \right) \left( \bar{s}^\beta P_L b^\alpha \right),$$

$$O_4 = \left( \bar{s}^\alpha P_L b^\alpha \right) \left( \bar{s}^\beta P_R b^\beta \right), \qquad O_5 = \left( \bar{s}^\alpha P_L b^\beta \right) \left( \bar{s}^\beta P_R b^\alpha \right),$$

where α and β are color indices. The primed operators $O'_{1,2,3}$ are obtained from $O_{1,2,3}$ by interchanging $P_L$ with $P_R$. The contributions of the four-fermion operators to the Hamiltonian in eq. (3.1) are given in eq. (3.4), where $N_c$ denotes the number of colors. In addition, we include for completeness the effects of $Q_{d\varphi}$, even though they are formally suppressed by $1/\Lambda^4$, because the $1/\Lambda^2$ effect in the B physics Hamiltonian is suppressed by the $m_f/v$ coupling of the Higgs to the light fermions.3 Here we get a contribution proportional to the flavor changing Higgs coupling of eq. (2.20). Note that we do not include the analogous contributions from a modified Z coupling, since in that case the couplings to light fermions are not suppressed, and especially $b \to s\mu^+\mu^-$ processes will give relevant tree-level constraints at the $1/\Lambda^2$ level.

∆B = ∆C = 1

For the charged current process $b \to c\, \ell_i\, \bar{\nu}_j$ we write the effective Hamiltonian as

$$\mathcal{H}^{\Delta B = \Delta C = 1}_{\rm eff} = \frac{4 G_F}{\sqrt{2}}\, V_{cb} \sum_{ij} \left[ C^{ij}_V O^{ij}_V + C^{ij}_S O^{ij}_S + C^{ij}_T O^{ij}_T + C'^{ij}_V O'^{ij}_V + C'^{ij}_S O'^{ij}_S \right] + \text{h.c.}\,,$$

where the operators are

$$O^{ij}_V = \left( \bar{c}\, \gamma^\mu P_L\, b \right) \left( \bar{\ell}_i\, \gamma_\mu P_L\, \nu_j \right), \qquad O^{ij}_S = \left( \bar{c}\, P_L\, b \right) \left( \bar{\ell}_i\, P_L\, \nu_j \right), \qquad O^{ij}_T = \left( \bar{c}\, \sigma^{\mu\nu} P_L\, b \right) \left( \bar{\ell}_i\, \sigma_{\mu\nu} P_L\, \nu_j \right),$$

and the primed operators are obtained by interchanging $P_L \leftrightarrow P_R$ in the quark current.4 The four-fermion operators lead to a direct tree-level contribution to this effective Hamiltonian, where the summation over the lepton flavor index i = 1, 2, 3 is understood. The operators $Q_{\varphi ud}$ and $Q_{\varphi q}$ induce an anomalous u-d-W coupling, whose contribution to the $b \to c\,\ell\nu$ transition follows from the modified W vertex. The effect of such modified W couplings to quarks on the determination of $V_{cb}$ (and analogously of $V_{ub}$) has been discussed in refs. [70-80]. In principle, momentum dependent modifications of the W-c-b coupling can also lead to effects in $b \to c\,\ell\nu$ transitions, as examined in refs. [73,78] at the level of non-gauge invariant operators. However, these effects scale like $m_b v/(m^2_W \Lambda^2)$.
Furthermore, corrections to the Z-b-b couplings can also appear; these are stringently constrained, making the possible contributions tiny [79]. Therefore, we do not include these effects here.

∆B = ∆S = 1

We describe the $b \to s\ell^+\ell^-$ and $b \to s\gamma$ transitions via the effective Hamiltonian

$$\mathcal{H}^{\Delta B = \Delta S = 1}_{\rm eff} = -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V^*_{ts} \left[ \sum_i \left( C_i O_i + C'_i O'_i \right) + \sum_q \sum_j \left( C^q_j O^q_j + C'^q_j O'^q_j \right) \right],$$

where the index q runs over all light quarks q = u, d, c, s, b. The operators contributing in the first part include, besides the current-current operators $O_1$ and $O_2$, the magnetic and semileptonic operators

$$O_7 = \frac{e}{16\pi^2}\, m_b \left( \bar{s}\, \sigma^{\mu\nu} P_R\, b \right) F_{\mu\nu}\,, \qquad O_8 = \frac{g_s}{16\pi^2}\, m_b \left( \bar{s}\, \sigma^{\mu\nu} T^A P_R\, b \right) G^A_{\mu\nu}\,,$$

$$O_9 = \frac{e^2}{16\pi^2} \left( \bar{s}\, \gamma^\mu P_L\, b \right) \left( \bar{\ell}\, \gamma_\mu\, \ell \right), \qquad O_{10} = \frac{e^2}{16\pi^2} \left( \bar{s}\, \gamma^\mu P_L\, b \right) \left( \bar{\ell}\, \gamma_\mu \gamma_5\, \ell \right),$$

$$O_S = \frac{e^2}{16\pi^2} \left( \bar{s}\, P_R\, b \right) \left( \bar{\ell}\, \ell \right), \qquad O_P = \frac{e^2}{16\pi^2} \left( \bar{s}\, P_R\, b \right) \left( \bar{\ell}\, \gamma_5\, \ell \right),$$

while in the second part of the Hamiltonian we have four-quark operators with vectorial Lorentz structures, $O^q_{3-6}$, and four-quark operators with scalar and tensor Lorentz structures, $O^q_{15-20}$ (with the notation of [62]). The primed operators are obtained by interchanging everywhere $P_L \leftrightarrow P_R$. We recall that in the SM only the vector operators receive contributions, while for the scalar/tensor operators the matching contribution is zero. However, NP is expected to contribute to the Hamiltonian also via scalar/tensor operators. We also note that the operators in (3.16) are redundant, since $O_1$ and $O_2$ can be obtained from $O^q_{3-6}$, for q = c, via Fierz rearrangements. We will include all NP contributions in the definition of $C^q_{3-6}$, even though for q = c they could be absorbed in $C_1$ and $C_2$ as well. Interestingly, at the leading-logarithmic order only the operators $O^q_{15-20}$ mix into the magnetic and chromomagnetic operators $O_7$ and $O_8$. The vector operators, on the other hand, mix neither into the magnetic and chromomagnetic operators nor into the scalar-tensor four-quark operators. The scalar-tensor operators, however, do mix into the vector ones [62].

Four-fermion operators that involve two right-handed currents ($Q_{dd}$, $Q^{(1)}_{ud}$ and $Q^{(8)}_{ud}$) give a direct tree-level contribution to the effective Hamiltonian; through a Fierz rearrangement, the operator $Q^{1321}_{dd}$ (in the mass-basis notation of section 2) contributes as well. Operators with up-type quarks give analogous contributions. In the set (LL)(RR) in table 1, the operators with right-handed up-type quarks contribute directly, while for the same operator set with left-handed up-type quarks the results involve the rotated coefficients $\widetilde{C}$ and $\check{C}$ defined in section 2. The operators with four down-type quarks, as well as the vertices involving four left-handed down-type quarks, give further matching contributions. From the operators with two left-handed up-type quarks we obtain contributions that we express in terms of the shorthand symbols $\chi^q$ and $\Xi^q$.

The operators $Q_{\varphi q}$, $Q_{\varphi ud}$ and $Q_{\varphi d}$, involving modified Z and W couplings (in particular to right-handed fermions), contribute to the four-quark operators in eq. (3.10), where i = u, d, c, s, b, and $Q_i$ and $T^3_i$ denote the charge and third isospin component of quark i, respectively. Moreover, we introduced the short notation $\Sigma^i_{\varphi q}$ for the relevant combinations of the coefficients $\widetilde{C}_{\varphi q}$ and $\check{C}_{\varphi q}$.

The operators involving a vector current with left-handed quarks appear directly at tree level in the coefficients of $O_9$ and $O_{10}$ in eq. (3.17), where the indices i, j = 1, 2, 3 correspond to e, µ and τ. Similar contributions appear for the operators $O'_9$ and $O'_{10}$ from vector currents involving right-handed quarks. Scalar operators contribute to the coefficients of $O_P$ and $O_S$; here the Hermitian conjugate of the operator $Q^{ijmn}_{\ell edq}$ enters as $\widetilde{C}^{*\,ijmn}_{\ell edq}\, (\bar{e}^j_R\, \ell^i_L)(\bar{q}^n_L\, d^m_R)$. These results agree with those in [57] in the case of lepton flavor conservation.
The operators $Q_{dB}$ and $Q_{dW}$ also appear already at tree level in the effective Hamiltonian, through $O_7$ and $O'_7$ (eq. (3.69)). The operators $O_9$ and $O_{10}$, and similarly $O'_9$ and $O'_{10}$, receive a lepton flavor conserving tree-level contribution through the effective s-b-Z coupling appearing in the operators $Q_{\varphi d}$, $Q^{(1)}_{\varphi q}$ and $Q^{(3)}_{\varphi q}$. The operator $Q_{dG}$ contributes to the Wilson coefficients of $O_8$ and $O'_8$, where g and $g_s$ are the $SU(2)_L$ and $SU(3)_C$ coupling constants, respectively. Interestingly, as already noted in ref. [57], there is no matching contribution to tensor operators at the dimension-six level. The tree-level contribution to the four-quark scalar operators stemming from the operator $Q_{d\varphi}$ is given in eq. (3.77).

One-loop matching corrections

In this section we analyze the leading one-loop matching corrections to $b \to s$ transitions arising from the dimension-six operators in (1.1). Let us define what we mean by "leading" one-loop matching corrections. First of all, if one of the gauge invariant operators contributes already at tree level to $b \to s$ transitions, a calculation of loop effects is not necessary, since the corresponding Wilson coefficient is already stringently constrained; the loop contribution would only be a subleading effect. With this argument, one can already eliminate all operators that do not contain right-handed up-type quarks: left-handed up quarks always come with their $SU(2)_L$ down-quark partner, which then contributes to the Hamiltonian at tree level. Note that it is possible for an operator containing quark doublets to be flavor violating for up-type quarks but flavor conserving for down-type quarks (i.e., not contributing to $b \to s$ transitions due to an alignment in flavor space). However, we do not consider this possibility here and focus on operators with up-quark $SU(2)_L$ singlets. Therefore, we are left with the operators given in table 3. In the following, we identify six different classes of matching effects which can be numerically relevant and discuss each of them in a separate subsection. The gauge invariant dimension-six operators contribute to the operators of the B physics Hamiltonian as follows:

1. 4-fermion operators to 4-fermion operators (∆B = ∆S = 1).
2. 4-fermion operators to 4-fermion operators (∆B = ∆S = 2).
3. 4-fermion operators to $O_7$ and $O_8$.

We perform the matching of the operators in table 3 by integrating out the heavy degrees of freedom represented by the Higgs and the top quark, together with the W and Z bosons. The amplitudes are evaluated at vanishing external momenta, setting all lepton and quark masses to zero except for the top quark mass. To calculate the contribution to the magnetic operators $O_7$ and $O_8$, as well as the photon and gluon penguins, we expanded the amplitudes up to second order in the external momenta and small quark masses. In order to check our results we performed the calculation in a general $R_\xi$ gauge and explicitly verified the cancellation of the ξ-dependent parts in the final results. In several cases, the amplitudes have ultraviolet (UV) divergences. Such divergences signal the running and/or the mixing of different gauge invariant operators between the NP scale Λ and the EW scale. The divergences can be (and are) removed via renormalization, for which we choose the MS scheme. The residual finite terms constitute in these cases the matching result.
To indicate the exact origin of the logarithms, we use the notation $\log(m^2_t/\mu^2_W)$ for the one-loop contributions where only the top quark appears on the internal legs of the loop, while $\log(M^2_W/\mu^2_W)$ signals the presence of at least one W boson in the loop.

Contribution of 4-fermion operators to 4-fermion operators (∆B = ∆S = 1)

We start by reporting the matching contribution to the semileptonic operators $O_9$ and $O_{10}$ from four-fermion operators that couple up-type quarks and charged leptons: $Q_{\ell u}$ and $Q_{eu}$. Obviously, only a charged particle (i.e. the W and the charged Goldstone boson) can give a contribution to an operator with a $\bar{s}b$ current, which is only possible via a genuine vertex correction. Moreover, the result turns out to be proportional to $m^2_{u_j}$. Therefore, we include only the top-quark contribution, while u- and c-quark effects vanish in the massless limit. Calculating the diagram in figure 2a (and the analogous Goldstone contribution, unless one is working in unitary gauge) gives the corresponding matching contributions. The four-fermion operators involving only quark fields can also contribute to $C^{(\prime)}_9$ and $C^{(\prime)}_{10}$ through a closed top loop (figure 2c) to which an off-shell Z or photon is attached; in this case the contribution is evidently lepton flavor conserving. Furthermore, through a W-boson exchange (figure 2b), the operators under discussion give a one-loop matching contribution to ∆B = ∆S = 1 four-quark operators of the form of eq. (4.14), where $Q_i$ is the charge of the quark, and $T^3_i = 1/2$ for q = u, c and $T^3_i = −1/2$ for q = d, s, b. Four-fermion operators not containing the flavor violating current $\bar{s}b$ also contribute to the four-quark operators in (3.16); here we again used the rotated coefficients $\widetilde{C}$ and $\check{C}$ introduced in section 2.

Contribution of 4-fermion operators to 4-fermion operators (∆B = ∆S = 2)

The Hamiltonian for $B_s$-$\bar{B}_s$ mixing in eq. (3.1) gets a one-loop matching contribution through the graph in figure 2b (eq. (4.26)).

Contributions of 4-fermion operators to $O_7$ and $O_8$

Four-fermion operators with scalar currents contribute to the low energy Hamiltonian (3.16) through the diagram in figure 2d (eq. (4.28)), where $C_F = (N^2_c − 1)/(2N_c)$. Note that the contribution to $C_7$ or $C_8$ from four-fermion operators involving vector currents vanishes (excluding QCD corrections).

Contributions of right-handed Z couplings

The operator $Q_{\varphi u}$, involving only right-handed up-type quarks, gives through a Z-penguin (figure 3f) a matching contribution to the ∆B = ∆S = 1 Hamiltonian in eq. (3.16), where $I(x_t)$ has been defined in eq. (4.3). The possibility to probe the anomalous couplings of the Z boson to the top quark with rare meson decays was also studied in [81].

[Figure 3: One-loop diagrams originating from the operators $Q_{uB}$, $Q_{uW}$ and $Q_{uG}$. The red dots represent an operator insertion. For each of these diagrams a symmetric one must also be considered, with the effective operator in the W-t-b vertex. Box diagrams and self-energies on the external legs (not depicted here) must also be included.]

Contributions of right-handed W couplings to $O_7$ and $O_8$

The operator $Q_{\varphi ud}$ couples the W boson to right-handed quarks, which induces a non-zero contribution only to the magnetic terms $O_7$ and $O_8$; the corresponding $x_t$-functions are in agreement with [82,83]. For simplicity, let us first consider the coefficients $\widetilde{C}^{33}_{uW}$ and $\widetilde{C}^{33}_{uB}$, which generate an extra term for the top anomalous magnetic moment, resulting in a chirality-flipping vertex with the W boson.
We will later analyse the case when the vertices with the photon and the Z are flavor violating. Here we include only the contributions to four-quark operators arising from gluon-penguin diagrams, which are of O(α s ), and we neglect the subleading EW penguin diagrams, of O(α). We obtained the following contributions to the effective Hamiltonian in eq. (3.16): (4.42) where the explicit expressions for the x t -dependent functions are (4.55) We found that the expressions for the functions E i uW ,F i uW ,Y uW and Z uW are in agreement with the results reported in [82,83], while A uW , Z uB , E 7 uB and F 7 uB are new to the best of our knowledge. Note that the effect on the magnetic operators O 7 and O 8 is divergent while it is finite for the four-fermion operators. Moreover, all these effects scale like 1/Λ 2 and do not possess an additional suppression by 1/M 2 W . Now we turn our attention to the operators Q i3 uW and Q i3 uB , where i = 1, 2. 5 These operators lead to an anomalous W -t-d i coupling, plus two flavor-violating neutral currents (Z/γ)tc and (Z/γ)tu, so then in the diagram 3b one top quark propagator becomes q = u, c. However, we recall that this amplitude is non-zero only for the γ penguin, or the transition b → sγ -the effective coupling is proportional to σ µν q ν , where q is the momentum of the boson. Only the functions arising from a γ penguin will be modified in this case, i.e. the functions Z, E 7 , F 7 . Repeating the calculations performed for r C 33 uB and r C 33 uW we obtain the following results for the matching: uB V ib V * ts )/2 (the summation over i = 1, 2 is implied). The new functions introduced above are: (4.64) The operator Q 33 uG gives a chromo-magnetic coupling with the top quark, that contributes at one-loop to O 8 and O 4 through the gluon-penguin diagrams in figure 3b,3f. The explicit matching contributions are where A uG = Z uB , E 8 uG = E 7 uB and F 8 uG = F 7 uB . Moreover, the operators Q i3 uG lead to a flavor violating neutral current involving a gluon and up-type quarks, whose effects in the effective Hamiltonian are , (4.67) where A uG = Z uB and E 8 uG = E 7 uB . Phenomenological example As an example of phenomenological applications of the matching conditions reported in sections 3 and 4, we will consider the operator r Q 33 ϕud that gives rise to a one-loop contribution to C 7 and C 8 (see eqs.(4.35) and (4.37)). We can employ the inclusive B → X s γ branching ratio to constrain the the Wilson coefficient r C 33 ϕud . Let us denote the Wilson coefficients in (3.16) as C i (µ) = C SM i (µ) + ∆C i (µ), where ∆C i (µ) are possible non-SM terms. The calculation of the contribution to the decay B → X s γ proceeds precisely as in the SM case: • The evolution of the Wilson coefficients in (3.16), from the mass scale µ = µ W down to µ = µ b , where µ b is of the order or m b , by solving the appropriate RGE. • The evaluation of the corrections to the matrix elements sγ| O i (µ) |b at the scale µ = µ b , and the subsequent shift induced in the branching ratio B(B → X s γ). For the purpose of this example we assume r C 33 ϕud to be real and we neglect the imaginary part of V tb and V ts . Identifying the non-SM terms ∆C 7,8 with the results in eqs. (4.35) and (4.37), and taking into account the current world average [10] B exp (B → X s γ) = (3.43 ± 0.21 ± 0.07) × 10 −4 , we can find the current 95% C.L. 
bounds given in eq. (5.3) (not reproduced here). This quite strong bound arises from a relative enhancement m_t/m_b compared to the SM case: the SM chiral suppression factor m_b/M_W is replaced by the factor m_t/M_W [87]. It is interesting to compare (5.3) with searches for the W tb vertex structure at the LHC. The 8 TeV data on the single top quark production cross section and the measurements of the W-boson helicity fractions allowed the authors of [88] to set a bound on $\tilde{C}^{33}_{\phi ud}\,(v^2/\Lambda^2)$ at the level of 10^-1. Also, ATLAS searches for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel show similar sensitivities [89]. Conclusions In this article, we calculated (at the EW scale) the matching of the gauge invariant dimension-six operators onto the B physics Hamiltonian (including lepton flavor violating operators), integrating out the top, W, Z and the Higgs. After performing the EW symmetry breaking and diagonalizing the mass matrices, we first presented the complete tree-level matching coefficients for b → s and b → c transitions. Operators involving top quarks do not contribute to b → s processes at the tree level, as the top is not a dynamical degree of freedom of the B physics Hamiltonian. Therefore, we identified all operators involving right-handed top quarks which can give numerically important contributions at the one-loop level, grouped into the six classes of matching effects analyzed in section 4: contributions of 4-fermion operators to 4-fermion operators (∆B = ∆S = 1 and ∆B = ∆S = 2) and to O 7 and O 8, as well as the contributions of right-handed Z couplings, right-handed W couplings, and anomalous magnetic top couplings. Once the necessary running between the EW scale and the B meson scale is performed, our results can be used systematically to test the sensitivity of B physics observables to the dimension-six operators. Appendix. Here we explicitly relate the Wilson coefficients of the gauge invariant operators in the interaction basis to the mass basis. This translation is necessary if the results obtained in this article are to be related to a UV complete model, where the interaction basis is specified. For the notation, we refer the reader to the original paper in ref. [7]. [The appendix tables listing each operator and its definition in the mass basis, including Table 6 (four-fermion operators with two quarks and two leptons), are not reproduced here.]
2018-11-09T09:32:20.000Z
2015-12-09T00:00:00.000
{ "year": 2016, "sha1": "ce065c9e531e1458e0f97064dd4f1829801348a9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP05(2016)037.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "ce065c9e531e1458e0f97064dd4f1829801348a9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
151392094
pes2o/s2orc
v3-fos-license
The Relationship between Flexible Working Arrangements and Quality of Work Life among Academicians in a Selected Public Institution of Higher Learning in Kuching, Sarawak, Malaysia This study aims to determine the relationship between working arrangements and quality of work life (QWL) among academicians in a selected public institution of higher learning in Kuching, Sarawak. A survey methodology was used: a questionnaire was administered to one hundred and fifty-one (151) academicians currently working in a selected public institution in Kuching, Sarawak. The relationship between flexible working arrangements and quality of work life was analyzed using Pearson's correlation test. The results of this study revealed significant relationships of both long working arrangements and flexible working arrangements with QWL. Hence, if organizations are concerned about developing their human resources and gaining a competitive advantage in the marketplace, they must attend to one of their most precious assets, namely their people, by practicing flexible working arrangements. INTRODUCTION In recent years, the phrase "quality of life" has been used with increasing frequency to describe certain environmental and humanistic values neglected by industrial productivity and economic growth. Many current organizational experiments seek to improve both productivity for the organization and the quality of working life for its members. Quality of Work Life (QWL) is the existence of a certain set of organizational conditions or practices. This definition holds that a high quality of work life exists when democratic management practices are used, employees' jobs are enriched, employees are treated with dignity, and safe working conditions exist. QWL refers to the level of satisfaction, motivation, involvement and commitment individuals experience with respect to their lives at work (Geet, Deshpande & Asmita, 2009).
Public institutions of Higher Learning in Malaysia play an important role in economic and social development of Malaysia.In order to fulfill this role successfully, they need to attract and retain high quality staff.They also need to provide a supportive working environment to enable their staff to conduct high quality teaching and research (Siti Aisyah, Azizah, Roziana, Ishak, Hamidah & Siti Khadijah, 2012).Nowdays, flexible working arrangement has become a significant issue for workers in order to have a good steadiness between both work and non-work events (Kattenbach, Demerouti, & Nachreiner, 2010).The concept of flexible working hours refers to the provision towards workers in controlling their hours of working instead of their working schedule (Atkinson & Hall, 2013).Prior study has found that male workers took the opportunity through having a flexible working arrangement by developing the engagement skills towards the organization that they work in, while on the other hand female workers used the chance to improve the balance between work and home life (Shagvaliyeva & Yazdanifard, 2014).Also, flexible working hours promote and facilitate work-life balance.Reduced stress and increased employee wellbeing are outcomes of the work-life balance.In Malaysia, the Minister of Women, Family and Community Development, Dato' Sri Rohani Abdul Karim has promoted the implementation of Flexible Working Arrangement (FWA) in the Ministry of Women, Family & Community for the purpose of producing 85% of high successful employees (Bong, 2015). Previous research on QWL were prioritized more towards the specific measures, such as, enticing skills, occupation safety, as well as rewards along with welfare; which then, the priority progressively changes to the contentment of an occupation together with involvement (Gayathiri & Ramakrishnan, 2013).However, researchers these days believed that there are still other varieties of measures that are significant and beyond than what have been mentioned that literally influence the grasp towards QWL (Kaighobadi, Esteghlal & Serajoddin, 2014). Besides that, prior studies have shown that academicians are being pressured due increasing workload but at the same time being discouraged by lack of encouragement in doing research (Siti Aishah et al., 2012).Flexible working practices are beneficial for both employee and employer.Flexibility definitely contributes to improvement in allocation of work and life responsibilities.Thus, employee might end up easily fulfilling his/her working tasks as well non-working roles (Shagvaliyeva & Yazdanifard, 2014). Hence, this study focuses on determining the relationship between working arrangement (flexible working hours) and QWL of academicians in a selected Public institutions of Higher Learning in Kuching, Sarawak. BACKGROUND OF THE STUDY Working arrangement gives a huge effect towards the QWL of workers.As what Lu (2011) had claimed in his studies, it has already being identified that working arrangements influences a person along with his family life presentations.This finding is similar with that of a study by Ahmad, Idris & Hashim (2013) indicating that the implementation of suitable working hours schedule could increase and balance one's responsibilities in work and family.On the other hand, working arrangement influences the occupation in terms of health (Uchida, Kaneko, & Kawa, 2014).In this study, two types of working arrangement, long working hours and flexible working hours were investigated. 
Long Working Arrangement In the reality of current work life, most of the individuals are actually working overtime for the reason of the increment of amount of work, security of career, job performing stress, as well as for the maintenance of budget to live (Ahmad et al., 2013).Fiksenbaum, Jeng, Koyuncu, & Burke (2010) claimed that working for long hours might lead to bad impacts within the workers themselves, relatives, companies, as well as, the people.Besides that, it is believed that there is a relationship between long working arrangement and the implication towards the healthiness of an employee (Uchida et al., 2014).According to Sabil and Marican (2011), long working arrangement is usually being related to pressure, tiredness, sleeplessness, also serious health conditions, for instance, body aches, coronary heart disease along with the increment of safety issues.However, on the other hand, Fiksenbaum et al. (2010) claimed of a prove that long working arrangement could also be prospering, whereby the respondents of theirs' who consists of top level managers felt very contented towards their occupation due to the incoming benefits, significance as well as the difficulties produce at their state of position.On top of that, Rudolf (2013) found that the Americans are pleased with long working arrangements as they assumed that being a hardworking employee may lead to favourable outcomes. Flexible Working Arrangement These days, flexible working arrangement has become a significant issue towards workers in order to have a good steadiness between both work and non-work events (Kattenbach et al., 2010).The concept of flexible working hours refers to the provi-sion towards workers in controlling their hours of working instead of their working schedule (Atkinson & Hall, 2013). There has been a prior study stating that male workers took the opportunity through having a flexible working arrangement by developing the engagement skills towards the organization that they work in, while on the other hand female workers used the chance to improve the balance between work and home life (Shagvaliyeva & Yazdanifard, 2014).Another prior study also claimed that flexible working arrangement is preferred within female employees in order to manage both job as well as family obligations (Lewis & Humbert, 2010).In addition, one of the constructive aspects of flexible working arrangement includes job values producing motivations as intuitive indicators meant for the workers towards evaluating their QWL (Yeo & Li, 2013). Quality of Work Life (QWL) According to Nair (2013), it is said that QWL can be understood as the bond value among both workers and the whole environment of working.QWL involves the belief of intermodal relative that is impacted by period setting, along with one's own and community principles which relies on the opinion of such individual along with his life (Behzad, Arezo, & Mohammadi, 2014). 
On top of that, the workers along with their managers will mutually receive the advantages of initiating the practice of QWL and to the fact that the workers perceive it as feeling secured, contented, as well as having the competency in growing and developing through the attendance of the practice of QWL (Adhikari & Gautam, 2010).According to Gayathiri and Ramakrishnan (2013), employers as well as scholars similarly had visualize on how to improve the QWL.This is due to the fact that QWL gives a massive impact towards mostly everything for the sake of achiev-ing the vision, mission as well as goals in an organization.In addition, Gayathiri and Ramakrishnan (2013) claimed that from the viewpoint of a group of scholars, it is the necessity of organizations to attain great performance along with the development in productivity. Perspectives of Elements in QWL According to Gayathiri and Ramakrishnan (2013), a group of scholars had made an effort in discovering the types of elements that they conclude came about as a result of several of their viewpoints.The reason is because different scholars have their own different perspectives.Not only that, Nair (2013) said that QWL involves a class of principles which comprise the elements that are based on jobs, for instance contentment in work, wages, together with the bond between co-workers; as well as the elements that widely indicates the state of life and universal emotions towards wellbeing. Furthermore, QWL influences workers' actions on the job regarding aspects, such as, company recognition, meaningful job, work engagement, work endeavor, work outcomes, desire in resigning, turnover of company as well as isolation of individual (Sinha, 2012). Jayakumar and Kalaiselvi (2012) claimed that the term QWL is known for concepts such as, job dedication, work inspiration as well as outputs from work.QWL consists of eight main theoretical components which are receiving rational reimbursement, achieving secured job environment, obtaining instant chances of ongoing progress along with safety, chances in utilizing together with growing people's skills, accomplishing employees' relations within company, having constitutionality within company, attaining work-life balance, and being pertinence to colleagues.Each scholar described QWL differently that leads to several counterparts, for instance job standard, work description purposes, workers' welfare, workers relations' quality, job condition, as well as the poise concerning work necessity as well as having the capability of making choices (Kaighobadi et al., 2014).Susan and Jayan (2013) conceptualized QWL's elements as work safety, improved compensation structure, increment of salary and wages, chances in developing, involvement of employees together with the increment of profitability. 
Other than that, Gayathiri and Ramakrishnan (2013) recognized several QWL measurements which are: wage and salary, work pressure, company well-being programs, flexible working arrangement, involvement in job management and controls, receiving gratitude, relation of employer-employee, injustice practices, capitals sufficiency, superiority along with advancement merit and perpetual employment.The factors of QWL can be categorized into several parts which are: (a) the increment of employees' engagement, contribution as well as authority, (b) the increment of focus towards workers ability growth, (c) the increment of selfindependence within employees in taking actions along with creating choices also (d) the decrement of position discriminations within the organization (Battu & Chakravarthy, 2014). In addition, Tabassum, Rahman and Kursia (2011) said that QWL covers reimbursement method, job surroundings, job arrangement, safety and health matter, monetary as well as non-monetary welfare including the organization comportment so as to approach the workers. In this study, four domains of QWL which consist of work life, work design, work context, and work world are investigated.Almalki, Fitzgerald and Clark (2012) claimed that the domain of work life/home life can be understood in terms of the border concerning the life of both at workplace and at home.In other words, it can be seen as a domain that touches towards the aspect of work life balance. On the other hand, another domain known as work design can be understood as the arrangement of job along with the realistic description about the job (Almalki et al., 2012).This domain generally views about matters, such as, workloads, time adequacy to conduct such tasks as well as quality.Besides that, Almalki et al. (2012) also stated another domain identified as work context that involves situation of job conducting whereby one works while exploring the job surroundings' influence towards such structures of several parties.Elements such as the achievements in getting recognitions, feedbacks, communication skills, relationship with colleagues and job opportunities are being emphasized within this domain. The forth domain is work world that can be comprehended by the means of the impacts within the influences of wide communal together with the alterations of job practices (Almalki et al., 2012). The Organizational Commitment Theory The organizational commitment theory comprises of three components which are identified as Affective Commitment (AC), Normative Commitment (NC), and Continuance Commitment (CC).However, this research only focuses on the AC components.According to Afsar (2014), AC is understood as "an emotional attachment, identification, and involvement that an employee has with its organization where he is happy to be a member of that organization" (p.128).Indeed, studies show that individuals who have a high level of affective organizational commitment are less likely to quit their jobs, have a lower rate of absenteeism, have a stronger desire to achieve the organization's goals, adopt organizational citizenship behaviors, uphold the organization's values, and ultimately perform better (Brunelle, 2013, p.57).Thus, this theory can also be related towards one's QWL in terms of the relation-ship between working arrangements and QWL. 
Relationship between Working Arrangements Dimensions and QWL A study by Lu (2011) found a significant relationship between working hours and occupational stress, which is also part of the QWL context, since it focused on long working arrangements. Rudolf (2013) conducted research on the effect of working arrangements on individual and family quality of work and domestic life, and found that a reduction in working hours had no significant impact on overall job and life satisfaction. PROBLEM STATEMENT Prior studies show that little research has focused specifically on working arrangements. Previous research on QWL concentrated on specific measures, such as enticing skills, occupational safety, and rewards and welfare, with the priority progressively shifting to job contentment and involvement (Gayathiri & Ramakrishnan, 2013). However, researchers now believe that other measures, beyond those already mentioned, significantly influence QWL. "Given the diversity in perspectives, two questions remain: what constitutes a high quality of work life? How its impact be measured?" (Kaighobadi et al., 2014, p.220). Besides that, prior studies have stated that academicians' work pressure is increasing due to rising workloads, while encouragement is decreasing, especially for tasks linked with research (Panatik et al., 2012). There has been a dearth of prior studies on flexible working arrangements influencing work-life balance, let alone QWL (Shagvaliyeva & Yazdanifard, 2014). In addition, limited studies have been conducted in this area, especially among academic staff in public institutions in Malaysia and in other Asian countries (Farid, Izadi, Ismail & Alipour, 2014). Hence, this study focuses on QWL among academicians in a selected Public Institution of Higher Learning in Kuching, Sarawak, Malaysia. OBJECTIVES Specifically, the objectives of this study are: i. to determine the relationship between long working arrangement and QWL among academicians in a selected public university; ii. to determine the relationship between flexible working arrangement and QWL among academicians in a selected public university. METHODOLOGY Research Design This study employed the survey methodology to collect data on the dependent variable, QWL, and on the independent variables, working arrangement (long working arrangement and flexible working arrangement). Population and Sample This study was conducted in a selected Public Institution of Higher Learning in Kuching, Sarawak. The population of this study consisted of all 350 academicians. A formula devised by Krejcie & Morgan (1970) was used to determine the minimum required sample size for the survey and to guide the sampling process. Based on the calculation using this formula, the minimum required sample size for the survey was 120 persons, approximately 34 percent of the total population. A slightly larger sample of 151 respondents was identified and selected to account for non-returned questionnaires and missing responses during data collection.
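For readers who want to reproduce the sample-size step, a minimal sketch of the Krejcie and Morgan (1970) formula is shown below. The confidence and precision parameters (chi-square = 3.841, P = 0.5, d = 0.05) are the conventional defaults, not values stated in this paper; with those defaults the formula returns about 184 for N = 350, so the reported minimum of 120 presumably reflects a wider margin of error (roughly 7%).

import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    # Minimum sample size for a finite population (Krejcie & Morgan, 1970):
    # chi2 = chi-square for 1 df at the desired confidence (3.841 ~ 95%),
    # P = population proportion (0.5 maximizes the sample), d = margin of error.
    num = chi2 * N * P * (1 - P)
    den = d ** 2 * (N - 1) + chi2 * P * (1 - P)
    return math.ceil(num / den)

print(krejcie_morgan(350))           # 184 with the conventional defaults
print(krejcie_morgan(350, d=0.073))  # 120, close to the minimum reported here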
Research Instrument The long working arrangement questionnaire was adapted from Robbins (1999), originally known as the Work Addiction Risk Test (WART), and consists of 12 items (Aziz, Uhrich, Wuensch, & Swords, 2013). The flexible working arrangement questionnaire was adapted from Churchill (1979) and involves 7 items (Ahmad et al., 2013). The QWL questionnaire was adapted from Brooks (2004) and includes four domains: Work Life, Work Design, Work Context and Work World. In this study, the questionnaire employed a six-point Likert-type scale, as follows: 1 = Strongly Disagree, 2 = Moderately Disagree, 3 = Disagree, 4 = Agree, 5 = Moderately Agree and 6 = Strongly Agree. A pilot test was conducted to determine the reliability of the instrument, and the results are shown in Table 1. CONCLUSION AND RECOMMENDATIONS This study was conducted to determine the relationship of long working arrangement and flexible working arrangement with QWL among academicians in a selected public university in Kuching. The results reveal a significant relationship between flexible working arrangement and the QWL of academicians. Salehi, Mohd Rasdi, and Ahmad (2014) noted that academicians need to perform multiple major roles and responsibilities, such as teaching and learning, carrying out research, publishing, administering, supervising postgraduates, and contributing professional services, including providing consultancy for industry and becoming policy makers. As a result, academic staff face a high level of work-related stress (Ogbonna & Harris, 2004). Those in dual-career families have to juggle their jobs and family needs. This situation reflects the hardship of academics' work, as they also need to adapt to a working environment with high key performance indices (Salehi et al., 2014). Thus, flexible working hours promote and facilitate work-life balance, and reduced stress and increased employee wellbeing are outcomes of that balance (Shagvaliyeva & Yazdanifard, 2014). In addition, this research provides useful information for human resource practitioners. Indirectly, this study provides information that can be used to reduce job stress among academicians in public institutions of higher learning. To improve both the QWL and the productivity of academicians, human resource practitioners need to develop new initiatives and alternatives to diminish issues such as job stress and fatigue that may inhibit the organization's favourable outcomes. Future researchers could extend this research by increasing the sample size through the inclusion of other Malaysian public universities. Mixed methods designs (embedded, triangulation, explanatory, and exploratory) should also be considered to explore the concept of QWL and other types of working arrangements in more depth. A comparison of FWA and QWL between public and private institutions is also recommended. Furthermore, future researchers are encouraged to confirm the fit of the model by using structural equation modelling.
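As a companion to the analysis described above, a minimal sketch of the Pearson correlation test used in this study is shown below; the arrays are hypothetical illustrative scale scores, not the study's data, and scipy is assumed to be available.

from scipy.stats import pearsonr

# Hypothetical mean scale scores for a handful of respondents (not the study's data):
flexible_working = [4.2, 3.8, 5.1, 2.9, 4.7, 3.5, 4.0, 5.3]
qwl              = [4.0, 3.5, 4.9, 3.1, 4.4, 3.2, 3.9, 5.0]

r, p = pearsonr(flexible_working, qwl)
print(f"r = {r:.2f}, p = {p:.4f}")  # a significant positive r would mirror the reported finding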
2018-12-11T03:41:13.539Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "a1b1bc6bd15aeaa8447c640020f3b2bc4421f662", "oa_license": "CCBYNCSA", "oa_url": "http://publisher.unimas.my/ojs/index.php/JCSHD/article/download/197/168", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a1b1bc6bd15aeaa8447c640020f3b2bc4421f662", "s2fieldsofstudy": [ "Business", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
231720672
pes2o/s2orc
v3-fos-license
Cardiotoxicity: A Major Setback in Childhood Leukemia Treatment Ongoing research in the field of pediatric oncology has led to an increased number of childhood cancer survivors reaching adulthood. Therefore, ensuring a good quality of life for these patients has become a rising priority. Considering this, the following review focuses on summarizing the most recent research in anthracycline-induced cardiac toxicity in children treated for leukemia. For pediatric cancers, anthracyclines are one of the most used anticancer drugs, with over half of the childhood cancer survivors believed to have been exposed to them. Anthracyclines cause irreversible cardiomyocyte loss, leading to chronic, progressive heart failure. The risk of developing cardiotoxicity has been known to increase with the treatment-free interval and total cumulative dose. However, because of individual variations in anthracycline metabolism, it has recently been shown that there is no risk-free dose. Moreover, studies have shown that diagnosing anthracycline-induced cardiomyopathy in the symptomatic phase is associated with poor treatment response and prognosis. Thus, early and systematic evaluation of these patients is crucial to allow optimal therapeutic intervention. Although currently echocardiographic assessment of left ventricle ejection fraction and cardiac biomarker evaluation are being used for cardiac function monitoring in oncologic patients, there is no established follow-up and treatment protocol for these patients, and these methods are neither specific nor sensitive for identifying early cardiac dysfunction. All things considered, the need for ongoing research in the field of pediatric cardiooncology is crucial to offer these patients a chance at a good quality of life as adults. Introduction Recent discoveries in the field of pediatric oncology have significantly improved 5-year survival rates, from 50% in the 1970s to 80% nowadays [1][2][3][4]. On the other hand, the incidence of pediatric cancers is slowly increasing [5], most noticeable for leukemia, cancer being still one of the main causes of death by illness in childhood and adolescence [1][2][3]. Hematopoietic malignancies are the most common cancers in children, accounting for up to 31% of all malignancies that occur in children younger than 15 years of age [1,3,6]. Leukemias are more common than lymphomas; the most common is acute lymphoblastic leukemia (ALL), representing up to 25% of all childhood cancers in children under 15 years old [7]. The most important prognostic factor is the correct choice of treatment based on specific group stratification. Risk assessment takes into account many factors including leukemia subtype, age and white blood cell count at diagnosis, and also response rate to the induction treatment [7,8]. Chemotherapy is the main treatment method used in leukemia and consists of an association of several cytotoxic agents, showing an increased efficiency of up to 85% in inducing remission [3,6]. However, efficient, oncological treatments are often aggressive, with multiple side effects that can also occur years after treatment has ended. Considering that more survivors of childhood cancer reach adulthood, special attention has been given to the quality of life of these patients, as well as to the late-onset complications of the antineoplastic treatment [2,3,6]. Better knowledge and understanding of these side effects are needed to amend or even prevent some of them in the future. 
Cardiovascular complications are one of the main causes of morbidity and mortality in survivors of childhood cancer [9,10]. Anthracyclines (AC) represent one of the most effective chemotherapeutic agents currently used, being simultaneously the most well known for their effects on the cardiovascular system [11]. The Childhood Cancer Survivor Study (CCSS) has shown that the risk of death due to cardiovascular disease is eight times higher in survivors of an AC-treated neoplasm as compared to the general population [12,13]. Considering the unfavorable prognosis of AC-induced cardiomyopathy [14], early identification of patients at risk by means of optimal cardiac function monitoring is essential both for the cardiologist and the oncologist, allowing timely implementation of personalized treatment regimens and possibly even prevention of cardiac dysfunction. Chemotherapy-Induced Cardiotoxicity 2.1. General Toxicity. As stated, chemotherapy is the main method used for the treatment of pediatric leukemia. Although an effective treatment, one of its major drawbacks is the increased toxicity of the drugs being used, which sometimes counterbalances their therapeutic benefit [9][10][11]15]. Anticancer drugs have general toxicity, explained by their action on cells with a high division rate, such as intestinal epithelia and hematopoietic cells. Thus, the most common side effects are bone marrow failure, digestive disorders (nausea, vomiting, and diarrhoea), and alopecia. These consequences cannot be avoided but in most cases resolve spontaneously when stopping the treatment. Specific toxicity is determined by the pharmacodynamics and pharmacokinetic particularities of each agent used. In order to determine the life quality of cancer survivors, CCSS has monitored cancer treatment side effects on 14,357 survivors of pediatric malignancies treated between 1970 and 1986, with at least 5 treatment-free years at the moment of enrolment in the study. Analyzing the data, it has been found that survivors of childhood cancer have an eight times higher risk of developing chronic diseases as compared to their brothers or sisters. Also, more than a third will eventually develop a severe, potentially fatal condition [13]. In addition to the development of secondary malignancy, the most com-mon side effects associated with the use of chemotherapies are cardiovascular disease, respiratory dysfunction, renal failure, infertility, psychosomatic development delay, and allergic reactions [13,15,16]. Cardiac Toxicity. The heart is a tissue with reduced regenerative capacity, so any extensive injury will cause irreversible damage. Although recent research has led to the development of even more effective antineoplastic agents, their effects on the myocardial tissue have not disappeared. Cardiovascular side effects caused by chemotherapy are various, including arrhythmias and conduction disorders, heart failure (HF), acute coronary syndromes, myocarditis, and pericarditis. The most commonly encountered side effect is the alteration of left ventricular (LV) contractility, with the consequent decrease of its ejection fraction (LVEF). In a simplified manner, postchemotherapy cardiotoxicity has been divided into two types: type I: caused by cardiomyocyte death, irreversible (most commonly associated with AC treatment), and type II: caused by myocardial dysfunction, frequently reversible (most commonly associated with Trastuzumab use) [17]. 
AC-induced cardiac dysfunction can also be divided into clinical and subclinical disease, by taking into account the presence or absence of clinical manifestations of congestive HF. In terms of subclinical changes, multiple definitions have been proposed, a widely accepted one being an alteration of the systolic function objectified by echocardiographic measurements or radionuclide angiography. Concerning the echocardiographic criteria, systolic dysfunction is considered to be present when LVEF is reduced by 10% in asymptomatic patients or by 5% in symptomatic patients, or when LVEF decreases below 50% [18]. Anthracycline-Induced Cardiac Toxicity. It is estimated that there are currently over 363,000 survivors of childhood cancer, with 60% of them believed to have been exposed to AC [19]. 2.3.1. Anthracyclines: The Mechanism of Toxicity. AC are a class of anticancer drugs derived from the bacterium Streptomyces. They act at the nuclear level by DNA intercalation, topoisomerase 2β (TOP2β) inhibition, and production of reactive oxygen species (ROS), eventually triggering the pathways of cellular apoptosis [14,20,21]. Of all the classes of anticancer drugs used in the treatment of pediatric leukemia, AC are the best known for their toxic effects on cardiac tissue [14,18,19]. They are effective antimitotics against many types of cancer, doxorubicin (DOX) being the most potent agent in this class, with the largest action spectrum. It is commonly used in oncology for both solid tumors and hematopoietic malignancies. However, the proven cardiac side effects of both DOX and daunorubicin limit their use [22]. Newer AC molecules such as epirubicin and idarubicin, and the structurally related molecule mitoxantrone, have been proposed as less cardiotoxic variants of DOX. However, over the years, all types of AC have been shown to cause AC-induced cardiac toxicity [23]. The molecular mechanism of AC-induced cardiotoxicity (Figure 1) is complex and incompletely understood: cardiac toxicity is believed to be caused partly by the production of ROS and partly by the production of alcohol metabolites that accumulate in the myocytes [20]. Considering DOX, for example, the one-electron reduction of the quinone group leads to the formation of a semiquinone radical, which reduces molecular oxygen to the superoxide anion and hydrogen peroxide, both ROS. In this way, DOX causes oxidative stress and energy depletion at the cellular level, while also activating apoptotic pathways. Consequently, AC induce irreversible cardiomyocyte loss. The second proposed mechanism, which explains the chronic, ongoing damage suffered by the myocardium, involves the conversion of AC to alcohol metabolites. These do not have the same oxidative potential as ROS but cause disturbances in cellular calcium (Ca) and iron (Fe) homeostasis, thereby affecting contractile function. Also, being polar compounds, the alcohols accumulate, which explains why cardiotoxicity risk increases proportionally to the total administered dose of AC [20,21,24]. Recent studies propose that TOP2β is involved in the development of increased oxidative stress following DOX treatment. AC bind to both TOP2α, which is overexpressed in cancerous cells, and TOP2β, which is expressed in adult mammalian cardiomyocytes. Studies showed that TOP2β cardiomyocyte knockout mice presented less impairment in cardiomyocyte function, while wild-type mice exhibited significant abnormalities in the p53 tumor suppressor gene, β-adrenergic signaling, and apoptotic pathways [25].
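The echocardiographic criteria quoted at the start of this subsection [18] lend themselves to a simple decision rule; the sketch below is a minimal illustration of those criteria, not a clinical tool.

def is_cardiotoxic(baseline_lvef, current_lvef, symptomatic):
    # Systolic dysfunction per the cited criteria [18]: LVEF fall of >= 10
    # percentage points in asymptomatic patients, >= 5 in symptomatic ones,
    # or an absolute LVEF below 50%.
    drop = baseline_lvef - current_lvef
    threshold = 5.0 if symptomatic else 10.0
    return drop >= threshold or current_lvef < 50.0

print(is_cardiotoxic(62, 50, symptomatic=False))  # True: a 12-point fall
print(is_cardiotoxic(60, 56, symptomatic=True))   # False: 4-point fall, LVEF still >= 50%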
The better the mechanisms of cardiotoxicity are understood, the easier it becomes to develop new cardioprotective treatment strategies while preserving the desired oncologic efficacy. Risk Factors for the Development of Anthracycline-Induced Cardiotoxicity. The incidence of cardiotoxicity after AC treatment is influenced by multiple factors, among the most important being the type of chemotherapy, the total administered dose, and age at onset of therapy [26]. As stated, AC are among the antineoplastic medications most frequently associated with long-term cardiac side effects following chemotherapy, the risk increasing proportionally to the total cumulative dose. At a total dose of less than 300 mg/m², the risk of developing cardiotoxicity is considered to be 5%, increasing to 20% when the total dose exceeds 300 mg/m² and to more than 35% at doses higher than 600 mg/m² [27]. In the pediatric population, young age at diagnosis has been associated with an increased risk of subsequent cardiac damage. A study by Armstrong and Ross showed that childhood cancer survivors had a twelve times higher risk of developing congestive HF within 3 years of AC treatment [28]. Another study showed that the incidence of AC-induced cardiac toxicity is as high as 30% among adult survivors of childhood cancer [29]. Other risk factors for AC-induced cardiac toxicity are preexisting cardiovascular risk factors such as diabetes, arterial hypertension, obesity, lung disease, or thyroid disease [30]. This is why, in the adult population, cardiotoxicity following AC treatment increases with age, as the elderly already present an increased prevalence of the above-mentioned additional cardiac risk factors. Clinical Manifestations: Prognosis. Cardiovascular complications caused by AC can be acute, chronic with early onset, or chronic with late onset, depending on the time frame and reversibility of cardiac damage [9]. Acute toxicity occurs rarely during treatment, with an incidence lower than 1%; it is dose-independent and most often resolves shortly after treatment ends [31]. It may have various manifestations: myocarditis, pericarditis, and endocarditis. Acute HF during treatment is a rare but extremely serious side effect, as it requires immediate treatment termination [32]. Arrhythmias and hypotensive episodes are acute manifestations that occur more often during treatment but do not always require cessation of chemotherapy [9]. Chronic heart disease is a more common side effect of AC treatment. Depending on the onset of symptoms, cardiac damage may be subdivided into early-onset cardiotoxicity, when symptoms occur within 1 year of finishing treatment, and late-onset cardiotoxicity, when symptoms occur more than 1 year after finishing chemotherapy. The risk of developing cardiac toxicity increases proportionally to the treatment-free interval [33,34]. Chronic cardiotoxicity manifests as a decrease in cardiac function leading to CHF. Unlike acute complications, chronic impairment is in most cases progressive [9,10]. This toxicity has been shown to be dose-dependent and cumulative: initially, diastolic dysfunction occurs at a cumulative doxorubicin dose of 200 mg/m², while systolic dysfunction occurs later, when the total dose exceeds 400-600 mg/m², with individual variability [32,33]. However, recent studies have shown that cardiac toxicity can occur even at doses previously considered "harmless" to cardiac tissue [35,36].
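The dose-risk figures quoted above [27] can be summarized as a coarse step function; the sketch below is purely illustrative and, as the next subsection stresses, genetic variability means no dose is truly risk-free.

def ac_cardiotoxicity_risk(cumulative_dose_mg_m2):
    # Approximate risk bands quoted above [27]: ~5% below 300 mg/m^2,
    # ~20% above 300 mg/m^2, and more than 35% above 600 mg/m^2.
    if cumulative_dose_mg_m2 < 300:
        return 0.05
    if cumulative_dose_mg_m2 <= 600:
        return 0.20
    return 0.35  # quoted as "more than 35%"

# e.g. a child with body surface area 1.2 m^2 given 420 mg total doxorubicin:
print(ac_cardiotoxicity_risk(420 / 1.2))  # 350 mg/m^2 -> ~20% band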
Diastolic dysfunction is frequently asymptomatic, which is why careful cardiac monitoring of anthracycline-treated patients is required even if they do not present any symptoms of cardiac disease [33]. Also, if diagnosed in the symptomatic phase, the prognosis and treatment response of AC-induced cardiomyopathy are poor, with a 5-year survival rate below 50% [33,37]. Genetic Polymorphisms in Anthracycline Metabolism. Long-term follow-up of anthracycline-treated children has shown the development of cardiac side effects in some patients at cumulative doses of less than 150 mg/m², as well as a lack of toxic effects in other patients at over 600 mg/m² [35]. This indicates the importance of individual variability in pharmacodynamics and pharmacokinetics, most likely due to genetic polymorphisms. In a recent study, the Children's Oncology Group (COG) has shown that patients homozygous for the G allele of carbonyl reductase 3 (CBR3, an oxidoreductase that reduces carbonyl groups to alcohol groups and is important in anthracycline metabolism) are at an increased risk of developing toxic cardiomyopathy even when low doses of AC are used [38]. For these patients, it is considered that there is no risk-free dosage. Another study identified polymorphisms of the SLC28A3 gene as an important modulator of the risk of developing AC-related cardiotoxicity [39]. A recent review on AC-related cardiotoxicity mechanisms and genomics in childhood cancer survivors revealed a total of 18 genes or genetic variants associated with AC-induced cardiac toxicity. These genes play roles in DNA damage pathways, oxidative stress response, iron metabolism, drug transport, and sarcomere function. The ABCC, CBR3, and SLC28A3 genes emerged in the majority of the studies cited, emphasizing their important role in the development of AC-related heart disease [23]. These findings could facilitate, in the future, the implementation of targeted and personalized primary prophylactic strategies. Monitoring Patients with Anthracycline The risk of death from cardiovascular pathology is eight times greater in cancer survivors than the risk of tumor recurrence, especially in pediatric patients [9]. Cardiovascular damage dramatically reduces not only the duration but also the quality of life of these patients. Moreover, their response to standard cardiac treatments is often reduced and unsatisfactory. Diagnosing cardiac toxicity at a stage where it is already symptomatic greatly limits the potential benefits of drug intervention, hence the importance of establishing methods that can diagnose AC-induced cardiomyopathy in its subclinical stages. This can be achieved by elaborating a specific follow-up protocol using the means currently available, as well as by developing new methods for early identification of patients at risk [27]. Echocardiography. Echocardiography is the most commonly used screening method for cardiac pathology, being an easily accessible, noninvasive, inexpensive, and fast method that allows real-time visualization of the heart. Evaluation of the LVEF is essential for assessing heart function and is also a necessary tool in the diagnosis of AC-induced cardiomyopathy [27,33]. Some studies also recommend the use of the ventricular shortening fraction (SF) during follow-up, with an SF lower than 30% indicating significant impairment of cardiac function [40,41]. However convenient, studies have shown that changes in LVEF or LVSF often reflect an already irreversible alteration of heart function [32,41]. Therefore, the European Society for Medical Oncology (ESMO) proposed the use of Doppler echocardiography, a more sensitive method, for baseline evaluation and periodic monitoring of cardiac function [42]. What is more, the pulsed wave Doppler (PWD) method has proven extremely useful, allowing the assessment of flow velocities at a given point in real time. The PWD method records the magnitude of the E and A waves at the level of the mitral valve, the ratio of which (E/A) is useful in diagnosing diastolic dysfunction. Recently, Tissue Doppler Imaging (TDI) has become increasingly used, allowing diagnosis of cardiac impairment even at the stage of subclinical diastolic dysfunction. This method records myocardial motion velocities with the pulsed Doppler system set for low velocities. Using TDI, three wave patterns are recorded: the positive S′ wave (recorded in the systolic phase) and the negative E′ and A′ waves (recorded in the diastolic phase). Studies showed decreased velocities of these waves in AC-treated groups versus control groups [43,44]. These changes correlated with reduced systolic contraction and delayed relaxation in apparently asymptomatic patients with normal LVEF and LVSF, which emphasizes the importance of using PWD and TDI for the timely detection of cardiac dysfunction. Another method of identifying early cardiac damage is speckle tracking. This is an application of TDI which calculates strain and strain rate based on spatial differences in tissue velocity. Follow-up studies of oncological patients encourage evaluation of LV strain and global strain, the latter being preferred. However, these evaluations proved more useful in the immediate period following treatment and less so in long-term follow-up [45]. A recent study of 1,820 adult survivors of pediatric cancer revealed a reduction in global longitudinal strain (GLS) compared to normal values. However, the patients included in this study already had low LVEF, hypertension, or impaired glucose tolerance; therefore, it was not possible to determine whether GLS was reduced merely because of the former antineoplastic treatment [46]. Lastly, echocardiography greatly depends on the operator, the results being strongly influenced by the examiner's experience. All things considered, the ideal imaging method for cardiac function evaluation in these patients is still to be determined. Electrocardiogram (ECG). The ECG is a noninvasive method used to evaluate the cardiac conduction tissue, allowing identification of arrhythmias, conduction anomalies, and cardiac ischemia. Studies have correlated a prolonged QT interval in oncological patients with an increased possibility of later developing cardiac pathology [47]. Acute DOX
However convenient, studies have shown that changes in LVEF or LVSF often show a rather irreversible alteration of heart function [32,41]. Therefore, the European Society for Medical Oncology (ESMO) proposed the use of Doppler echocardiography for basal evaluation and periodic monitoring of cardiac function [42] as being a more sensitive method. What is more, the Pulse Wave Doppler (PWD) method has proven to be extremely useful, allowing for the assessment of flow velocities at a given point in real time. The PWD method records the magnitude of E and A waves at the level of the mitral valve, the ratio of which (E/A) is useful in diagnosing diastolic dysfunction. Recently, Tissue Doppler Imaging (TDI) has become increasingly used, allowing diagnosis of cardiac impairment even in the stage of subclinical diastolic dysfunction. This method records myocardium motion velocities with the pulsed Doppler system set for low velocities. Using TDI, 3 wave patterns are recorded: the positive S ′ wave (recorded in the systolic phase) and the negative E′ and A′ waves (recorded in the diastolic phase). Studies showed decreased rates of these waves in the AC-treated group versus the control group [43,44]. These correlated with reduced systolic contraction and delayed relaxation, in apparently asymptomatic patients with normal LVEF and LVSF. This emphasizes the importance of using PWD and TDI for the timely detection of cardiac dysfunction. Another method of identifying early cardiac damage is speckle tracking. This is an application of TDI, which calculates the strain and strain rate based on spatial differences in tissue velocity. Follow-up studies of oncological patients encourage evaluation of LV strain and global strain, the latter being preferred. However, these evaluations proved to be more useful in the immediate period following treatment and less in the long-term follow-up [45]. A recent study of 1,820 surviving, adult, pediatric cancer patients revealed a reduction in global longitudinal strain (GLS), as compared to normal values. However, the patients included in this study already had low LVEF, hypertension, or impaired glucose tolerance; therefore it was not possible to determine if GLS was reduced merely because of the former antineoplastic treatment [46]. Lastly, echocardiography greatly depends on the operator, the results being greatly influenced by their knowledge. All things considered, the ideal imaging method of cardiac function evaluation for these patients is still to be determined. Electrocardiogram (ECG) . ECG is a noninvasive method used to evaluate cardiac conductive tissue, allowing identification of arrhythmias, conduction anomalies, and cardiac ischemia. There are studies that correlated a prolonged QT interval in oncological patients with the increased possibility of later developing a cardiac pathology [47]. Acute DOX 4 Disease Markers toxicity includes supraventricular tachycardia, ventricular ectopy, myopericarditis, cardiomyopathy, and death. However rare, these manifestations are life-threatening; thus, ECG examination is required in the follow-up protocol of these patients. Biomarkers. In recent years, interest in the use of biological markers has increased due to the need to easily identify patients at risk of developing chemotherapy-related cardiac toxicity. 3.3.1. C-Reactive Protein (CRP). CRP is an acute-phase protein synthesized in the liver. 
In patients with heart disease, high levels of CRP signal a proinflammatory status and correlate with the HF severity, indicating a negative prognosis. Also, highly sensitive CRP (hs-CRP) is a reliable indicator for the risk of an acute cardiovascular event, values higher than 3 mg/l being associated with an increased risk [27]. Markers of Oxidative Stress. Since it is difficult to assess cellular oxidative stress, it was attempted to estimate it using indirect markers such as oxidized low-density lipoproteins, malondialdehyde, and myeloperoxidase. In animal models, administration of doxorubicin increased both the activity of myeloperoxidase and lipid peroxidation [49]. Natriuretic Peptides. Brain natriuretic peptide (BNP) and N-terminal prohormone BNP (NT-proBNP) are two extremely useful markers in cardiac function assessment. These are synthesized in the myocyte in response to increased cardiac wall pressure. BNP produces vasodilatation, increases diuresis and natriuresis, and reduces sympathetic nervous system activity and renin-angiotensin-aldosterone system activation. They are used to diagnose HF (at a level above 400 pg/ml), to stratify the patients in risk groups, and also in their long-term follow-up [50]. Recently, the utility of these markers has been demonstrated for identifying patients at risk of developing cardiotoxicity. In a study by Sandri et al., 52 patients who received highdose chemotherapy were evaluated. NT-proBNP values were determined at onset and at the end of treatment, as well as at 12, 24, 36, and 72 hours after. The values of 33% of patients remained elevated and 72 hours posttreatment. This group demonstrated a decrease in LV diastolic index and a reduction in LVEF from 62% to 45% in the year following treatment [51]. Markers of Myocardial Injury. Cardiac Troponins (cTn) T and I are myofibrillar proteins that have demonstrated increased sensitivity and specificity as markers of myocardial injury. Several studies have shown increased cTnT levels in the early stages of AC therapy [52]. This increase was correlated in some studies, with a marked reduction in the diastolic function of LV [53,54]. In a study on patients with breast cancer treated with Trastuzumab, cTnI has proven to be an important predictor of cardiotoxicity as well as a negative prognostic factor regarding cardiac function recovery [18]. Following these studies, in 2010, Cardinale and Sandri proposed cTn levels to be used in cardiac risk assessment for both standard anticancer treatments and new biological therapy [54]. Also, in a study of 18 pediatric patients diagnosed with non-Hodgkin's lymphoma, Blaes et al. showed that patients with elevated cTn at the beginning of treatment had an increased incidence of systolic dysfunction [55]. A recent review analyzing over 20 studies regarding cTn use as a biomarker of cardiotoxicity in patients treated with AC for breast cancer concluded that the main evidence up until today is that low cTn levels during treatment correlate with a better long-term prognosis regarding heart function [56]. Monitoring during Treatment. Monitoring during treatment has a role of identifying potential cardiac damage as soon as possible, thus allowing therapeutic interventions and treatment modification. The goal is to reduce the risk of developing long-term cardiac complications [11]. At the same time, it should be taken into account not to reduce treatment's efficacy, which would eliminate the benefit created by reducing cardiotoxicity. 
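Pulling together the biomarker thresholds quoted in this section, the sketch below flags the warning signs discussed above; the thresholds are those cited in the text, and the function is a screening illustration, not a diagnostic rule.

def biomarker_flags(hs_crp_mg_l=None, bnp_pg_ml=None,
                    ntprobnp_persistent_72h=False, troponin_elevated=False):
    # Thresholds as quoted above: hs-CRP > 3 mg/l [27], BNP > 400 pg/ml [50],
    # NT-proBNP still elevated 72 h post-treatment [51], any cTn elevation [52-56].
    flags = []
    if hs_crp_mg_l is not None and hs_crp_mg_l > 3:
        flags.append("hs-CRP: increased cardiovascular risk")
    if bnp_pg_ml is not None and bnp_pg_ml > 400:
        flags.append("BNP: consistent with heart failure")
    if ntprobnp_persistent_72h:
        flags.append("NT-proBNP: persistent elevation, risk of later LVEF decline")
    if troponin_elevated:
        flags.append("cTn: myocardial injury, predictor of cardiotoxicity")
    return flags

print(biomarker_flags(hs_crp_mg_l=4.1, bnp_pg_ml=120, ntprobnp_persistent_72h=True))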
A recent study on pediatric leukemia has shown that myocardial tissue is affected even before chemotherapy begins, as seen from the correlation determined between the white blood cell count at diagnosis and NT-proBNP values. This might be partially explained by myocardial infiltration with cancer cells. However, preexisting cardiac suffering highlights even more the need for a timely, rigorous, ongoing cardiac function evaluation [57]. In order to be effective, Steinherz et al. emphasize the importance of conducting an ECG and echocardiography prior to the beginning of treatment [58]. Subsequently, most guidelines recommend an ultrasound after half the total cumulative dose of doxorubicin is given, followed by an echocardiographic examination before each of the following doses [58]. It has been proposed that at a decrease in LVEF below 50% or more than 10% during treatment, chemotherapy should be discontinued. This is based on the fact that the identified systolic dysfunction appears most likely following an extensive myocardial injury [27]. However, a lack of reduction in LVEF during treatment does not rule out the possibility of late cardiac toxicity [27,33,43,59]. 3.5. Long-Term Monitoring. Lifetime screening for cardiac damage is indicated following antineoplastic treatment, especially in patients treated with AC or those who have received radiation therapy to the chest. In the first year following treatment, ultrasound screening is currently recommended at 3, 6, and 12 months [26]. COG provides a detailed guide on the frequency of posttreatment monitoring, based on age at exposure to AC, the total dose received, and the association with thoracic irradiation. Disease Markers For a universal approach, they propose converting all doses of AC to isotoxic doses of doxorubicin [38]. Another important aspect is the screening for cardiovascular risk factors: sedentary lifestyle, tobacco use, family history of premature coronary heart disease (less than 55 years in men and 65 years in women, respectively), lipid profile, basal blood glucose, and blood pressure (BP). Cancer patients are generally considered at risk for development of cardiovascular pathology, so adding any other two cardiac risk factors leads to the inclusion of these patients in a high-risk group. Thus, according to the American Heart Association, for cancer survivors, the target body mass index (BMI), BP, LDL, and glucose levels change: BMI < 90th percentile, BP < 95th percentile, LDL < 130 mg/dl, and basal blood glucose < 100 mg/dl [60]. Therapeutic Outlook for Anthracycline-Induced Heart Failure First of all, in order to decrease the likelihood of AC-induced cardiac disease, the administration recommendations have been modified. The maximum total cumulative dose recommended nowadays being 400-550 mg/m 2 DOX and 900 mg/m 2 Epirubicin. Anyhow, one must keep in mind that up until now no dose of AC has been considered cardiac riskfree, so the ongoing evaluation of these patients is mandatory regardless of the received dose. Also, a slow DOX infusion has proven to diminish the cardiotoxic effect of AC use, by lowering its maximum plasma levels, a parameter which, in turn, determines the amount of drug entering the myocardial tissue [61]. However, Lipshultz et al. conducted a study on 102 children treated for ALL, who received doxorubicin in a randomized fashion, either in a continuous regimen (over 48 hours) or by bolus (15 minutes). 
A cardiac follow-up, with a median of 8 years, showed no significant difference in cardiac function between the two groups, concluding that, in children, continuous infusion shows no benefit over bolus administration [62]. The use of liposomal drug formulations has been widely debated and studied. Liposomal DOX has the advantage of a limited diffusion through the myocardial tissue, due to their size (too big to cross the endothelial junction of healthy tissues) with preserved antitumor efficiency (leaky, irregular tumor vasculature) [63]. There are many successful animal studies done on solid tumors, which show not only the preserved desired antitumor effect with minimal cardiac toxicity but also, in some cases, liposomal formulations actually exposing tumor cells to higher amounts of AC [64]. However good the results are, there are few randomized clinical trials on liposomal-coated AC, thus the limited clinical indications so far being metastatic breast cancer, advanced ovarian cancer, multiple myeloma, and AIDS-related sarcoma [65]. Until further studies emerge, liposomal formulations are not yet an alternative for children with leukemia. Regarding preventive treatment, cardioprotective drugs such as dexrazoxane, angiotensin-conversion enzyme inhibitors, and beta-blockers have been tested [21,66]. Dexrazoxane, an iron chelating agent, has long been considered the first-line prophylactic therapy for chemotherapy-induced cardiac toxicity, being the only drug currently approved by the US FDA for the prevention of AC-induced HF. It has also been proven to be efficient in children with leukemia. Lipshultz et al. demonstrated in a randomized controlled trial of 205 children the protective effect of dexrazoxane on cardiac function as means of LV structure and function, with no adverse effect on relapse risk, frequency of secondary malignancy, or survival [67]. Another randomized controlled trial, from the Pediatric Oncology Group, has shown that, although the 5-year survival rate did not differ between the group that received dexrazoxane and the group without it, measurements of the SF, LV wall thickness, and thicknessto-dimension ratio were worse in patients who did not receive dexrazoxane [68]. However, in 2007, a controversial study claimed that dexrazoxane use could increase the risk of secondary malignancies, especially AML [69]. No further studies have supported this theory so far [70]. What is more, after previously allowing dexrazoxane to be used only in women treated for breast cancer, the EMA changed its decision and now supports its administration to pediatric patients who are likely to be treated with high cumulative doses of anthracyclines (>300 mg/m 2 of doxorubicin) [71,72]. Beta-blocker use is encouraged in a recent review on their role in the prevention of AC-induced cardiotoxicity, due to their important cardioprotective action. Carvedilol seems to be the most studied drug from this class; however, its dosing regimens and optimal timeline of administration in oncologic patients still need to be established [73]. A small study of 25 patients demonstrated that Carvedilol administration started before initiating AC therapy improved LVEF and the value of the E/A ratio compared to the placebo group [74]. Similar studies were also performed using Enalapril, Spironolactone, Metoprolol, and Candesartan, all with encouraging results in the prevention of postchemotherapy cardiotoxicity [75][76][77][78]. 
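To tie the dosing discussion together, the sketch below checks a multi-agent regimen against the maximum cumulative doses quoted above (400-550 mg/m² for doxorubicin). The doxorubicin-equivalence factors are commonly cited approximations assumed here for illustration, not values given in this review; published guidelines differ in the exact factors.

# Assumed doxorubicin-equivalence factors (illustrative, not from this review):
DOX_EQUIVALENCE = {"doxorubicin": 1.0, "daunorubicin": 0.5,
                   "epirubicin": 0.67, "idarubicin": 5.0, "mitoxantrone": 4.0}

def dox_equivalent_dose(doses_mg_m2):
    # Convert a per-agent cumulative dose map (mg/m^2) to an isotoxic doxorubicin dose.
    return sum(DOX_EQUIVALENCE[agent] * dose for agent, dose in doses_mg_m2.items())

total = dox_equivalent_dose({"doxorubicin": 240, "daunorubicin": 200})
status = "above" if total > 400 else "within"
print(f"{total:.0f} mg/m^2 doxorubicin-equivalent, {status} the 400 mg/m^2 lower bound")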
Another study conducted on 473 cancer patients presenting with elevated cTn following various cytostatic regimens demonstrated that Enalapril administration for over a year resulted in a lower incidence of LV dysfunction than in the placebo group [79]. For patients who have already developed HF secondary to cytostatic treatments, there are limited studies regarding the appropriate therapeutic approach. For now, HF is to be treated according to the current guidelines, although treatment response is poorer than in the "classical" HF patient population. Conclusion Taking all the above-mentioned aspects into account, it is obvious that cardiotoxicity following AC treatment is a pressing issue for both the oncologist and the cardiologist. The pediatric population represents an even bigger challenge because of the various stages of development at which children receive chemotherapy, which makes it very difficult to establish specific monitoring and treatment protocols. Many questions remain unanswered in cardio-oncology, hence the need for, and development of, a separate medical specialty dealing with this intricate problem. All things considered, careful and systematic monitoring, as well as timely intervention, proves to be crucial to the long-term prognosis and quality of life of these patients. Data Availability The data supporting this review are from previously reported studies and datasets, which have been cited.
2021-01-29T05:22:11.335Z
2021-01-06T00:00:00.000
{ "year": 2021, "sha1": "04856289a2e3b09b1dbfa61af8cf9781b5d17427", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/8828410", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "04856289a2e3b09b1dbfa61af8cf9781b5d17427", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1576450
pes2o/s2orc
v3-fos-license
Spironolactone is an antagonist of NRG1-ERBB4 signaling and schizophrenia-relevant endophenotypes in mice Abstract Enhanced NRG1-ERBB4 signaling is a risk pathway in schizophrenia, and corresponding mouse models display several endophenotypes of the disease. Nonetheless, pathway-directed treatment strategies with clinically applicable compounds have not been identified. Here, we applied a cell-based assay using the split TEV technology to screen a library of clinically applicable compounds to identify modulators of NRG1-ERBB4 signaling for repurposing. We recovered spironolactone, known as an antagonist of corticosteroids, as an inhibitor of the ERBB4 receptor and tested it in pharmacological and biochemical assays to assess secondary compound actions. Transgenic mice overexpressing Nrg1 type III display cortical Erbb4 hyperphosphorylation, a condition observed in postmortem brains from schizophrenia patients. Spironolactone treatment reverted hyperphosphorylation of activated Erbb4 in these mice. In behavioral tests, spironolactone treatment of Nrg1 type III transgenic mice ameliorated schizophrenia-relevant behavioral endophenotypes, such as reduced sensorimotor gating, hyperactivity, and impaired working memory. Moreover, spironolactone increases spontaneous inhibitory postsynaptic currents in cortical slices, supporting an ERBB4-mediated mode-of-action. Our findings suggest that spironolactone, a clinically safe drug, provides an opportunity for new treatment options for schizophrenia. Introduction Schizophrenia (SZ) is a severely debilitating neuropsychiatric disorder characterized by positive symptoms, that is, hallucinations and delusions, negative symptoms, that is, lack of motivation, and cognitive symptoms (Insel, 2010). Positive symptoms can frequently be ameliorated by treatment with dopamine receptor antagonists, but efficient treatment options for negative and cognitive symptoms are not available (Goff et al, 2011). Thus, there is a strong clinical need to develop and explore more target-directed therapies for SZ (Nestler & Hyman, 2010). Repurposing of existing drugs in principle offers a fast track to the clinic and has been advocated for SZ (Insel, 2012; Lencz & Malhotra, 2015), also because many pharma companies have withdrawn from research on severe mental disorders (Margraf & Schneider, 2016). Genetic association studies have identified NRG1 and its cognate receptor ERBB4 as SZ risk genes, and altered NRG1-ERBB4 signaling has been associated with positive, negative, and cognitive symptoms (Stefansson et al, 2002; Li et al, 2006; Nicodemus et al, 2006). Several postmortem studies revealed increased expression of NRG1 in SZ patients (Hashimoto et al, 2004; Law et al, 2006; Weickert et al, 2012). Elevated expression of the ERBB4-JM-a-CYT-1 variant carrying a PI3K-recruitment domain has also been detected in SZ (Silberberg et al, 2006; Law et al, 2007, 2012). Moreover, ERBB4 was found to be hyperphosphorylated in postmortem brains from SZ patients (Hahn et al, 2006), suggesting that NRG1-ERBB4 hyperstimulation might represent a component of SZ pathophysiology. In agreement, transgenic mice with increased Nrg1 expression display SZ-relevant behavioral deficits, including hyperactivity, impaired sensorimotor gating, decreased social interaction, and reduced cognitive functions (Deakin et al, 2009, 2012; Kato et al, 2010).
In particular, transgenic mice with neuronal overexpression of the membrane-bound cysteine-rich-domain (CRD) type III isoform of Nrg1 (Nrg1-tg) display chronic ErbB4 hyperphosphorylation in the cortex, which is associated with a broad spectrum of SZ-relevant endophenotypes, including imbalanced excitatory and inhibitory neurotransmission, altered spine growth, and impaired sensorimotor gating (Agarwal et al, 2014). Moreover, it has recently been shown that endophenotypes associated with elevated Nrg1 expression are reversible in adult animals, which strongly supports the assumption that the NRG1/ERBB4 signaling system provides a valid target for pharmacological interventions (Yin et al, 2013; Luo et al, 2014). It thus appears plausible that compounds that can re-balance the activity of the NRG1-ERBB4 signaling pathway could represent candidates for the therapeutic treatment of schizophrenia beyond positive symptoms. In this study, we first developed a co-culture assay system compatible with high-throughput screening (HTS) utilizing the split TEV technology (Wehr et al, 2006, 2008). We then used this assay to screen a library of clinically approved drugs in a repurposing approach to uncover new potential target specificities (Wang & Zhang, 2013), which resulted in the identification and validation of spironolactone as an inhibitor of ERBB4. Finally, we show that spironolactone decreases phosphorylation levels of ERBB4 in vitro and in vivo and leads to an altered balance of excitation/inhibition of cortical projection neurons. Chronic spironolactone treatment ameliorates hyperactivity and reverses sensorimotor gating and working memory deficits in Nrg1-tg mice. Thus, spironolactone alleviates novel aspects of SZ-relevant symptoms in this mouse model of increased NRG1-ERBB4 signaling. Results A split TEV-based co-culture assay to screen for modulators of NRG1-ERBB4 signaling Screening for modulators of NRG1-ERBB4 signaling in cell culture requires an adequate setup reflecting endogenous signaling mechanisms. According to a current model, NRG1 ligands reside in presynaptic terminals of principal pyramidal neurons, whereas ERBB4 receptors are mainly expressed at the postsynaptic density of dendrites in inhibitory interneurons (Rico & Marín, 2011). Based on this ligand-receptor configuration, NRG1 mediates juxtacrine and paracrine signaling to ERBB4. We established a cellular HTS-compatible co-culture assay, in which NRG1 was expressed in the signal-sending cell population A, whereas ERBB4 was expressed in the signal-receiving cell population B (Fig 1A). Initially, we used the full-length NRG1 type I β1a isoform, which undergoes proteolytic cleavage resulting in the release of the extracellular domain, containing the biologically active EGF-like domain (EGFld), into the extracellular space (Hu et al, 2006; Willem et al, 2006). Therefore, NRG1 type I β1a can elicit juxtacrine (non-cleaved form) and paracrine (cleaved form) stimuli. To screen for approved small compounds that could modulate NRG1-ERBB4 signaling, we combined this co-culture assay with the split TEV protein-protein interaction technique to monitor ERBB4 activation through induced PI3K adaptor recruitment by the human ERBB4-JMa-Cyt1 variant (Fig 1A).
The functionality and robustness of the co-culture assay were investigated by co-plating increasing numbers of PC12 cells carrying a stably integrated mouse Nrg1 type I β1a expression cassette (Nrg1 cells; Fig EV1A for stable Nrg1 expression) with ERBB4-PIK3R1-expressing PC12 cells (split TEV assay cells). Co-culture conditions were verified using two PC12 cell populations expressing either EYFP or ECFP (Fig EV1B). A dose-response analysis showed that the assay reached a plateau of activation when 10,000 Nrg1-expressing cells were co-plated with 40,000 split TEV assay cells, with half-maximal activation at 5,000 cells (Fig 1B). Calculation of the Z' factor, a measure of HTS applicability and quality (Zhang et al, 1999), resulted in a value of 0.5, indicating a large separation band under screening conditions. Importantly, addition of soluble EGFld resulted in a twofold increase of ERBB4 activation compared with Nrg1 cells alone, implying that ERBB4 activation in the co-culture assay can be decreased and increased by potential NRG1-ERBB4 inhibitors and activators, respectively (Fig 1C). In addition, dose-response assays using ERBB4-PIK3R1 split TEV assay cells only and soluble EGFld as stimulus (single-culture assay) showed stable and reproducible dose-responses that also qualified for HTS, with Z' factors between 0.56 and 0.68 for three independent assays (Fig EV1C). The specificity of the NRG1-ERBB4 co-culture assay was validated in dose-response assays using established ERBB4 inhibitors, such as lapatinib (IC50 value of 2.61 µM, co-culture assay, Fig 1D; 0.45 µM, single-culture assay, Fig EV1D) and CI-1033 (IC50 value of 0.01 µM, co-culture assay, Fig EV1E; 0.004 µM, single-culture assay, Fig EV1F). Taken together, these data indicate that the split TEV-based co-culture assay provides a robust platform to screen for modulators of NRG1-ERBB4 signaling. Screening the NIH clinical compound collection recovers spironolactone as ERBB4 receptor antagonist We used the split TEV-based NRG1-ERBB4-PIK3R1 co-culture assay to screen two sets of the NIH Clinical Collection (NIH-NCC) containing 727 FDA-approved drugs in total (Fig 2A). From this screen, we selected a primary hit list of candidates that were at least three standard deviations away from the mean (Fig 2B for NIH-NCC set 1; Fig EV2A for NIH-NCC set 2; Dataset EV1). These candidates were then subjected to individual re-screening to eliminate off-target effects, such as toxicity and interference with assay tools; 18 substances met these criteria and were selected for the final hit list (Appendix Fig S1 for a flowchart of all screening and validation steps; see Appendix Table S1 for the final hit list). Spironolactone, a mineralocorticoid receptor (MR) antagonist formerly used as a diuretic and to treat high blood pressure (Gaddam et al, 2010), was recovered as the top antagonist candidate (Fig 2B). In a dose-response co-culture assay using Nrg1 type I β1a, spironolactone displayed an IC50 value of 1.0 µM, with marginal toxic effects at higher concentrations, as indicated by reduced Renilla luciferase readings (Fig 2C). ERBB4-specific effects were confirmed by dose-response control assays, which showed an absence of spironolactone effects on the assay components (Fig EV2B and C). Importantly, spironolactone also inhibited ERBB4 activity (IC50 value of 1.1 µM) in a co-culture assay using the membrane-bound CRD-containing type III isoform of Nrg1, the major NRG1 isoform in the brain, which is implicated in juxtacrine signaling (Fig 2D).
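As context for the Z' values quoted above: the Z' factor is computed from the means and standard deviations of the positive and negative control wells as Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg| (Zhang et al, 1999). A minimal Python sketch with made-up control readings (the numbers below are illustrative, not the screen's data):

```python
import numpy as np

def z_prime(pos, neg):
    """Z' factor of Zhang et al (1999); values >= 0.5 conventionally
    indicate a large separation band suitable for HTS."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

pos_controls = [980, 1010, 1040, 995]  # e.g. EGFld-stimulated wells (illustrative)
neg_controls = [110, 95, 120, 100]     # e.g. lapatinib-treated wells (illustrative)
print(round(z_prime(pos_controls, neg_controls), 2))
```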
Taken together, spironolactone modulates ERBB4 activity downstream of both paracrine and juxtacrine NRG1 signaling. Further, spironolactone acts at a proximal step of ERBB4 receptor activation, upstream of tyrosine phosphorylation and adapter recruitment, as ERBB4 dimerization stimulated by EGFld was efficiently inhibited (IC50 value of 1.1 µM) in a single-culture assay (Fig 2E). Finally, we used an ERBB4 variant that lacks the intracellular domain, and thus is signaling-incompetent, but expresses at the cell cortex and dimerizes upon EGFld stimulation (Fig EV3A and B). Notably, spironolactone inhibits dimerization of full-length ERBB4, but not of C-terminally truncated ERBB4 (Fig EV3C). Spironolactone but not its metabolic products antagonizes the ERBB4/PIK3R1 assay Spironolactone also inhibited ERBB4 signaling activity in an ERBB4-PIK3R1 single-culture assay, albeit with a slightly increased IC50 value of 2.8 µM (Fig 3A). Figure 1. A The ERBB4-PIK3R1 split TEV assay monitors NRG1-ERBB4 signaling in PC12 cells. The Nrg1 ligand (green) is stably expressed in the signal-sending cell population A. The signal-receiving cell population B (or split TEV assay cells) is transfected with plasmids encoding the assay components ERBB4 (red) fused to NTEV-tevS-GV (ERBB4-NTEV-tevS-GV), the adapter molecule PIK3R1 (purple, the regulatory subunit alpha of the PI3K) fused to CTEV (PIK3R1-CTEV), and a UAS-driven firefly luciferase reporter (Fluc). Upon Nrg1 binding to the extracellular domain of ERBB4 (1), ERBB4-NTEV-tevS-GV dimerizes and cross-phosphorylates itself (2). PIK3R1-CTEV binds to the phosphorylated ERBB4 receptor, leading to the functional reconstitution of TEV protease activity and the concomitant release of the artificial co-transcriptional activator Gal4-VP16 (GV) through proteolytic cleavage at a TEV protease cleavage site (tevS) (3). In turn, released GV translocates to the nucleus and binds to UAS sequences (open box) to activate the transcription of a firefly reporter gene (4). B Dose-response assay using increasing numbers of Nrg1 type I β1a-expressing PC12 cells. For each 96-well, 40,000 split TEV assay cells were co-plated with increasing numbers of Nrg1-expressing cells and incubated for 24 h. Half-maximal activation is reached at 5,000 Nrg1-expressing cells. The Z' factor is 0.5, indicating a large separation band for this assay. C Adding 10 ng/ml EGFld resulted in a twofold activation. Per 96-well, 40,000 split TEV assay cells were co-plated with empty PC12 cells (no Nrg1 expression), 10,000 Nrg1-expressing cells, and 10,000 Nrg1-expressing cells plus 10 ng/ml EGFld. Arrows indicate the measuring window of activation (blue arrow) and inhibition (red arrow) relative to baseline activity. D Lapatinib antagonizes ERBB4-PIK3R1 signaling in a dose-dependent manner. Per 96-well, 40,000 split TEV assay cells were incubated with increasing amounts of lapatinib, followed by co-plating of 10,000 Nrg1 type I β1a-expressing cells. The inset depicts the IC50 value in µM. Figure 2. A Flow chart of the compound screen. PC12 cells (population A) were transfected in solution with the split TEV assay plasmids ERBB4-NTEV-tevS-GV and PIK3R1-CTEV and incubated for 2 h before being seeded onto 96-well plates. Population A cells were allowed to express the plasmids for 24 h. Compounds were added at a concentration of 10 µM, followed by seeding of the Nrg1-expressing PC12 cells (population B) half an hour later. After 24 h of compound incubation, cells were lysed and subjected to a dual luciferase assay.
The screening data were analyzed using the cellHTS2 package in R Bioconductor. B Graphic visualization of the primary screen data of the NIH-NCC library set 1. All counts (320 compounds and 64 controls) from the Nrg1-ERBB4-PIK3R1 split TEV compound screen were plotted against the Z-score using the Mondrian program, with pathway activators displaying high values and inhibitors low values. For the secondary analysis, we selected all candidates that were at least three standard deviations away from the mean. EGFld-positive and lapatinib/CI-1033-negative controls are shown in red. C, D Spironolactone antagonizes Nrg1-ERBB4-PIK3R1 signaling. In dose-response assays using ERBB4-NTEV-tevS-GV and PIK3R1-CTEV plasmids transfected into PC12 cells, spironolactone was administered at increasing concentrations before seeding (C) Nrg1 type I- or (D) Nrg1 type III-expressing PC12 cells. E Spironolactone inhibits ERBB4 receptor dimerization. Dose-dependent dimerization of the ERBB4 receptor was analyzed using a split TEV assay encompassing ERBB4-NTEV-tevS-GV and ERBB4-CTEV plasmids transfected into PC12 cells. 10 ng/ml EGFld was applied as Nrg1 stimulus. Data information: Fluc, firefly luciferase activity (black lines); Rluc, Renilla luciferase activity (gray lines, indicating toxicity levels); n = 6; data are shown as mean, and error bars represent SEM. The insets depict IC50 values in µM. We used the assay to identify molecular structures in spironolactone (Fig 3A) required for the inhibition of ERBB receptor activation in this assay and examined structurally highly related compounds, such as the metabolites canrenone (Fig 3B) and 7α-thiomethyl-spironolactone (Fig 3C) as well as the second-generation MR antagonist eplerenone (Fig 3D). Canrenone lacks the thio-ketone group attached to the sterol core structure, whereas 7α-thiomethyl-spironolactone lacks the ketone group only. Figure 3. Multilevel profiling approach of spironolactone treatment assessing target specificities and adapter recruitment. A Spironolactone, molecular structure shown on the left, inhibits ERBB4/PIK3R1 split TEV assay activity (black line), with an IC50 value of 2.76 µM. B-D The spironolactone metabolites (B) canrenone and (C) 7α-thiomethyl-spironolactone as well as the second-generation drug (D) eplerenone do not attenuate the ERBB4/PIK3R1 assay activity (black lines). Note that only spironolactone bears a thio-ketone group attached to the sterol core structure. All ERBB4/PIK3R1 assays were run in a single-culture assay mode using 10 ng/ml EGFld as functional Nrg1 stimulus, and ERBB4-NTEV-tevS-GV and PIK3R1-CTEV plasmids were transfected into PC12 cells (indicated by icon). E Schematic representation of the most critical ERBB dimers tested in spironolactone dose-response assays using the split TEV protein-protein interaction detection technique. Note that not all possible combinations are depicted. F Heat map showing the IC50 values obtained from individual ERBB dimerization split TEV assays. All single dose-response assays can be found in Fig EV4; the combination ERBB4-ERBB4 is shown in Fig 2E. Data information: Fluc, firefly luciferase activity (black lines, reporting ERBB4-PIK3R1 assay activity); Rluc, Renilla luciferase activity (gray lines, assessing viability); n = 6; data are shown as mean, and error bars represent SEM. The insets depict IC50 values in µM.
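To make the hit-selection rule above concrete, the following sketch mimics the scoring logic: cellHTS2 (used for the actual analysis in R) scores wells with a robust z-score of this general form, and candidates at least three standard deviations from the mean were carried forward. The well readings below are hypothetical.

```python
import numpy as np

# Hypothetical normalized readings for ten compounds; the real analysis
# used cellHTS2 in R. A robust z-score (median/MAD) keeps a strong
# inhibitor from inflating the spread estimate.
x = np.array([1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 0.35, 1.03, 0.96, 1.00])

z = (x - np.median(x)) / (1.4826 * np.median(np.abs(x - np.median(x))))
hits = np.flatnonzero(np.abs(z) >= 3.0)   # the "three standard deviations" rule
print(hits, np.round(z[hits], 1))         # -> [6] [-17.4]
```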
In eplerenone, the thio-ketone group is replaced by an acidic group, in addition to a minor modification of the sterol core structure. None of these compounds showed an inhibitory effect in the ERBB4-PIK3R1 single-culture assay (Fig 3B-D), suggesting that the thio-ketone group specific to spironolactone plays a role in its inhibitory function in this assay.
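The IC50 values reported in this section were derived from dose-response curves (fitted in practice with Excel and GraphPad Prism; see Materials and Methods). A four-parameter logistic (Hill) fit of the kind such software performs can be sketched as follows; the concentration-response values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response falls from `top` to `bottom`
    as concentration increases past ic50 (for an inhibitor)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # µM, illustrative
resp = np.array([1.00, 0.95, 0.80, 0.55, 0.30, 0.12, 0.05])  # normalized Fluc

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 1.0, 1.0, 1.0])
print(f"IC50 = {params[2]:.2f} µM, Hill slope = {params[3]:.2f}")
```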
LIMK1 is a non-receptor protein serine/threonine kinase implicated in cytoskeleton dynamics and the regulation of synaptic spine morphology and function (Meng et al, 2002, 2004) and has been associated with NRG1 signaling (Yin et al, 2013). Notably, phospho-Limk1 levels were slightly upregulated in wt and markedly upregulated in spironolactone-treated Nrg1-tg mice, suggesting that LIMK1 activity integrates, at least in part, spironolactone's inhibitory effect (Fig 4D and E). Taken together, these results indicate that spironolactone serves as an inhibitor of NRG1-mediated ERBB4 signaling. Since NRG1-ERBB4 signaling has been shown to modulate inhibitory neurotransmission (Yin et al, 2013; Agarwal et al, 2014; Mei & Nave, 2014), we tested the impact of spironolactone treatment on synaptic transmission in acute slices prepared from prefrontal cortex. When we measured spontaneous inhibitory postsynaptic currents (sIPSCs) at layer II/III pyramidal neurons, canrenone (10 µM) showed no significant effect on sIPSC frequencies and amplitudes (Fig 4F). In contrast, administration of spironolactone (10 µM) caused an increase of sIPSC frequency (n = 12; P < 0.05) and amplitude (n = 12, P < 0.05) shortly after bath application (Fig 4G). Next, evoked IPSCs were measured in pyramidal neurons in layer II/III of prelimbic cortex after stimulation in layer I. To distinguish between MR- and ERBB4-mediated effects of spironolactone, these experiments were performed in the presence of canrenone (10 µM), which is a more potent MR antagonist than spironolactone. Under these conditions, spironolactone (5 µM) significantly increased the eIPSC amplitude (Fig 4H), an effect also produced by the pan-ERBB family inhibitor lapatinib applied as a control (5 µM) (Fig 4I). Together, these findings are consistent with the hypothesis that spironolactone modulates GABAergic neurotransmission via ERBB4. Chronic spironolactone treatment ameliorates SZ-relevant behavioral endophenotypes in Nrg1-tg mice Nrg1-tg mice exhibit SZ-relevant behavioral abnormalities, including deficits in prepulse inhibition (PPI) (Agarwal et al, 2014), an operational measure of sensorimotor gating. In a pilot experiment, we performed a two-arm study, in which Nrg1-tg and wt mice were tested for PPI before and after chronic spironolactone treatment as above (Fig EV5A). Before treatment, Nrg1-tg mice displayed PPI deficits (Fig EV5B), in line with our previous findings (Agarwal et al, 2014). Spironolactone treatment significantly improved PPI in Nrg1-tg mice (Fig EV5C), but had no effect in wt controls, suggesting that spironolactone modulates behavioral deficits of enhanced NRG1-ERBB4 signaling (Fig EV5D). Based on these results, we performed a four-arm study, in which an independent cohort of Nrg1-tg and wt mice was tested following spironolactone or vehicle treatment (Fig 5A). In this study, we assessed a battery of behavioral domains with relevance for SZ, such as motor activity, curiosity, light-dark preference, working memory, motivation, PPI, fear memory, and pain sensitivity. Vehicle-treated Nrg1-tg mice covered longer distances in the open-field arena than vehicle-treated wt controls (Fig 5B and C). This locomotor hyperactivity, however, was reverted by spironolactone (Fig 5B and D). Further, Nrg1-tg mice showed increased anxiety, paralleled by a higher frequency of defecation and urination during the open-field test (Fig EV5E-G), supporting previous observations (Agarwal et al, 2014).
When testing for light-dark preference, spironolactone-treated Nrg1-tg mice spent more time in the light compartment, suggesting an anxiolytic rather than a sedative effect of spironolactone (Fig 5E). All groups displayed a similar transition activity in the light-dark test (Fig EV5H). In contrast, Nrg1-tg mice showed an increased activity in the tail suspension test (Fig EV5I). Working memory was assessed in the Y-maze test. Nrg1-tg mice performed significantly fewer alternations than wt controls, suggesting an impaired working memory performance in transgenics (Fig 5F). Notably, spironolactone treatment rescued these deficits (Fig 5F) without influencing activity in the Y-maze (Fig EV5J). However, the number of choices was higher in transgenics, supporting the hyperactivity phenotype observed in the open-field test (Fig 5B and C). In the contextual fear memory test, spironolactone-treated Nrg1-tg mice displayed a non-significant reduction in freezing (Fig EV5K), which was paralleled by significantly decreased pain sensitivity as assessed in the hot plate test (Fig EV5L). Cue memory testing revealed neither genotype- nor treatment-dependent alterations (Fig EV5K). Moreover, spironolactone treatment significantly enhanced PPI in Nrg1-tg (Fig 5G), but not in wt mice (Fig EV5M), replicating the data obtained from the pilot experiment (Fig EV5D). As Nrg1 is a critical regulator of brain development, we aimed to exclude age-related influences on the behavioral effects observed, using a covariate analysis that links age with test performance. To do this, we grouped mice into juvenile (8-11 weeks) and adult (12-16 weeks) categories; ANCOVA with age as covariate did not reveal an effect on the genotype-dependent treatment response (Appendix Fig S3). Taken together, chronic spironolactone treatment alleviates hyperactivity as well as PPI and working memory deficits in Nrg1-tg mice, findings that are paralleled by a reduction in Erbb4 hyperphosphorylation levels in these mice. Discussion To obtain repurposed drugs for schizophrenia, we have developed a co-culture assay system mimicking several proximal aspects of NRG1-ERBB4 signaling (Citri & Yarden, 2006; Mei & Xiong, 2008). Its successful application in a repurposing screen with clinical substances resulted in the identification of the MR antagonist spironolactone as a potent ERBB4 inhibitor. The efficacy of spironolactone as a novel inhibitor of NRG1-ERBB4 signaling was validated in heterologous cells with endogenous expression of human ERBB4 and in vivo using transgenic mice, which model NRG1 overexpression and ERBB4 hyperphosphorylation linked to several endophenotypes with relevance for SZ (Agarwal et al, 2014). We thus provide a pharmacological proof-of-principle for targeting NRG1-ERBB4 signaling in the context of SZ and exploited the opportunity to repurpose a clinical compound, a strategy strongly advocated in recent years for mental diseases (Insel, 2012). In addition, a recent study suggested fast-tracking all SZ susceptibility genes that encode potential targets for approved drugs for repurposing (Lencz & Malhotra, 2015). Our multilevel approach targeting NRG1-ERBB4 signaling, which identified a hidden mode-of-action of spironolactone antagonizing ERBB4 activity, strongly supports this strategy. As spironolactone is a clinically safe and available substance, it immediately qualifies for therapeutic intervention trials.
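As a side note on the Y-maze measure used above, spontaneous alternation is typically scored from the sequence of arm entries. The sketch below uses the common triplet definition, which is an assumption here, as the Methods describe the scoring only briefly.

```python
def alternation_percentage(entries):
    """Percent spontaneous alternation from a Y-maze arm-entry sequence.
    Common triplet definition: three consecutive entries into three
    different arms count as one alternation; denominator is entries - 2."""
    triplets = [entries[i:i + 3] for i in range(len(entries) - 2)]
    alternations = sum(1 for t in triplets if len(set(t)) == 3)
    return 100.0 * alternations / max(len(triplets), 1)

print(alternation_percentage(list("ABCACBACAB")))  # illustrative sequence -> 75.0
```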
Finally, the co-culture assay is qualified for high-throughput conditions under industry quality standards and will allow exploratory screens of large compound libraries. Spironolactone inhibited the association between ERBB4 and PIK3R1 in the split TEV-based co-culture assay, with an IC50 value of approximately 1 µM. Our data suggest that spironolactone also targets other ERBB family members, albeit with a preference for ERBB4. Our biochemical analysis suggests that spironolactone shows an intermediate efficacy of ERBB4 inhibition. Notably, fine-tuning of excitation and inhibition between excitatory projection neurons expressing NRG1 and inhibitory parvalbumin-positive interneurons expressing ERBB4 is thought to be a critical determinant of the endophenotypes observed in gain- and loss-of-function mouse models (Chen et al, 2010; Yin et al, 2013; Agarwal et al, 2014). Therefore, moderate changes in NRG1/ERBB4 activity may be desired to achieve rebalanced signaling levels under pathological conditions. Figure 4. Spironolactone antagonizes ERBB4 phosphorylation both in vitro and in vivo. A Spironolactone reduces ERBB4 phosphorylation levels. T-47D cells were stimulated with 10 ng/ml EGFld, 10 µM lapatinib, and 10 µM spironolactone for 5 min as indicated. Cell lysates were probed for ERBB4 phosphorylation levels at Tyr1056 and Tyr1284. B Quantification of band intensities for phospho-ERBB4 levels (n = 4 per condition) shown in (A) using ImageJ. Phosphorylation levels are normalized to protein levels of ERBB4. Data are shown as mean, and error bars represent SD; t-test, with *P = 0.0356 for p-ERBB4 (Y1056), and **P = 0.0079 for p-ERBB4 (Y1284). C Experimental design for the Western blot analysis shown in (D) and (E). Nrg1-tg and wt animals were treated daily with spironolactone (50 mg/kg, s.c.) or vehicle (n = 2 per genotype and per treatment) for 21 days. D Spironolactone reduces phospho-Erbb4 levels in Nrg1-tg mice. Mice were treated with spironolactone for 21 days and sacrificed for Western blot analysis. Lysates were probed with the indicated antibodies. E Quantification of band intensities for phospho-Erbb4 and phospho-Limk1 levels (n = 2 per condition) shown in (D) using ImageJ. Phosphorylation levels are normalized to protein levels of Erbb4 and Limk1. Data are shown as mean, and error bars represent SD; t-test, with *P = 0.0330 for p-Erbb4, and *P = 0.0201 for p-Limk1. F Canrenone (applied at 10 µM) showed no effects on frequencies and amplitudes of sIPSCs in pyramidal neurons of the prefrontal cortex (PFC). Representative traces of sIPSCs (upper) and a histogram of mean sIPSCs (lower) are shown both before and after addition of canrenone. G Spironolactone (applied at 10 µM) significantly increases frequencies (n = 12; *P = 0.0454) and amplitudes (n = 12, *P < 0.05). Figure 5. A Experimental design. Nrg1-tg and wt animals were treated daily with spironolactone (50 mg/kg, s.c.) or vehicle (n = 12 per genotype and per treatment) for 3 weeks, followed by behavioral phenotyping using the tests as indicated. Spironolactone or vehicle treatment was continued throughout the phenotyping phase. B Nrg1-tg mice travelled longer distances in the open-field arena (effect of genotype F1,44 = 10.53; P = 0.0022; two-way ANOVA). Bonferroni post hoc analysis revealed a significant genotype-dependent difference between vehicle-treated groups (**P = 0.0044) but not between spironolactone-treated groups (P = 0.3783). Genotype differences were abolished upon spironolactone treatment.
C When vehicle-treated animals were analyzed in 1-min intervals, transgenic mice showed an increased activity throughout the entire test (effect of genotype F1,22 = 9.27; P = 0.0060; two-way ANOVA), most prominent in intervals 2, 7, and 8 (*P = 0.0427, *P = 0.0385, and ***P = 0.000036, respectively, Bonferroni post hoc test). D There was no significant difference between the genotypes when treated with spironolactone (effect of genotype F1,22 = 2.19; P = 0.1535; two-way ANOVA). However, the interaction of genotype and treatment was significant (F9,198 = 1.94; P = 0.0481; two-way ANOVA). E Nrg1-tg mice treated with spironolactone spent more time in the light compartment during the light-dark test (interaction genotype × treatment F1,41 = 4.90; P = 0.0324; two-way ANOVA, and *P = 0.0219, Bonferroni post hoc test). F In the Y-maze test, transgenic mice performed fewer alternations (effect of genotype F1,44 = 11.50; P = 0.0015; two-way ANOVA). The Bonferroni test confirmed this phenotype in vehicle-treated groups (**P = 0.0011), but not in spironolactone-treated animals (P = 0.5950). Spironolactone treatment had a significant effect on the number of alternations (F1,44 = 4.12; P = 0.0484; two-way ANOVA). G Spironolactone treatment significantly enhanced PPI in Nrg1-tg mice (effect of treatment F1,21 = 5.07; *P = 0.0325; two-way repeated-measures ANOVA). Data information: Data are shown as mean, and error bars represent SEM. Spiro, spironolactone; Veh, vehicle. n = 12 per genotype and treatment with the exception of (E) (Nrg1-tg vehicle, n = 11; Nrg1-tg Spiro, n = 10; wt vehicle, n = 12; wt Spiro, n = 12) and (G) (Nrg1-tg vehicle, n = 11; Nrg1-tg Spiro, n = 12). Spironolactone, but not canrenone, enhanced inhibitory neurotransmission when applied acutely to cortical slices of wild-type mice, suggesting an ERBB4-mediated mechanism. Likewise, the ERBB4 kinase inhibitor lapatinib caused similarly increased IPSCs within the same experimental model. Increased amplitudes of mIPSCs have also been observed in conditional Nrg1 loss-of-function mutants, most likely as a consequence of reduced activity of ERBB4 in inhibitory neurons (Agarwal et al, 2014). In conditional ERBB4 mutants, however, mIPSC frequencies were reduced in the hippocampus (Fazzari et al, 2010). A recent study reports that NRG2, a close relative of NRG1, is expressed in inhibitory interneurons and activates ERBB4 cell-autonomously, causing a downregulation of NMDA receptor activity in these cells (Vullhorst et al, 2015). In such a scenario, inhibition of ERBB4 activity may indeed increase IPSCs, a hypothesis consistent with our observations for both spironolactone and the lapatinib control treatments. Overall, these findings indicate that altered NRG1/ERBB4 signaling modulates inhibitory signaling, although different adaptations may prevail in different brain regions, genetic models, and pharmacological treatments. Our biochemical analysis implicates LIMK1 signaling, but neither ERK1/2 nor AKT1, as a potential downstream effector of spironolactone treatment in Nrg1-tg mice. As a non-receptor protein serine/threonine kinase, LIMK regulates synaptic spine morphology and function by modulating cytoskeleton dynamics (Meng et al, 2002, 2004; Bennett, 2011). Further, LIMK1 has been linked to NRG1 signaling and SZ-relevant endophenotypes in a Nrg1-tg mouse model (Yin et al, 2013).
We show that phospho-LIMK1 levels were upregulated in Nrg1-tg mice treated with spironolactone, suggesting that LIMK1 activity may integrate spironolactone's inhibitory effect by promoting spine enlargement, and thus synapse formation, through controlling actin cytoskeleton dynamics. Nrg1-tg animals display subtle structural changes related to spine morphology, that is, the number of bifurcated spines is increased (Agarwal et al, 2014). Therefore, it might be possible that spironolactone treatment reverts this structural endophenotype. Nonetheless, the increased levels of p-LIMK1 rather favor a mechanism that compensates for the structural changes in Nrg1-tg mice, which may underlie network disturbances in these animals, by stimulating structural plasticity via increased LIMK1 activity. To further explore the mode-of-action of spironolactone, its impact on structural plasticity should be addressed in additional studies. Spironolactone was developed as an MR antagonist and has been clinically applied for decades as a potent and safe diuretic (Ogden et al, 1961). As brain-expressed corticoid receptors are implicated in modulating the stress response, spironolactone treatment has been tested in the context of depression and was shown to increase motivation and curiosity in mice (Wu et al, 2012). Moreover, anxiety was partially improved in a small group of patients suffering from bipolar disorder (Juruena et al, 2009), in good agreement with our finding that spironolactone affects anxiety-related behavior in Nrg1-tg mice. Acute spironolactone administration to healthy human volunteers, however, reduced memory retrieval (Zhou et al, 2011; Rimmele et al, 2013) and reportedly impaired recent fear memory formation in mice (Zhou et al, 2011). Upon chronic administration of spironolactone, however, we could not observe any detrimental effects on cognitive performance in fear and working memory tests in wild-type mice. Spironolactone-treated Nrg1-tg mice displayed a slightly reduced level of fear memory, which may be partially dependent on the anxiolytic actions of spironolactone or on the decreased pain sensitivity of transgenic mice. Nonetheless, working memory deficits of Nrg1-tg mice were rescued upon spironolactone treatment. Structure-function analysis using spironolactone metabolites and second-generation analogues revealed that the intact structure of spironolactone is paramount for inhibiting ERBB4 signaling activity. We speculate that other structural modifications of spironolactone may improve its selectivity for ERBB4 binding and concomitant inhibition. Spironolactone may therefore also serve as a template for a lead optimization process, which could produce a new molecular entity with improved characteristics to inhibit ERBB4 signaling while avoiding potentially adverse effects on the MR. Nonetheless, given our observations and the safety profile of spironolactone, a clinical study might be warranted to assess the chronic effects of spironolactone treatment in SZ patients. The oligonucleotides used for cloning are shown in the Appendix Table S2. Generation of PC12 cells stably expressing Nrg1 One million (1 × 10⁶) PC12 cells were transfected with 10 µg of either a Nrg1 type I β1a plasmid or a Nrg1 type III β1a plasmid using Lipofectamine 2000. Following an initial expression period of 24 h, 400 µg/ml G418 was applied to select stable clones, as each Nrg1 plasmid harbors a neomycin resistance gene for selection in mammalian cells.
After 2 weeks of culturing, visible PC12 cell clones were transferred into a single well of a 24-well plate. Following a recovery and expansion phase, stable Nrg1 expression was validated in a split TEV-based ERBB4-PIK3R1 co-culture assay. Positive clones were also verified by Western blot analysis. Protein lysates and Western blotting PC12 cells were transfected with the indicated plasmids using Lipofectamine 2000 (Life Technologies). After 24 h of expression, cells were treated as indicated and lysed in a 1% Triton X-100 lysis buffer (50 mM Tris [pH 7.5], 150 mM NaCl, 1% Triton X-100, 1 mM EGTA) supplemented with 10 mM NaF, 1 mM Na3VO4, 1 mM ZnCl2, and 4.5 mM Na4P2O7 as phosphatase inhibitors, and the complete protease inhibitor cocktail (Roche). Lysates from T-47D cells were analyzed for endogenous proteins only. For the analysis of cytosolic proteins, cell extracts were spun for 10 min at 4°C at 17,000 g. For the biochemical analysis of spironolactone-treated mice (for a precise description of the injection paradigm, see subheading "Mouse behavior analysis", "Spironolactone treatment"), vehicle control or spironolactone was subcutaneously injected daily for 21 days into age-matched (11-13 weeks) male mice prior to preparation of the mouse prefrontal cortex (n = 2 per genotype and treatment). For the generation of lysates, the isolated tissue was immediately placed into cooled (4°C) sucrose buffer (320 mM sucrose, 10 mM Tris-HCl, 1 mM NaHCO3, 1 mM MgCl2, supplemented with 10 mM NaF, 1 mM Na3VO4, 1 mM ZnCl2, and 4.5 mM Na4P2O7 as phosphatase inhibitors, and the complete protease inhibitor cocktail (Roche)), homogenized using an Ultra-Turrax (IKA GmbH, Staufen, Germany), sonicated (3 pulses of 10 s), and denatured for 10 min at 70°C in LDS sample buffer. Protein gels were run using the Mini-PROTEAN Tetra Electrophoresis System (Bio-Rad), and gels were blotted using the Mini-PROTEAN Tetra Electrophoresis System (Bio-Rad). Detection of proteins was performed by Western blot analysis using chemiluminescence (Western Lightning® Plus-ECL, PerkinElmer). Western blots were probed with antibodies at the dilutions shown in the Appendix Table S3. Each blot was replicated two times. Western blots were densitometrically quantified using ImageJ following the protocol openly accessible at lukemiller.org (http://lukemiller.org/index.php/2010/11/analyzing-gels-and-western-blots-with-image-j/). Immunofluorescence staining of PC12 cells One million (1 × 10⁶) PC12 cells were plated per well onto poly-L-lysine (PLL)-coated coverslips in a 6-well plate at day 1. At day 2, cells were transfected with ERBB4-NTEV-tevS-GV or ERBB4_1-685-NTEV-tevS-GV. At day 3, cells were gently washed twice by adding and removing 1× TBS (50 µl per coverslip), fixed in cold 4% PFA for 10 min, washed twice with 1× TBS, and permeabilized in TBS/0.1% Triton X-100 for 5 min. Then, cells were washed again three times in 1× TBS and blocked in blocking buffer (3% BSA, 0.1% Triton X-100 in 1× TBS) for 1 h at room temperature. Primary antibodies were diluted in blocking buffer, and cells were incubated for 1 h at room temperature. Following three washes in TBS, cells were incubated with a secondary antibody (Alexa 594 anti-rat, 1:500, Abcam, ab150160) diluted in blocking buffer for 1 h at room temperature. Coverslips were washed three times in 1× PBS, quickly dipped once into ddH2O to remove traces of salt, mounted on microscope slides, and sealed with ProLong Gold Antifade Mountant with DAPI (ThermoFisher Scientific, P36935).
Slides were stored at 4°C before being imaged on a Zeiss Observer Z.1 microscope. Compound screening and validation Cell-based split TEV assay to monitor ERBB4 activity The split TEV method is based on the functional complementation of two previously inactive TEV protease fragments, denoted NTEV and CTEV, fused to interacting proteins. It has been shown to robustly and sensitively quantify protein-protein interactions and receptor activities, as proven before for the ERBB4 receptor and the regulatory adapter subunits of the PI3K, PIK3R1, and PIK3R2 (Wehr et al, 2006, 2008). Recently, the split TEV method was also successfully applied to genome-wide RNAi screening in Drosophila cell culture, supporting its applicability to high-throughput applications (Wehr et al, 2013). For our HTS-compatible split TEV assay approach, human full-length ERBB4-Cyt1 was fused to the NTEV fragment, a TEV protease cleavage site (tevS), and the artificial co-transcriptional activator Gal4-VP16 (ERBB4-NTEV-tevS-GV); human PIK3R1 was fused to the CTEV fragment (PIK3R1-CTEV) (Fig 1A). Upon ERBB4 activation, PIK3R1 is recruited to the receptor, resulting in a reconstituted protease activity that cleaves off GV. In turn, released GV translocates to the nucleus and binds to upstream activating sequences (UAS) to activate the transcription of a firefly luciferase reporter gene (Fig 1A). A constitutively expressed Renilla luciferase under the control of the human thymidine kinase (TK) promoter was used as control to address off-target effects related to toxicity. Compound library For small molecule screening, the NIH-NCC Clinical Collection library (sets NCC-003 and NCC-201) was used, containing 727 small molecules that are FDA-approved and have a history in clinical applications (www.nihclinicalcollection.com). A Hamilton Labstar robot connected to 37°C and 4°C incubators for cell incubation and compound storage and application (Cytomat automated incubator, ThermoScientific) and to a luciferase reader (Berthold Technologies) was used to perform the screening automatically. Batch 1 (compounds 1-320) was run in quadruplicate, batch 2 (compounds 321-727) in triplicate. Each batch was screened three times. Transfection of cells To transfect large numbers of cells uniformly, PC12 cells were transfected with the split TEV assay components in solution. For one 96-well plate, 4 × 10⁶ PC12 cells were harvested and diluted in 5 ml assay medium (phenol red-free DMEM (low glucose, Life Technologies), 10% FCS, 5% HS, no antibiotics). The split TEV assay plasmids (2 µg pcDNA3_ERBB4-NTEV-tevS-GV-2xHA, 2 µg pTag4C_PIK3R1-CTEV-2xHA, 2 µg p5xUAS_firefly luciferase, 2 µg pTK_Renilla luciferase, and 0.5 µg pECFP-C1 for examining transfection efficiency) were diluted in 2.5 ml Opti-MEM (Life Technologies) and vortexed. In parallel, 20 µl of the transfection reagent Lipofectamine 2000 was diluted in 2.5 ml Opti-MEM and vortexed. Both Opti-MEM aliquots were mixed, vortexed, and incubated for 20 min at room temperature, followed by carefully mixing the DNA/Lipofectamine/Opti-MEM solution with the PC12 cells and incubating the cell suspension at 37°C and 5% CO2 for 2 h without shaking. Plating the cells For plating of one 96-well plate, 10 ml of suspension, containing the 4 × 10⁶ in-solution-transfected cells, was placed in the bubble paddle reservoir of the Hamilton Cellstar robot. 100 µl was seeded per 96-well using the 96-tip pipetting head.
The homogeneity of the cell suspension was guaranteed over time by mild stirring using the paddling device inside the reservoir. For every five plates, an additional 50 ml of cell suspension was used to compensate for inaccessible dead volume. After seeding, plates were transferred and stored in the Cytomat device at 37°C and 5% CO2. Addition of compounds The cells were allowed to express the plasmids for 24 h before compounds were added. Proper expression and transfection efficiency were verified by ECFP expression on a clear control plate. The compounds were applied at a final concentration of 10 µM using DMSO as diluent. Sixteen positions per 96-well plate (i.e., columns 1 and 12) were reserved for controls; in detail, four wells each were taken for positive controls (stimulated with 10 ng/ml EGFld (Reprokine, RKQ02297) in DMSO, 96-well positions A1 to D1), baseline controls (DMSO only, 96-well positions E1 to H1), negative controls I (100 nM CI-1033 (Canertinib dihydrochloride, Axon, 1433) in DMSO, 96-well positions A12 to D12), and negative controls II (10 µM lapatinib (Lapatinib ditosylate, Axon, 1395) in DMSO, 96-well positions E12 to H12). Thirty minutes later, 10,000 Nrg1-expressing PC12 cells in 100 µl assay medium were seeded on top. 24 h after addition of the compounds, the cells were lysed using 40 µl Passive Lysis Buffer (Promega) and subjected to a Dual Luciferase Assay (Promega) according to the manufacturer's instructions. The data were analyzed in R Bioconductor using the package cellHTS2 (http://www.bioconductor.org/packages/devel/bioc/html/cellHTS2.html), assessed using the z-score, and visualized using the program Mondrian (http://stats.math.uni-augsburg.de/mondrian/). Dose-response luciferase assays for validation Individually re-screened candidates were validated using a dose-response assay. PC12 cells were batch-transfected as described in the section "Transfection of cells", manually plated, and incubated for 24 h at 37°C and 5% CO2. Candidate small molecules were prepared in a series of dilutions using DMSO as diluent, ranging from 0.0001 to 100 µM at final concentrations, thus covering at least five orders of magnitude. Candidate dilutions were added, followed by the addition of 10,000 Nrg1-expressing cells in 100 µl volume 30 min later. Cells were lysed in 40 µl Passive Lysis Buffer and analyzed in a Dual Luciferase Assay. Data were analyzed in Excel and GraphPad Prism. For single-culture assays that used EGFld as stimulus, 100 µl assay medium containing EGFld (f.c. 10 ng/ml) was administered. The following candidates were analyzed in dose-response assays: spironolactone (Sigma-Aldrich, S3378), eplerenone (Sigma-Aldrich, E6657), canrenone (Santa Cruz Biotechnology, sc-205616), and 7α-thiomethyl-spironolactone (Santa Cruz Biotechnology, sc-207187). Dose-response assays were run in six replicates per concentration and repeated at least two times. Data are shown as mean, and error bars represent SEM. Mouse behavior analysis For behavioral testing, age-matched male mice (8-16 weeks) on a C57Bl/6 background that constitutively overexpress the 2xHA-tagged Nrg1 type III β1a isoform (Nrg1-tg) under the control of the mouse Thy1.2 promoter (Velanac et al, 2012) and their wild-type (wt) littermates as controls were used. Animals were group-housed in the same ventilated sound-attenuated rooms under a 12-h light/12-h dark schedule (lights on at 8:00 am) at an ambient temperature of 21°C with food and water available ad libitum.
One week prior to experiments, mice were separated into single cages and habituated to the experimental rooms. To minimize the influence of the circadian rhythm on drug actions, the treatment groups were analyzed at balanced time points during the light phase. The investigators performing behavioral tests were blind to genotype and/or spironolactone administration. All animal experiments were conducted in accordance with NIH principles of laboratory animal care and were approved by the Government of Lower Saxony, Germany, in accordance with the German Animal Protection Law. Spironolactone treatment 50 mg of spironolactone (Sigma-Aldrich) was initially dissolved in DMSO and suspended in 10 ml of 0.9% NaCl, 1% DMSO, and 0.002% Tween® 20. Spironolactone (5 mg/ml) and a vehicle control (0.9% NaCl, 1% DMSO, and 0.002% Tween® 20) were injected subcutaneously at a daily dose of 50 mg/kg (e.g., corresponding to a 0.3 ml injection volume for a mouse of 30 g weight) for 3 weeks prior to behavioral testing (n = 12 per genotype and per treatment). Treatment was continued throughout the behavioral analysis period. To avoid injection-induced stress prior to behavioral testing, mice were injected in the afternoon, after the entire cohort had completed the behavioral paradigm. Calculation of spironolactone dosage The calculated daily dosage of 50 mg/kg/day spironolactone for mice is based on the following assumptions. Patients are routinely treated with 400 mg/day spironolactone (Aldactone 100, Riemser Pharma; spironolactone 100, Ratiopharm). Dosages of 50 to 100 mg/day were administered to patients in long-term treatments (Juruena et al, 2009). The dosage of 400 mg per 80 kg patient body weight per day is equal to 5 mg/kg/day. Human doses are converted to mouse doses using the body surface area normalization method, which integrates various aspects of biological parameters including basal metabolism, blood volume, caloric expenditure, and oxygen utilization (Reagan-Shaw et al, 2008). For the calculation of the mouse dose (mg/kg), the human dose (mg/kg) is multiplied by the ratio human Km/mouse Km, where the human Km = 37 and the mouse Km = 3. Therefore, mice should be treated with a 12-fold higher dose. The chosen dosage of 50 mg/kg/day is slightly below the calculated maximum dose of 61.7 mg/kg/day [(400 mg/80 kg) × (37/3)/day]. The LD50 of spironolactone is > 1,000 mg/kg/day. Spironolactone is FDA-approved, has been used in patients for decades, and shows no major side effects in treated mice. Behavioral tests applied for mouse behavior analysis Open field and hole board Spontaneous locomotor activity was assessed in the open-field test using a Plexiglas box (45 × 45 × 55 cm). The same test arena was modified with a floor insert containing 16 symmetrically allocated holes for the hole board test. During a 10-min testing session, mouse behavior was monitored by infrared sensors and recorded by the ActiMot software (TSE, Bad Homburg, Germany). Levels of urination (scored as events) and defecation (scored as number of feci) were determined manually during the open-field test. Light-dark preference The light-dark preference test was conducted in a plastic chamber divided into two compartments of the same size, one with black and the other with transparent Plexiglas walls. A door-like opening in the center of the separating wall allowed transitions between both compartments. For testing, each mouse was placed into the light compartment facing away from the door and left undisturbed.
The latency to enter the dark compartment, the time spent in the dark compartment, and the number of crossings between the compartments were monitored for 5 min using the AnyMaze software. Mice that did not enter the dark compartment within 10 min were excluded from the experiment. After each session, the chambers were cleaned with 70% ethanol. Y-maze The assessment of working memory was performed using an in-house-made Y-shaped runway. Animals were placed individually into the Y-maze facing the wall and allowed to explore the maze for 10 min. The experiment was video recorded. The number of arm choices (a measure of activity) and the percentage of alternations (choices of a "novel" arm, i.e., an arm different from the one visited before, regarded as a measure of working memory) were scored and analyzed. To avoid any olfactory cues, the apparatus was cleaned with 70% ethanol between animals. Tail suspension test Mice were suspended upside down for 6 min by attaching them to a fixed rod with adhesive tape positioned at the tip of the tail. The escape motivation of a mouse was measured as the time spent active, video recorded, and scored offline. Prepulse inhibition (PPI) The startle response was measured in two test cabinets (SR-LAB, San Diego Instruments) using a protocol as described in Brzózka et al (2010). Fear conditioning Fear memory, as measured by freezing behavior, was assessed using the Ugo Basile Fear Conditioning System (Varese, Italy). Mice were placed into the animal box (furnished with a stainless steel shock grid floor) positioned inside an isolation cubicle equipped with a lamp, a loudspeaker, and an infrared camera. For conditioning, striped black-white walls were inserted into the animal box. The conditioning and fear memory assessment were performed as described in Brzózka et al (2010). Hot plate Pain sensitivity was measured in the hot plate test. Animals were placed onto a metal plate preheated to 52°C. The latency to the first reaction (hind paw licking or jumping) was scored manually. Immediately after the first response, mice were placed onto another, unheated metal plate to allow their paws to cool. Statistical analysis Statistical significance was determined using Microsoft Excel, IBM SPSS Statistics v22, and GraphPad Prism 5.0 software. Data are presented as means ± SD or SEM as indicated (n ≥ 3; for luciferase assays, n = 6). For behavioral experiments, Student's t-tests were used for comparing two data samples. If the experimental setup required a paired data analysis, the paired Student's t-test or the paired Wilcoxon signed-ranks test was used for comparing two normally or non-normally distributed data samples, respectively. Two-way ANOVA with Bonferroni post hoc test was used for the analysis of three or more samples. Two-way ANCOVA was used for the age-corrected analyses of the open-field, Y-maze, and light-dark preference tests. Repeated-measures ANOVA was used to analyze the effects of treatment in the PPI analyses. The robustness of cell-based assays was assessed using the Z' factor. Data from screening were analyzed using the cellHTS2 package available for R and evaluated using the z-score. Expanded View for this article is available online. The paper explained Problem NRG1-ERBB4 signaling is a schizophrenia risk pathway in humans, and altered signaling activity causes schizophrenia-relevant endophenotypes in transgenic mouse models.
To date, no treatment options targeting this pathway are available for schizophrenic patients.
Results: Here, we have developed a NRG1-ERBB4 pathway-selective screening assay based on the split TEV technology to monitor the activities of FDA-approved drugs for repurposing. The anti-mineralocorticoid spironolactone was identified as the top candidate from the screen to antagonize ERBB4 receptor activity. Spironolactone's effect was biochemically validated both in vitro and in vivo, and it was found to improve schizophrenia-relevant behavioral deficits in a Nrg1 transgenic mouse model.
Impact: We provide preclinical evidence for an approved drug that may immediately qualify for a clinical study in schizophrenic patients.
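As a quick numerical cross-check of the dosage calculation described in the Methods above, the short Python sketch below reproduces the body-surface-area conversion; the variable names and structure are ours and purely illustrative, not from the original study.

```python
# Back-of-the-envelope check of the human-to-mouse dose conversion
# (body surface area normalization via Km factors; Reagan-Shaw et al., 2008).
# Names are illustrative, not from the original study code.

HUMAN_KM = 37   # Km factor for an adult human
MOUSE_KM = 3    # Km factor for a mouse

def human_to_mouse_dose(human_dose_mg_per_kg: float) -> float:
    """Convert a human dose (mg/kg) to a mouse dose (mg/kg) via the Km ratio."""
    return human_dose_mg_per_kg * (HUMAN_KM / MOUSE_KM)

human_dose = 400 / 80                    # 400 mg/day for an 80 kg patient = 5 mg/kg/day
print(human_to_mouse_dose(human_dose))   # ~61.7 mg/kg/day, the calculated maximum dose

# Injection volume for the chosen 50 mg/kg dose at a 5 mg/ml stock:
dose_mg = 50 * 0.030                     # 50 mg/kg for a 30 g mouse = 1.5 mg
print(dose_mg / 5)                       # 0.3 ml, matching the Methods
```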
An Approach to the Construction of a Recursive Argument of Polynomial Evaluation in the Discrete Log Setting : Succinct Non-interactive Arguments of Knowledge (SNARKs) are receiving a lot of attention as a core privacy-enhancing technology for blockchain applications. Polynomial commitment schemes are important building blocks for the construction of SNARKs. Polynomial commitment schemes enable the prover to commit to a secret polynomial and later convince the verifier that the evaluation of the committed polynomial at a public point is correct. Bünz et al. recently presented a novel polynomial commitment scheme with no trusted setup at Eurocrypt '20. To provide a transparent setup, their scheme is built over an ideal class group of imaginary quadratic fields (or briefly, a class group). However, cryptographic assumptions on class groups are relatively new and have, thus far, not been well analyzed. In this paper, we study an approach to transpose Bünz et al.'s techniques to the discrete log setting, because the discrete log setting brings a significant improvement in efficiency and security compared to class groups. We show that the transposition to the discrete log setting can be obtained by employing a proof system for the equality of discrete logarithms over multiple bases. Theoretical analysis shows that the transposition preserves the security requirements for a polynomial commitment scheme.
Introduction
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) are non-interactive proof systems between a prover and a verifier. They provide a way for the prover to convince the verifier that a statement claimed by the prover is true, without disclosing any information beyond the validity of the statement, while maintaining a short proof size and an efficient verification by the verifier. Since their adoption in cryptocurrency systems such as Zcash [1] and Ethereum [2], zk-SNARKs are regarded as an essential technique for solving data privacy issues in blockchain-based applications. There have been numerous SNARK proposals in the literature. Some constructions present very efficient proof systems with the help of a trusted setup [3][4][5]. Because the transparent property is desirable for applications such as cryptocurrency, recent constructions [6][7][8] have focused on proof systems with a transparent setting, i.e., they have no trusted setup. The construction of SNARKs with no trusted setup heavily relies on a transparent and efficient polynomial commitment scheme. At a high level, transparent zk-SNARKs can be constructed using the framework from polynomial interactive oracle proofs (IOP) [3,6] as follows: (1) The prover expresses the computation required for proving a statement as a set of low-degree polynomials over a finite field F, which is a representation of its witness. (2) The prover sends commitments to the low-degree polynomials to the verifier, and the verifier then checks the proof by querying evaluations of the polynomials at points chosen uniformly at random from F, where we crucially require a polynomial commitment scheme. (3) Finally, one can obtain the non-interactive version of the previous proof system by applying the Fiat-Shamir heuristic [9]. In this paper, we focus on polynomial commitment schemes. Let f(X) be the prover's secret polynomial over a field F with degree at most d, i.e., deg(f) ≤ d. In polynomial commitment schemes, the prover sends the commitment to f to the verifier.
Later, upon input of a public point (x, y) ∈ F × F, the prover convinces the verifier with a proof that the committed polynomial f satisfies y = f(x). We call a polynomial commitment scheme transparent if it requires no trusted setup to generate the public parameters for the scheme. Since the first construction was developed by Kate et al. [10], a variety of polynomial commitment schemes have been proposed in the literature. For polynomial commitment schemes, the main factors of efficiency consist of the computation complexities of the prover (prover complexity) and verifier (verifier complexity), and the communication complexity between them. Usually, constructions with a trusted setup provide higher efficiency than those with a transparent setting. Recently, Bünz et al. [6] proposed an efficient polynomial commitment scheme with a transparent setting. Asymptotically, it achieves a logarithmic verifier complexity and proof size for evaluation (communication complexity). In brief, it improves efficiency by applying an evaluation protocol in a recursive manner. It reduces the degree of a polynomial f by half at each iteration; hence, there are log deg(f) iterations overall. Transparency in the scheme relies on the use of a group of unknown order, whose concrete candidate is an ideal class group of imaginary quadratic fields. The security of a group of unknown order stems from the infeasibility of computing the order of the group. Previous cryptographic constructions over a class group considered concrete group parameters, such as a 1665-bit negative fundamental discriminant for 128-bit security [11,12], which was used in Bünz et al.'s scheme [6]. However, recent works report that the above parameters for class groups provide less security than expected. Notably, Dobson and Galbraith estimate that class groups with a 1665-bit discriminant only offer 55-bit security [13]. They therefore claim that the order of a random class group should be at least 2^{3328} for a 128-bit security level. These parameters correspond to approximately a 6656-bit discriminant. This leads to decreased efficiency for cryptographic primitives based on class groups. In this paper, we put forward a study to overcome the efficiency degradation of Bünz et al.'s construction caused by the use of class groups. To do this, we focus on transposing their techniques to the discrete log setting while preserving the absence of a trusted setup. This approach brings two advantages. First, the (elliptic curve) discrete log problem is one of the standard cryptographic assumptions, as opposed to the order assumption of class groups. To date, its security has been well understood. Second, the group operation in the discrete log setting (e.g., elliptic curve groups) is much more efficient than that in class groups, which significantly reduces the actual computation cost for both the prover and the verifier. In addition, a group element in the discrete log setting is shorter than that in class groups. This advantage cuts the cost of bandwidth spent by the prover and the verifier when applying the evaluation protocol of a polynomial commitment scheme. Our approach is built on an information-theoretic abstraction given in [6] to construct a polynomial commitment scheme. The abstraction requires two properties, a linear homomorphism and a monomial homomorphism, which the underlying commitment scheme should provide.
These two properties enable the verifier to apply computations among polynomials over their committed forms, such as a linear combination (a linear homomorphism) and a degree-shift operation (a monomial homomorphism) of polynomials. The two properties are necessary for an evaluation protocol using a recursive call, which is critical in achieving logarithmic verifier and communication complexities. To realize these properties in a discrete log setting, we utilize a polynomial encoding method devised by Bootle et al. [14]. This method uses a variant of the Pedersen commitment scheme [15], which naturally provides a linear homomorphism. Unfortunately, however, the Pedersen commitment scheme is not monomial homomorphic, a property that is easily obtained in a class group-based scheme [6]. Thus, we focus our attention on the study of a discrete log-based proof system to prove that a monomial homomorphism is verifiably computed in the discrete log setting. The contribution of this work is as follows. • We clarify a proof system that proves the correct computation of a monomial homomorphism in the discrete log setting. Specifically, we show that it suffices to have a proof system to check the equality of discrete logarithms over multiple bases, say PoE_mDL. Given two subsets {g_1, ..., g_d} and {h_1, ..., h_d} of a group G, PoE_mDL allows the prover to convince the verifier that ∏_{i=1}^{d} g_i^{a_i} and ∏_{i=1}^{d} h_i^{b_i} have equal exponents, i.e., a_i = b_i for i = 1, ..., d, without disclosing the raw exponents. A number of studies on PoE_mDL have been carried out independently of the construction of polynomial commitment schemes. This work bridges two rather independent proof systems and provides a blueprint for combining these proof systems to construct an efficient, transparent polynomial commitment scheme in the discrete log setting. • We propose a recursive argument to show correct polynomial evaluation by employing PoE_mDL. Our approach is to transpose a recursive argument from a class group in [6] to the discrete log setting. We present a security analysis to demonstrate the completeness and soundness of the proposed protocol. In addition, we present a zero-knowledge version of the obtained polynomial commitment scheme. The zero-knowledge version ensures that no information about the prover's secret polynomial f(X) is leaked while the prover convinces the verifier that y = f(x) holds for a point (x, y). The remainder of this paper is organized as follows. In Section 2, we review related works. In Section 3, we provide the background on the hardness assumption and the building blocks for polynomial commitment schemes. In Section 4, we present our approach to transpose Bünz et al.'s techniques to the discrete log setting and investigate a sub-routine protocol as a sufficient condition for our approach. In Section 5, we discuss the performance and security of our approach. In Section 6, we extend the polynomial commitment scheme of the previous section to a version with a zero-knowledge evaluation protocol. Finally, we provide some concluding remarks in Section 7.
Related Work
A lot of recent research on polynomial commitment schemes has been carried out in the context of Succinct Non-interactive ARguments of Knowledge (SNARKs). In particular, a polynomial commitment scheme provides a key tool to generate a zk-SNARK from a polynomial interactive oracle proof (IOP) [3,6]. Kate et al. first constructed efficient and succinct polynomial commitment schemes for univariate polynomials [10].
The construction is based on bilinear pairings over elliptic curves and requires a trusted setup. Its extension to multivariate polynomials has been proposed by Papamanthou et al. [16] and Zhang et al. [17]. Zhang et al. [18] also presented the zero-knowledge version of their work [17]. These schemes all use bilinear pairings and require a trusted setup. Associated with transparent SNARKs, polynomial commitment schemes with a transparent setting have received significant attention, and, along with the previously mentioned constructions, many schemes can be found in the literature. Bootle et al. [14] constructed a transparent polynomial commitment scheme in the discrete log setting. They represent a polynomial of degree d as a matrix with √d rows and columns and then write a polynomial evaluation as matrix multiplications. This leads to an O(√d) commitment size, verifier complexity, and communication complexity. Wahby et al. presented a transparent polynomial commitment scheme [7] for multilinear polynomials under the discrete log assumption. The scheme is built on the ideas of the matrix commitment of Bootle et al. [14] and the inner-product argument of Bünz et al. [19]. For a polynomial of degree d, an O(√d) commitment size, verifier complexity, and communication complexity are required. Ben-Sasson et al. [20] introduced the Fast Reed-Solomon IOP of Proximity (FRI), which implicitly yields a transparent polynomial commitment scheme. Kattis et al. [8] and Zhang et al. [21] independently presented a method for obtaining polynomial commitment schemes from FRI. Their construction has O(λ)-size commitments for the security parameter λ and O(log² d) communication complexity, and it supports quantum resistance. In addition, Lee [22] proposed a multivariate polynomial commitment scheme with a transparent setting using pairing-based commitments. The scheme builds on the inner-product arguments given in Bootle et al. [14] and Bünz et al. [6]. Recently, Boneh et al. [23] studied additive polynomial commitment schemes, where commitments form an additive group [6,10,14,19,22]. They showed that the additive property yields a batch evaluation of polynomial commitments, which can be used for the efficient construction of SNARKs. Groups of unknown order provide a mathematical structure for interesting cryptographic applications, such as delay functions [24], accumulators [25], and polynomial commitment schemes [6]. Most cryptographic applications consider two candidate groups of unknown order, i.e., RSA groups [26] and ideal class groups of imaginary quadratic fields [27]. RSA groups assume a trusted setup in generating the RSA modulus and hence do not meet our current interest. By contrast, class groups do not require a trusted setup and thus have been used in recent constructions with a transparent setting [6,24,25]. Dobson and Galbraith [13] analyzed the security of the candidate parameters for class groups proposed in [11,12]. They argued that the parameters in [11,12] do not meet the desired security level and presented much larger parameters, which lead to an extremely large size-up of commitments in previous constructions. In this line of research, Belabas et al. [28] recently reported that the order assumption in class groups of imaginary quadratic fields does not hold for certain special classes of prime numbers. Some studies have explored alternative source groups of unknown order with a transparent setting.
As an example, Dobson and Galbraith [13] suggested the Jacobian of hyperelliptic curves of genus 3, whereas Lee [29] pointed out that the order of the Jacobian of a hyperelliptic curve can be efficiently computed.
Preliminaries
Throughout the paper, λ denotes the security parameter written in unary. The function negl : N → [0, 1] denotes a negligible function, i.e., negl(λ) = λ^{−ω(1)}. For a set S, we use e ←$ S to denote that an element e is sampled uniformly at random from S. For a probabilistic algorithm A, we write y ← A(x) to denote that y is returned as the result of A on input x together with a randomness r picked internally.
The Discrete Logarithm Assumptions
Let Ggen be an algorithm that takes on input λ and returns a λ-bit prime number p, a cyclic group G of order p, and a generator g of G. Definition 1 (Discrete Logarithm Assumption). The discrete logarithm assumption holds relative to Ggen if for any polynomial-time adversary A, Pr[g^a = h : (G, p, g) ← Ggen(1^λ), h ←$ G, a ← A(G, p, g, h)] ≤ negl(λ). Definition 2 (Discrete Logarithm Relation Assumption). The discrete logarithm relation assumption holds relative to Ggen if for any polynomial-time adversary A, Pr[∏_{i=0}^{n} g_i^{a_i} = 1 ∧ (a_0, ..., a_n) ≠ (0, ..., 0) : (G, p, g) ← Ggen(1^λ); g_0, ..., g_n ←$ G; (a_0, ..., a_n) ← A(G, p, g_0, ..., g_n)] ≤ negl(λ). In the above definition, ∏_{i=0}^{n} g_i^{a_i} = 1 for some a_i ≠ 0 is called a non-trivial discrete logarithm relation. It is well known that the discrete logarithm relation assumption is equivalent to the discrete logarithm assumption [14].
Zero-Knowledge Arguments of Knowledge
Let R ⊂ S × W be a polynomial-time-decidable binary relation. s ∈ S and w ∈ W are called a statement and a witness, respectively. We define L_R as the set {s ∈ {0,1}* : ∃ w ∈ {0,1}* such that (s, w) ∈ R}, which is called the language of R. We consider an argument system for a relation R consisting of three probabilistic polynomial-time algorithms (Pgen, P, V). A non-interactive algorithm Pgen takes the security parameter λ as an input and returns a common reference string (crs) pp. P and V are called a prover and a verifier, respectively, and both are interactive algorithms. In addition, P takes as input a triple of pp, a statement s ∈ S, and a witness w ∈ W. Moreover, V takes as input a pair of pp and a statement s ∈ S and outputs 0 or 1. We denote the transcript produced by P and V in an interaction by tr ← ⟨P(pp, s, w), V(pp, s)⟩ and write ⟨P(pp, s, w), V(pp, s)⟩ = b to denote that V outputs b at the end of the interaction. Definition 3 (Argument of Knowledge). We call the triple (Pgen, P, V) an argument of knowledge for relation R if it has completeness and witness-extended emulation, as defined below. Definition 4 (Perfect Completeness). (Pgen, P, V) has perfect completeness if for all non-uniform polynomial-time adversaries A, Pr[(s, w) ∉ R or ⟨P(pp, s, w), V(pp, s)⟩ = 1 : pp ← Pgen(1^λ), (s, w) ← A(pp)] = 1. Definition 5 (Witness-Extended Emulation [30,31]). (Pgen, P, V) has witness-extended emulation if for every deterministic polynomial-time prover P* there exists an expected polynomial-time emulator E such that for all non-uniform polynomial-time adversaries A, the difference between the following two probabilities is less than or equal to negl(λ): Pr[A(tr) = 1 : pp ← Pgen(1^λ), (s, st) ← A(pp), tr ← ⟨P*(pp, s, st), V(pp, s)⟩] and Pr[A(tr) = 1 and (tr is accepting ⟹ (s, w) ∈ R) : pp ← Pgen(1^λ), (s, st) ← A(pp), (tr, w) ← E^{⟨P*(pp, s, st), V(pp, s)⟩}(pp, s)], where the oracle called by E^{⟨P*(pp, s, st), V(pp, s)⟩} permits rewinding to any round and running again on fresh verifier randomness, and st is the initial state of P*. Definition 6 (Public Coin). An argument system (Pgen, P, V) is called public coin if the verifier chooses its messages uniformly at random and independently of the messages sent by the prover, i.e., the challenges correspond to the verifier's randomness.
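To make these notions concrete, the toy Python sketch below (our illustration, not part of the paper's construction) runs the classic Schnorr protocol, a three-round public-coin argument of knowledge of a discrete logarithm. The parameters are deliberately tiny and offer no security; a real instantiation would use a standard elliptic curve group.

```python
import secrets

# Toy Schnorr protocol: a three-round public-coin argument of knowledge
# of a discrete logarithm. Parameters are far too small to be secure.

p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

w = secrets.randbelow(q)         # prover's witness: w such that h = g^w
h = pow(g, w, p)                 # public statement

# Round 1 (prover): commit to a random nonce.
r = secrets.randbelow(q)
a = pow(g, r, p)

# Round 2 (verifier): a public-coin challenge, i.e., fresh randomness
# chosen independently of the prover's message (Definition 6).
c = secrets.randbelow(q)

# Round 3 (prover): response.
z = (r + c * w) % q

# Verification: g^z == a * h^c, which always holds for an honest prover
# (perfect completeness, Definition 4).
assert pow(g, z, p) == (a * pow(h, c, p)) % p
```

Rewinding such a prover to obtain two accepting transcripts (a, c, z) and (a, c', z') with c ≠ c' yields the witness as w = (z − z')/(c − c') mod q; this two-transcript extraction pattern is exactly the one behind the witness-extended emulation arguments used later in the paper.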
We recall special honest verifier zero-knowledge, which states that the view of the verifier can be simulated if the verifier follows the protocol honestly and if the challenges made by the verifier are known in advance. Definition 7 (Perfect SHVZK). A public coin argument system (Pgen, P, V) is called a perfect special honest verifier zero-knowledge (SHVZK) argument for relation R if there exists a probabilistic polynomial-time simulator Sim such that for all interactive non-uniform polynomial-time adversaries A, Pr[(s, w) ∈ R and A(tr) = 1 : pp ← Pgen(1^λ), (s, w, ρ) ← A(pp), tr ← ⟨P(pp, s, w), V(pp, s; ρ)⟩] = Pr[(s, w) ∈ R and A(tr) = 1 : pp ← Pgen(1^λ), (s, w, ρ) ← A(pp), tr ← Sim(pp, s, ρ)], where ρ is the public coin randomness used by the verifier. The general forking lemma [6,14] is useful for proving that an argument system has witness-extended emulation. Consider a public coin interactive argument system with r rounds. We view ∏_{i=1}^{r} n_i distinct accepting transcripts as having a tree format with depth r and ∏_{i=1}^{r} n_i leaves, which we call an (n_1, ..., n_r)-tree. For 1 ≤ i ≤ r, let c_i be the i-th round challenge, chosen among exactly n_i ≥ 1 values. The root node is labeled with a statement s and has exactly n_1 children, each labeled with a distinct value for c_1, where each edge from the root to a child is labeled with a message from the prover to the verifier on c_1. Similarly, each node at depth 1 ≤ i ≤ r − 1 is labeled with a distinct value for c_i and has n_{i+1} children labeled with distinct values for c_{i+1}, where each edge from c_i to c_{i+1} is labeled with a message from the prover to the verifier on c_i. Note that each path from the root to a leaf then corresponds to an accepting transcript. Lemma 1 (General Forking Lemma [6,14]). Let (Pgen, P, V) be a public coin argument system for relation R with r rounds. Let χ be a witness extraction algorithm that succeeds with overwhelming probability in extracting a witness from an (n_1, ..., n_r)-tree of accepting transcripts in probabilistic polynomial time. If ∏_{i=1}^{r} n_i is bounded above by a polynomial in the security parameter λ, then (Pgen, P, V) has witness-extended emulation.
Commitment Schemes
We review the definitions and security properties regarding polynomial commitment schemes. In the following, we use a tuple (a_0, ..., a_n; b_0, ..., b_n) for the arguments or the returned tuple of the prover P and the verifier V. In a tuple, (a_0, ..., a_n) before the semicolon denotes public variables known to both P and V, and (b_0, ..., b_n) after it denotes secret variables known only to P. Definition 8 (Commitment Scheme). A commitment scheme is a triple (Cgen, Commit, Open) of probabilistic polynomial-time algorithms defined as follows: • pp ← Cgen(1^λ) takes the security parameter λ on input and outputs the public parameter pp, which specifies a message space, a randomness space, and a commitment space; • (c; r) ← Commit(pp; m, r) takes a secret message m and optional randomness r chosen uniformly at random on input and returns a commitment c and (optionally) a secret opening hint r; • b ← Open(pp, c, m, r) takes a commitment c, a message m, and an opening hint r on input and returns 1 if it accepts the opening and 0 otherwise. A commitment scheme is hiding if for all non-uniform polynomial-time adversaries A = (A_0, A_1), Pr[b = b' : pp ← Cgen(1^λ), (m_0, m_1, st) ← A_0(pp), b ←$ {0,1}, (c; r) ← Commit(pp; m_b), b' ← A_1(st, c)] ≤ 1/2 + negl(λ). In a polynomial commitment scheme, V additionally checks whether the evaluation at any point is correct with respect to the committed polynomial f(X) given by P. The definition of polynomial commitment schemes below is given by Bünz et al. [6], which extends that of Kate et al. [10]. Definition 9 (Polynomial Commitment Scheme [6,10]). Let (Cgen, Commit, Open) be a commitment scheme for a message space R[X] over a ring R.
A polynomial commitment scheme additionally consists of a protocol Eval as follows: b ← Eval(pp, c, x, y, d; f(X), r) is an interactive public coin protocol between P and V. Both P and V have as input a commitment c, points x, y ∈ R, and a degree d. In addition, P knows the opening of c to a secret polynomial f(X) ∈ R[X] with deg(f(X)) ≤ d and a secret opening hint r. P convinces V that f(x) = y by applying the protocol.
Privacy-Preserving Blockchain with SNARKs
Recently, SNARKs have been receiving a lot of attention from the blockchain industry as a solution for balancing privacy and publicly verifiable integrity. For instance, Zcash employs SNARKs to provide Bitcoin with user anonymity and privacy of transaction data with anonymous coins [1]. SNARKs are also used to verify Ethereum smart contracts over private input [2]. Figure 1 presents a high-level architecture of privacy-preserving blockchains with SNARKs. A typical way that SNARKs are used in blockchains is as follows. The real data is stored in off-chain storage. The data posted to the on-chain blockchain (blockchain ledger) consist of the commitment to the transaction and a proof that the target transaction is valid. Cryptographic commitment schemes ensure that it is very difficult to obtain the original input value from the committed value, and the proof generated using SNARKs can be verified by any node in the blockchain network. Therefore, the privacy problem is solved because the data is hidden in the public on-chain blockchain. In addition, since zero-knowledge techniques provide fast verification, they are being used in various ways to improve the performance and minimize the size of the blockchain. It is worth noting that a polynomial commitment scheme is a key building block to compile a polynomial IOP system, which is a formal representation of a proving statement, into a SNARK [3,6].
Our Approach
In this section, we present our approach to transposing Bünz et al.'s techniques to the discrete log setting. We investigate and identify a sufficient condition for the transposition. Specifically, we show how to employ a proof system for the equality of discrete logarithms. For the rest of this paper, we encode a polynomial f(X) = ∑_{i=0}^{d−1} f_i X^i over a field F into a vector f = (f_0, ..., f_{d−1}) ∈ F^d. For a group G, let g be a vector (g_0, ..., g_{ℓ−1}) ∈ G^ℓ for some positive integer ℓ. For d ≤ ℓ, we denote the multi-exponentiation ∏_{i=0}^{d−1} g_i^{f_i} by g^f. When it is clear from the context, we write the commitment to a polynomial f(X) as Commit(f) instead of Commit(pp; f(X), r) for the sake of convenience. We also take the finite field F to be Z_p for a prime p. Table 1 presents the notations frequently used in the paper: [G] denotes the computation cost of a group operation in G; f(X) and deg(f(X)) denote a polynomial f(X) ∈ F[X] and its degree, respectively; f_L(X) and f_R(X) denote the left and right half parts of a polynomial f(X), respectively; and f denotes the vector representation of f(X).
Bünz et al.'s Abstraction
Bünz et al. [6] presented a polynomial commitment scheme for their construction of SNARKs. The proposed scheme operates in a recursive way by reducing the degree of the polynomial f by half during each iteration; hence, there are log deg(f) iterations overall. More precisely, given a polynomial f(X) = ∑_{i=0}^{d} f_i X^i of odd degree d, the prover splits it into two polynomials, both of degree roughly d/2, satisfying f(X) = f_L(X) + X^{(d+1)/2} · f_R(X). (1) The prover then sends the verifier the commitments Commit(f_L) and Commit(f_R) to f_L(X) and f_R(X), respectively.
At the end of each iteration, the prover takes the next input polynomial as f'(X) = α · f_L(X) + f_R(X) (2) for a random α received from the verifier. Because the verifier needs to check whether f(x) = y from the committed polynomials, the verifier should be able to homomorphically compute a committed form of the current f(X) from Commit(f_L) and Commit(f_R) to see whether (1) holds. The verifier also needs to compute a commitment to the polynomial in (2) for the next iteration from Commit(f_L) and Commit(f_R). To support the computation of the committed form, Bünz et al. [6] define the following two abstract properties: • Commit(f)^a · Commit(g)^b = Commit(a·f + b·g) for polynomials f and g and scalars a and b (a linear homomorphism); • Commit(f) can be transformed into Commit(X^k · f) for a public integer k (a monomial homomorphism).
Base Commitment Scheme to Polynomial
We construct a polynomial commitment scheme based on a generalization of the Pedersen commitment scheme [32]. In the generalized Pedersen commitment, pp consists of a group G of a prime order p and group elements g, g_0, ..., g_{n−1}, and a commitment to a message m = (m_0, ..., m_{n−1}) with randomness r is computed as Commit(pp; m, r) = g^r · ∏_{i=0}^{n−1} g_i^{m_i}. It is well known that the generalized Pedersen commitment is perfectly hiding and computationally binding under the discrete logarithm relation assumption [32]. It is also important to note that the generalized Pedersen commitment scheme is homomorphic, i.e., Commit(pp; m, r) · Commit(pp; m', r') = Commit(pp; m + m', r + r'). We consider a commitment scheme that does not use the randomness in the generalized Pedersen commitment scheme, as follows: • Cgen(1^λ): On input of the security parameter λ, it first samples G ← Ggen(1^λ) of a prime order p of length λ. It then chooses g_0, ..., g_d ←$ G and returns pp = (G, p, g), where g = (g_0, ..., g_d). • Commit(pp; f(X)): On input a polynomial f(X) = ∑_{i=0}^{d} f_i X^i, it returns the commitment c ← ∏_{i=0}^{d} g_i^{f_i} = g^f. • Open(pp, c, f(X)): On input c and f(X), the verifier computes c' ← ∏_{i=0}^{d} g_i^{f_i} and checks if c' = c in G. Because the generalized Pedersen commitment scheme is computationally binding under the discrete logarithm relation assumption, so is the commitment scheme above.
Evaluation Protocol
As presented in Section 4.1, Bünz et al.'s approach requires two properties (a linear homomorphism and a monomial homomorphism) of the underlying commitment scheme, which are crucial for the verifier to compute (1) from the commitments to the polynomials f_L and f_R on the right-hand side. However, our base commitment scheme does not provide a monomial homomorphism, while it immediately satisfies the linear homomorphic property. A monomial homomorphic commitment scheme is not known thus far in the discrete log setting with no trusted setup. This is because a monomial homomorphic property may require some special structure of the base elements in the group, which is impossible to generate without a trusted setup. Thus, our approach focuses on providing a way to check the integrity of f_L and f_R, i.e., f = f_L + X^{d/2} · f_R, using the linear homomorphic property only. The idea behind our approach is presented in Figure 3. To avoid the monomial homomorphism, our approach simply lets the prover send one additional commitment to X^{d/2} · f_R besides the two commitments to f_L and f_R, so that f = f_L + X^{d/2} · f_R can be verified using the linear homomorphic property. Figure 3. Our approach for the recursive argument. Aside from the two commitments Commit(f_L) and Commit(f_R), the verifier additionally receives Commit(X^{d/2} · f_R) to confirm that they properly come from the input polynomial f using the linear homomorphic property only. We present the evaluation protocol Eval in Algorithm 1, which is a transposition of Bünz et al.'s construction [6] to the discrete log setting.
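Before walking through Algorithm 1, the following toy Python sketch (our illustration; the group parameters are deliberately tiny and insecure) shows the non-hiding base commitment g^f, the consistency check c = c_L · c_RR obtained by committing f_R under the shifted bases, and the linear-homomorphic fold c' = c_L^α · c_R performed at each iteration.

```python
import secrets

# Toy sketch of the non-hiding base commitment and the split/fold step of
# Eval. Illustrative parameters only; a real instantiation would use an
# elliptic curve group such as curve25519.

p, q = 2039, 1019                        # p = 2q + 1; subgroup order q

def rand_gen():
    """Sample a random generator of the order-q subgroup (squares mod p)."""
    while True:
        x = secrets.randbelow(p - 2) + 2
        y = pow(x, 2, p)                 # squaring lands in the subgroup
        if y != 1:
            return y

d = 7                                    # degree bound (d + 1 even here)
gs = [rand_gen() for _ in range(d + 1)]  # pp = (G, p, g_0, ..., g_d)

def commit(coeffs, bases):
    """Commit(pp; f) = prod_i bases[i]^coeffs[i], the multi-exponentiation g^f."""
    c = 1
    for fi, gi in zip(coeffs, bases):
        c = (c * pow(gi, fi, p)) % p
    return c

f = [secrets.randbelow(q) for _ in range(d + 1)]   # f(X), deg <= d
half = (d + 1) // 2
fL, fR = f[:half], f[half:]                        # f = f_L + X^half * f_R

c   = commit(f, gs)
cL  = commit(fL, gs[:half])
cRR = commit(fR, gs[half:])                        # commitment to X^half * f_R under shifted bases
assert c == (cL * cRR) % p                         # the check c = c_L * c_RR

# Fold for the next iteration: f' = alpha * f_L + f_R, tracked homomorphically.
alpha = secrets.randbelow(q)
cR = commit(fR, gs[:half])
f_next = [(alpha * a + b) % q for a, b in zip(fL, fR)]
assert commit(f_next, gs[:half]) == (pow(cL, alpha, p) * cR) % p
```

The last assertion is precisely the linear homomorphism that lets the verifier track the folded polynomial without ever seeing its coefficients; what the sketch cannot show is that c_R and c_RR hide the same exponents, which is exactly the job of PoE_mDL.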
In the Eval protocol, the prover sends one additional commitment c_RR ← Commit(X^{d'+1} · f_R) (Line 10). The verifier is then able to compute Commit(f) = Commit(f_L) · Commit(X^{d'+1} · f_R) using the linear homomorphic property of the underlying commitment scheme (Line 13). However, because the polynomial f_R(X) is committed to c_R and c_RR independently, the prover is required to prove that c_R and c_RR are generated from the same polynomial f_R(X). More precisely, given the public parameter pp = {g_0, ..., g_d} and the two target instances c_R = ∏_{i=0}^{d'} g_i^{(f_R)_i} and c_RR = ∏_{i=0}^{d'} g_{d'+1+i}^{(f_R)_i}, the prover needs to convince the verifier that c_R and c_RR have the same exponents. Thus, we require a proof for the equality of discrete logarithms, which is invoked as a sub-protocol PoE_mDL in the Eval protocol (Line 12). PoE_mDL takes pp, c_R, c_RR, and deg(f_R(X)) ≤ d' on input and returns 1 if c_R and c_RR have the same exponents and 0 otherwise. If the returned value is 0, V aborts the Eval protocol because P is a cheating prover. We remark that several cryptographic protocols for PoE_mDL have been proposed [33,34]. The works on PoE_mDL have been developed independently from the construction of polynomial commitment schemes.
Algorithm 1: Eval(pp, c, x, y, d; f(X))
1: if d = 0 then
2:   P sends f(X) = f_0 to V
3:   V checks that c = g_0^{f_0} in G
4:   V checks that y = f_0 in Z_p
5:   V returns 1 if all checks pass, 0 otherwise
6: else
7:   if d + 1 is odd, P and V set d' ← d + 1 and run Eval(pp, c, x, y, d'; f(X)); otherwise they continue
8:   P and V compute d' ← ⌊d/2⌋; P computes f_L(X) and f_R(X) such that f(X) = f_L(X) + X^{d'+1} · f_R(X)
9:   P computes c_L ← ∏_{i=0}^{d'} g_i^{(f_L)_i}, c_R ← ∏_{i=0}^{d'} g_i^{(f_R)_i}, and c_RR ← ∏_{i=0}^{d'} g_{d'+1+i}^{(f_R)_i} in G
10:  P sends c_L, c_R, and c_RR to V
11:  P computes y_L ← f_L(x) and y_R ← f_R(x) in Z_p and sends them to V
12:  P and V run PoE_mDL(pp, c_R, c_RR, d'); V returns 0 if the sub-protocol rejects
13:  V checks that c = c_L · c_RR in G and returns 0 if the equation does not hold
14:  V checks that y = y_L + y_R · x^{d'+1} in Z_p and returns 0 if the equation does not hold
15:  V chooses α ←$ Z_p and sends it to P
16:  P computes f'(X) ← α · f_L(X) + f_R(X) // deg(f'(X)) ≤ d'
17:  P and V compute c' ← c_L^α · c_R in G and y' ← α · y_L + y_R in Z_p, and run Eval(pp, c', x, y', d'; f'(X))
Discussion: Performance & Security Analysis
Let Π = (Cgen, Commit, Open, Eval) be the polynomial commitment scheme described in Section 4. In this section, we analyze the performance and security of the Eval protocol.
Performance
We analyze the efficiency of our approach in comparison with recently proposed schemes with a transparent setting found in the literature. For a concrete performance analysis, we borrow the examples of groups and parameters at the 128-bit security level given by Lee [22], which are presented in Table 2. In Table 2, G denotes a cyclic group of known order, which is implemented by curve25519-dalek [35]. An imaginary class group G_U [36] is taken as an example group of unknown order. The discriminant of G_U is fixed as ∆ = −(2^{6656} − 26,745), which is estimated to offer the 128-bit security level [13]. This is implemented by ANTIC [37]. For a pairing-based construction, G_1, G_2, and G_T denote the two source groups and the target group of a pairing P. The groups of the pairing are implemented by RELIC [38] over the curve BLS12-381 [39]. We analyze the efficiency of our approach. Let |G| and |PoE_mDL| be the size of an element in G and the communication complexity of PoE_mDL, respectively. Let [G], [PoE_mDL]_P, and [PoE_mDL]_V be the computation cost of an operation in G, the prover's computation cost for PoE_mDL, and the verifier's computation cost for PoE_mDL, respectively. Below we focus on the dominating terms for each complexity, comprising the transmission and operations of group elements, i.e., we neglect operations over the field Z_p. The Eval protocol makes recursive calls roughly log d times. The messages between the prover and the verifier consist of log d rounds of three elements in G, plus the messages of PoE_mDL. Thus, the communication complexity is equal to 3 log d · |G| + |PoE_mDL|. The prover applies, log d times, three multi-exponentiations [40] of roughly d/2 size and one operation over G, plus PoE_mDL on the prover's side.
This leads to O(d) · [G] + [PoE_mDL]_P computation complexity for the prover. The verifier applies, log d times, one exponentiation and two operations over G, plus PoE_mDL on the verifier's side, which leads to O(log d) · [G] + [PoE_mDL]_V computation complexity overall. The size of the public parameter pp is d · |G| plus the public parameters of PoE_mDL. We now provide, in Table 3, a comparison of polynomial commitment schemes with a transparent setting [6,22] that achieve a logarithmic verifier complexity. The table focuses on the dominating terms for each complexity, comprising the transmission and operations of group elements. As mentioned above, we apply multi-exponentiation techniques [40] to both our approach and Bünz et al.'s construction to reduce the prover complexity by a factor of log d. In the case of Bünz et al.'s construction, it is possible to reduce the size of the public parameter pp to a single element of G_U when multi-exponentiation techniques are not applied. Table 3 summarizes the efficiency analysis on communication and computation complexities, and on the size of the public parameter, for recent polynomial commitment schemes with a transparent setting and for our approach. Table 3 shows that the efficiency of our approach depends on that of PoE_mDL. If PoE_mDL has a constant communication/computation complexity, we observe that each complexity is almost the same across the schemes, and the efficiency of a scheme depends highly on the underlying group. The benchmark results on the base groups in Table 2 show that the sizes of an element in G_U and G_T are approximately 25× and 6× larger than that of G, respectively. For the operation time, G_U and G_T are approximately 844× and 18× slower than G, respectively; Table 2 also lists the cost of an EC pairing operation P as 1600. Table 3. Comparison between polynomial commitment schemes with a transparent setting (Bünz et al. [6], Lee [22], and our approach). |·| and [·] denote the size of an element and the computation cost of a group operation in the corresponding group, respectively. We express communication complexity in the number of group elements and computation complexity in the number of group operations. The above discussion shows that our scheme based on an elliptic curve group in the discrete log setting is very promising, provided that we have an efficient PoE_mDL. Unfortunately, currently known PoE_mDL protocols have O(d) communication complexity in the number of bases, i.e., in the degree of the polynomial in our setting, which is not desirable for our purpose of logarithmic complexity. However, we emphasize that it is meaningful to observe that two independent cryptographic primitives are closely connected, and this suggests a stepping stone for the construction of an efficient, transparent polynomial commitment scheme with a recursive argument in the discrete log setting.
Security
We analyze the perfect completeness (Definition 4) and witness-extended emulation (Definition 5) of the proposed polynomial commitment scheme. Theorem 1. The Eval protocol of the polynomial commitment scheme Π has perfect completeness. Proof of Theorem 1. First, we show that the case of d = 0 satisfies perfect completeness. When d = 0, the valid input consists of the constant polynomial f(X) = f_0, c = g_0^{f_0} ← Commit(pp; f(X)), and y = f_0. Thus, the verification equations checked by V immediately hold. Next, we consider the case of d > 0. For the polynomial f(X), let t_in ← (c, x, y, d; f(X)) and t_out ← (c', x, y', d'; f'(X)) be the input and output tuples of every recursive step in the Eval protocol.
For the perfect completeness, it suffices to show that t_out satisfies the relations c' = g^{f'}, y' = f'(x), and deg(f'(X)) ≤ d' whenever the relations c = g^f, y = f(x), and deg(f(X)) ≤ d hold for t_in. When d + 1 is odd, we can see that (c', y', f'(X)) from t_out is exactly equal to (c, y, f(X)) from t_in and that deg(f'(X)) = deg(f(X)) ≤ d ≤ d' = d + 1. Thus, the relation holds for t_out. When d + 1 is even, we have f_L(X) and f_R(X) such that f(X) = f_L(X) + X^{(d+1)/2} · f_R(X) and f'(X) = α · f_L(X) + f_R(X). Thus, we can see that the following equations hold: c' = c_L^α · c_R = g^{α·f_L + f_R} = g^{f'}, y' = α · y_L + y_R = α · f_L(x) + f_R(x) = f'(x), and deg(f'(X)) ≤ max(deg(f_L(X)), deg(f_R(X))) ≤ d'. This completes the proof of the perfect completeness. We now prove that the Eval protocol is sound, i.e., that it has witness-extended emulation. In brief, we need to show that we can extract a witness polynomial f(X) from a tree of accepting transcripts, where the number of transcripts is bounded by a polynomial in λ. This can be done by extracting an intermediate secret polynomial at each iteration of Eval, i.e., from level i + 1 to level i in the tree. In Lemma 2, we first show that, given two accepting transcripts, we can extract an intermediate witness polynomial at each iteration of the Eval protocol. We then prove witness-extended emulation for the whole Eval protocol by applying the lemma from the leaf nodes to the root node sequentially in Theorem 2. Lemma 2. Let pp = (G, p, g_0, ..., g_d) be the public parameter generated by Ggen. Suppose we have two accepting transcripts (x, c_L, c_R, c_RR, y_L, y_R, α, f(X), y) and (x, c_L, c_R, c_RR, y_L, y_R, α', f'(X), y') for two distinct numbers α, α' ∈ Z_p, such that g^f = c_L^α · c_R and g^{f'} = c_L^{α'} · c_R. Furthermore, suppose f(X) and f'(X) are polynomials in Z_p[X] with a degree of at most d' and y = f(x), y' = f'(x) ∈ Z_p. Then, on input of the above transcripts, there exists a probabilistic polynomial-time algorithm E that extracts either f_L(X), f_R(X) ∈ Z_p[X] with a degree of at most d', such that c_L = g^{f_L}, c_R = g^{f_R}, y_L = f_L(x) ∈ Z_p, and y_R = f_R(x) ∈ Z_p, or a breach of the binding property of the Pedersen commitment scheme relative to Ggen. Proof of Lemma 2. Because the two transcripts are valid, it holds that c_L^α · c_R = g^f and c_L^{α'} · c_R = g^{f'}. We then have c_L^{α − α'} = g^{f − f'}, and hence c_L = g^{(f − f')/(α − α')} and c_R = g^f · c_L^{−α} = g^{(α·f' − α'·f)/(α − α')}. Thus, E is able to compute f_L ← (f − f')/(α − α') and f_R ← (α·f' − α'·f)/(α − α') such that c_L = g^{f_L} and c_R = g^{f_R}; any other opening of c_L or c_R would constitute a breach of the binding property of the Pedersen commitment scheme. In addition, because it holds that f(x) = y = α · y_L + y_R and y' = f'(x) = α' · y_L + y_R, we let y_L ← (y − y')/(α − α') and y_R ← (α·y' − α'·y)/(α − α'). Then, y_L and y_R are identical to the evaluations of the above f_L(X) and f_R(X) at X = x, respectively. Theorem 2. The Eval protocol has witness-extended emulation for the relation R_Eval if the discrete logarithm relation assumption holds for Ggen. Proof of Theorem 2. For witness-extended emulation, we call the general forking lemma (Lemma 1). Thus, we need to construct an expected polynomial-time extractor E that extracts a witness from a tree whose number of leaves is bounded above by a polynomial in λ. For a statement (c, x, y, d) ∈ L_{R_Eval}, we consider the following tree of accepting transcripts. The root node is labeled with the first input statement (c, x, y, d) to Eval. Including the root node, let N be a node labeled with the statement (c, x, y, d). We denote the corresponding witness polynomial for (c, x, y, d) by f^{(d)}(X) ∈ Z_p[X]. N has two child nodes, as follows. By rewinding the oracle ⟨P*, V⟩ two times with two different challenges α_1 and α_2 on the same input statement (c, x, y, d), each child node for the given challenge is labeled with the updated statement (c', x, y', d').
Finally, nodes with d = 0 are the leaf nodes of the tree. Because the number of levels with a branching factor of 2 is bounded by ⌈log₂(d+1)⌉, there are at most 2^{1+⌈log₂(d+1)⌉} ≤ 4(d+1) transcripts in total, which is polynomial in λ. We now prove that there exists an extractor E that extracts a witness f(X) from the above tree, which we construct in a recursive way. That is, we construct an extractor E^{(d)} to extract f^{(d)}(X) for a statement (c, x, y, d) at each node, starting from the leaf nodes of the tree. We note that E^{(d)} for the degree bound d at the root node is the desired extractor E. We first consider E^{(0)}, which extracts a witness from the leaves of the tree, i.e., d = 0. In this case, E^{(0)} directly obtains a witness f(X) = f_0 ∈ Z_p from the transcript given by the prover, such that f_0 = y and c = g_0^{f_0}. We now move to the case of d > 0. From the construction of the tree, the node has two child nodes, where each is labeled with the updated statement (c', x, y', d' = ⌊d/2⌋) on the same input statement (c, x, y, d) with two distinct challenges α_1 and α_2. We assume that we have the extractor E^{(d')} that returns the valid witness f^{(d')} for each child node. We then construct the extractor E^{(d)}. Applying Lemma 2 to the two extracted witnesses, E^{(d)} extracts f(X) = f_L(X) + X^{d'+1} · f_R(X), whose degree is bounded by 2d' + 1 ≤ d. Because the tree consists of accepting transcripts, we have c = g^{f^{(d)}} and y = f^{(d)}(x). Then, by the general forking lemma, we conclude that Π has witness-extended emulation.
Extension to Zero-Knowledge Polynomial Evaluation
In this section, we extend the polynomial commitment scheme from Section 4 to a zero-knowledge version. The zero-knowledge protocol enables the prover to convince the verifier that the prover has a polynomial f(X) with deg(f) ≤ d such that f(x) = y for a public point (x, y), but it does not leak any other information about f; this is formally captured by the notion of perfect SHVZK (Definition 7). For this, we require a hiding commitment scheme for polynomials, such as the generalization of the Pedersen commitment scheme, which uses randomness when generating a commitment [32]. Below, we give a formal description of the generalized Pedersen commitment scheme (Cgen_H, Commit_H, Open_H) over the polynomials in Z_p[X]: • Cgen_H(1^λ): it samples G ← Ggen(1^λ) of prime order p, chooses g, g_0, ..., g_d ←$ G, and returns pp_H = (G, p, g, g_0, ..., g_d); • Commit_H(pp_H; f(X), d, r): on input f(X) = ∑_{i=0}^{d} f_i X^i and randomness r ∈ Z_p, it returns c ← g^r · ∏_{i=0}^{d} g_i^{f_i}; • Open_H(pp_H, c, f(X), r): the verifier computes c' ← g^r · ∏_{i=0}^{d} g_i^{f_i} and checks if c' = c in G. We present our zero-knowledge evaluation protocol EvalZK in Algorithm 2. The EvalZK protocol is also obtained by transposing the corresponding zero-knowledge evaluation protocol given by Bünz et al. to the discrete log setting [6]. The basic idea is to mask the prover's secret polynomial with a random polynomial using the blinding technique introduced in [14,19,41] and then run the Eval protocol on the masked polynomial.
Algorithm 2: EvalZK(pp_H, c_f, x, y_f, d; f(X), r_f)
1: P chooses a random polynomial h(X) ←$ Z_p[X] of degree d and randomness r_h ←$ Z_p
2: P computes c_h ← Commit_H(pp_H; h(X), d, r_h) and y_h ← h(x)
3: P sends (c_h, y_h) to V
4: V chooses α_f ←$ Z_p and sends it to P
5: P computes f̃(X) ← h(X) + α_f · f(X) and r_f̃ ← r_h + α_f · r_f
6: P sends r_f̃ to V
7: P and V compute c ← c_h · c_f^{α_f} · g^{−r_f̃} (= Commit(pp; f̃(X))) and y ← y_h + α_f · y_f (= f̃(x))
8: P and V run Eval(pp, c, x, y, d; f̃(X))
The EvalZK protocol receives a hiding commitment to the prover's secret polynomial f(X) on input, i.e., c_f ← Commit_H(pp_H; f(X), d, r_f), which is perfectly indistinguishable from a random element in G. To hand it over to the Eval protocol, it is necessary to remove the randomization part g^{r_f} from c_f = g^{r_f} · ∏_{i=0}^{d} g_i^{f_i}; the remaining part is equal to Commit(pp; f(X)). However, because this would reveal information about f(X), the protocol lets the prover and the verifier collaboratively blind f(X) by f̃(X) = h(X) + α_f · f(X) (Line 5). Here, h(X) ∈ Z_p[X] is a random polynomial selected by the prover (Line 1) and α_f ∈ Z_p is a random number selected by the verifier (Line 4).
Consequently, both the prover and the verifier succeed in generating a non-hiding commitment to f̃(X) under Π and the point (x, y = f̃(x)), and they then start the Eval protocol (Lines 7-8). Theorem 3. The EvalZK protocol of the polynomial commitment scheme Π_H has perfect completeness, witness-extended emulation, and perfect SHVZK. Proof of Theorem 3. (Perfect completeness) We show that Π_H has perfect completeness. Because the Eval protocol has perfect completeness (Theorem 1), it suffices to show that c and y are a valid input to Eval. That is, c is the correct commitment to f̃(X) = h(X) + α_f · f(X) under Π, and y is the evaluation of f̃(X) at X = x in Z_p. Given f(X) = ∑_{i=0}^{d} f_i X^i of a degree of at most d and h(X) = ∑_{i=0}^{d} h_i X^i of degree d, we have c = c_h · c_f^{α_f} · g^{−r_f̃} = ∏_{i=0}^{d} g_i^{h_i + α_f·f_i} = ∏_{i=0}^{d} g_i^{f̃_i} = Commit(pp; f̃(X)) and y = y_h + α_f · y_f = h(x) + α_f · f(x) = f̃(x) mod p. (Witness-extended emulation) We show that Π_H has witness-extended emulation. From Theorem 2, we have an expected polynomial-time extractor E that extracts f̃(X) for the Eval protocol. Using E, we construct an extractor E_H to extract a witness f(X) from EvalZK. The extractor E_H runs the prover to obtain {c_h, y_h}. At this point, E_H rewinds the oracle ⟨P*, V⟩ twice with distinct challenges α_f and α_f' and obtains the corresponding commitments (c, y) and (c', y') to the witnesses f̃(X) and f̃'(X), respectively. Then, E_H runs E on inputs (pp, c, x, y, d) and (pp, c', x, y', d) and receives the corresponding witnesses f̃(X) and f̃'(X), respectively. Finally, E_H is able to extract the witness f(X) = (f̃(X) − f̃'(X))/(α_f − α_f') from f̃(X) and f̃'(X), similarly to Lemma 2. This completes the proof of the witness-extended emulation. (Perfect SHVZK) We construct the simulator Sim. Given only the public input, the simulator Sim outputs a simulated transcript that is identically distributed to the valid transcript produced by the prover and the verifier in a real interaction. The simulator Sim first samples a random polynomial f̃(X) of degree d and r_f̃ ←$ Z_p, and sets c ← Commit(pp; f̃(X)) and y ← f̃(x). In addition, Sim samples a random challenge α_f ←$ Z_p and computes c_h = c · c_f^{−α_f} · g^{r_f̃} and y_h = y − α_f · y_f. The simulator Sim then simply applies the Eval protocol honestly, using f̃(X) as the witness. Because in a real execution the values α_f and r_f̃ are distributed uniformly at random over Z_p, the simulated α_f and r_f̃ are identically distributed to the real values. In addition, the real c_f and f̃(X) are distributed uniformly at random over G and over the polynomials in Z_p[X] of degree d, respectively, and the same distributions hold for the simulated c_f and f̃(X), respectively. The simulated c_h is also distributed uniformly at random over G, and so is the real c_h, because of the perfect hiding property of the underlying commitment scheme. Clearly, the simulated tuple (c_h, y_h, α_f, r_f̃) satisfies the relations c = c_h · c_f^{α_f} · g^{−r_f̃} and y = y_h + α_f · y_f. Finally, the Eval protocol does not leak more than f̃(X) itself, which contains no information about f(X). Therefore, the views of the simulated and real transcripts are identically distributed. This completes the proof of the perfect SHVZK.
Conclusions
In this paper, we presented how to transpose a recursive argument of polynomial evaluation over a class group, proposed by Bünz et al., to the discrete log setting as a way to improve efficiency. The transposition follows from their information-theoretic abstraction. We found that the challenge for a transposition is to provide a monomial homomorphism for an underlying commitment scheme. We observed that when we use a polynomial encoding method that places the coefficients of the polynomial in the exponents of random group elements, an essential sufficient condition is a proof system for the equality of discrete logarithms (PoE_mDL) over multiple bases.
We believe that our approach suggests a stepping stone for the construction of an efficient, transparent polynomial commitment scheme with a recursive argument in the discrete log setting. Currently, the efficiency of known proof systems for PoE_mDL is not sufficient to have logarithmic communication and verifier complexities. Therefore, in future work, we will continue to research how to improve the efficiency of PoE_mDL, which would lead to high efficiency gains for the proposed construction in the discrete log setting. Funding: This research received no external funding.
Learner Autonomy as a Strategy to Enhance the Quality of Learners : The aim of this research is to measure the effectiveness of learner autonomy as a strategy to enhance the quality of learners. In designing dynamic, high-quality learning, it is necessary to select an appropriate strategy to achieve the goals. For that reason, the class environment should be designed to foster learning autonomy in learners. Autonomous learning itself is closely related to learners' ability to express themselves, become more creative, have self-esteem, understand conceptual learning, and enjoy being challenged. The method used in this research is the descriptive method, with a sample of 20 randomly selected students from the German Language education program, 4th semester, batch 2018/2019, as the unit of analysis. The data were collected through questionnaires covering five aspects, namely motivation, planning, performance, supervision, and evaluation. The results of the research indicate that there is a significant correlation between learner autonomy and the quality of learners. This means that the students (1) are more motivated to study; (2) are actively involved in their learning; (3) have decision-making opportunities; and (4) are encouraged to reflect. Based on the results, it can be concluded that students' capacity to learn for themselves is encouraged and that they are encouraged to develop their own learning strategies.
INTRODUCTION
Today many teachers tend to prioritize learning outcomes over learning processes, so that learners more often pursue high grades or scores in various ways and ignore the process by which those grades or scores are obtained (Tomasouw, 2018: 42). This is what causes the quality of learners to decline. Karademir (2019: 1), meanwhile, said that the learning process, defined as behavior change, does not only involve learning; the individual is also expected to take an active role in this process and to take academic risks in uncertain situations. According to Sanjaya (2008: 11), to improve the quality of learners, teaching strategies should of course be well packaged by the instructor. Therefore it is necessary to design dynamic, high-quality learning by selecting the right learning strategies to achieve the goals. In other words, Najeeb (2013), quoted in Salimi and Ansari (2015: 1107), explained that autonomous learners realize their learning program goals, take responsibility for their learning, take part in the process of activity planning, and monitor and evaluate its effectiveness. In line with this, Littlewood (1999), cited in Yurdakul (2016: 1), explains that autonomous learning involves students' capacity to use their learning independently of teachers and their capacity to communicate autonomously. Therefore a strategy is needed that can help students understand learning well. This is important because a learning strategy is the plan and way of teaching that the teacher follows by setting the main steps of teaching in accordance with the teaching objectives to be achieved in the curriculum. Micael and Jurgen et al. (2016: 132) said that learning strategies include approaches, models, methods, and learning techniques that are specifically designed to serve students' needs regarding learning and how to think better. Smasal (2010: 171) described teaching-learning strategies as one way of encouraging student independence. Learning strategies are tools that can be used to acquire knowledge as well as to control and develop learners' receptive and productive language-processing skills.
They can also help to solve problems with learning and using a foreign language. The use of learning strategies can have a significant influence on students' learning efficiency and output. However, it is important not only that students have a wide repertoire of learning strategies, but also that they apply them proficiently and in the appropriate situation (Smasal, 2010: 171). There are several strategies that can be used to influence the effectiveness of learning. One approach to improving student independence is expressed by White (1995: 207), who underlines that the strategies learners use in self-instruction contexts, and the degree of autonomy they exercise to develop foreign language skills without the help of a teacher or learning group, have received little attention. Hamdani (2010: 19), meanwhile, explained that effective learning strategies will, of course, help teachers form an idea of how to assist students in their learning activities. This is in line with what was conveyed by Prasetyo (2017: 1), that a teacher must know and master various appropriate learning strategies so that students can learn effectively and efficiently, as well as achieve the expected goals. Nunan (1999: 193) characterizes the effective language learner as one who can make effective choices in terms of learning tasks and strategies. Thus, current approaches in language teaching have focused on ways of learning a language better and more effectively. For this reason, one of the basic roles of the teacher is to teach learners explicitly the underlying strategies behind the tasks. "Many learners are content to leave this to the teacher, but will still need to develop the ability to use a wide range of strategies and to choose strategies that are appropriate for the task, if they are to take full responsibility for their learning" (Reinders, 2010), and Egel (2008: 2026) said that shifting the focus of language instruction from teacher-centered to learner-centered has given learners the responsibility for their own language learning. Scarcella & Oxford (1992: 63), cited in Oxford (2003: 1), defined learning strategies as the specific actions, behaviors, steps, or techniques, such as seeking out conversation partners or giving oneself encouragement to handle a difficult language task, used by students to enhance their own learning. They further explained that when the learner purposely chooses strategies that fit the learning style and the L2 task at hand, these strategies become a useful toolkit for active, conscious, and purposeful self-regulation of learning. The relationship between strategy use and autonomy is complex and not direct; both aim to help learners become better learners. Cohen (1998), cited in Benson (2006: 23) and Richards (2014: 1), explains well the role of strategy training in the development of autonomy: explicitly teaching students how to apply language learning and language use strategies can enhance their efforts to reach language program goals, encourages them to find their own pathways to success, and thus promotes learner autonomy and self-direction. Consequently, it can be considered that learners who use a wide range of learning strategies for appropriate tasks have a high tendency toward autonomous learning. However, Little (1995: 1) argues that learning strategy and learner training can play an important supporting role in the development of learner autonomy. According to Holec (1981: 3) and Cem (2010: 90), autonomy is "the ability to take charge of one's own learning".
Benson (2001: 48) describes autonomy "as the capacity to take control of one's learning as one that establishes a space in which differences of emphasis can co-exist". He also argues that it is important to consider three levels of control exercised by the learner: learning management, cognitive processes, and learning content (Tassinari, 2012: 11). In order to be more autonomous, learners are asked to develop their capacity to plan learning, monitor learning progress, and evaluate learning outcomes. Learner autonomy is used very effectively in new language learning. For example, it is much more useful to learn a language by being exposed to it than by learning the patterns of different tenses. According to Vygotskian psychology, which supports the idea of autonomous learning, the development of students' learning skills is never entirely separable from the content of their learning, seeing as learning a new language is quite different from learning any other subject. It is important to underline that students can discover the language for themselves, with only a little guidance from their teacher, so that they can fully understand it. According to Benson (2008: 15), from the teachers' perspective autonomy tends to imply the learner taking control of arrangements whose underlying legitimacy is unquestioned, whereas from the learners' perspective autonomy is primarily concerned with learning. Benson (2001: 17) adds that autonomous learning makes learning more personal and focused and, consequently, is said to achieve better learning outcomes, since learning is based on learners' needs and preferences. There are five principles for achieving autonomous learning, the first of which is active involvement in student learning.
METHODOLOGY
This research used a quantitative descriptive method. The research objective is to assess students' perspectives on learner autonomy. The population in this research was students of the German Language Study Program at the Faculty of Teacher Training and Education at Pattimura University, while the sample involved 20 students from the fourth semester of the academic year 2018/2019. Samples were selected using random sampling techniques. The study used a questionnaire survey, adapted from Tassinari (2010: 132-133), which consists of 22 items covering five aspects, namely motivation, planning, performance, supervision, and evaluation. Each statement offers three alternative answers, namely "I can do that", "I will learn it", and "not important". The implementation schedule was March to April 2019. Data were then analyzed using descriptive statistics to see the students' tendency towards the autonomous learning strategy.
RESULTS AND DISCUSSION
This research aims to look at the students' perspective on autonomous learning strategies. Data collected from students varied for each aspect assessed. As explained before, there are five important aspects used as the basis for arranging the instruments, that is: motivation, planning, performance, supervision, and evaluation. The data above show that the majority of students, namely 60%, consider that the motivational aspect contributes greatly to learner autonomy. This is related to students' own intrinsic desire to learn, which they act on, for example by always asking if there are obstacles encountered in language learning. The other 40% said they were still learning to be motivated.
This means that students still need time to develop their motivation so that their learning outcomes improve.

Figure 2. Planning Aspect

Planning an effective course of learning requires a good learning strategy. Here, 52% of students reported that they still need to learn to manage learning strategies that would enable them to obtain maximum learning outcomes, while 40% said they would try to make a strategic learning plan, and 8% of students felt that learning planning is not important. Based on these data, it can be concluded that planning in learning strategies is very important, since every student can design their own learning strategies.

For the performance aspect, the data show that 67% of students were able to perform assigned tasks independently; even when the learning activities were difficult for them, they could handle them. On the other hand, 33% of the students were not yet able to carry out the given tasks independently and still needed assistance. This means that most students can study independently.

For the supervision aspect, the data illustrate that 38% of students were able to understand themselves through their learning styles and learning strategies. This is very good, because in this way they can regulate both their learning and themselves. The remaining 62% answered "I will learn that", which means that most students were still hesitant in determining which learning strategy suits them best and in their independence.

Figure 5. Evaluation Aspect

In evaluating learning independence with regard to mastering a foreign language, it turns out that 49% of students still tend to be learning about their learning strategies, while 45% were able to recognize and evaluate their way of learning so that the quality of their learning improves.

DISCUSSION

Based on the research data presented above, it turns out that students in the German education study program have not yet been able to use the right learning strategies to improve the quality of their independent learning. Only two aspects, motivation and performance, are ones they already use as ways to improve the quality of their learning. Beyond that, assistance is needed to direct them in using autonomous learning strategies to improve their independent learning. Thus, it can be said that students still need guidance; they have not yet become fully independent. Therefore, as teachers, we need to make initial observations about the weaknesses of the learning strategies students use. Because learning strategies are tools to achieve goals, students need to be given responsibility for doing something. They must be committed to doing their tasks and be involved in the decision-making process. The students showed flexibility in learning time because they had to make sure that they had already finished their other responsibilities before starting their studies. This showed that the students had a high commitment to completing a task; unfortunately, it also seemed that they put their learning responsibilities after their other responsibilities. Yet, for the successful non-traditional students, morning became their most preferred time for learning. The successful non-traditional students stated that they can learn in any place. To balance their professional or familial responsibilities with their academic responsibilities, they must make themselves able to learn anywhere in a limited time.

CONCLUSION

Based on the results of the study, it can be concluded that learner autonomy is a learning strategy that can help improve the quality of learners.
However, of the five aspects used as indicators in this study, only two were, according to the students, carried out properly. Autonomous learning will be effective if the instructor understands his or her role as a teacher and pays attention to the following functions: (1) the teacher becomes less of an instructor and more of a facilitator; (2) students' capacity to learn for themselves is encouraged; and (3) students are encouraged to develop their own learning strategies. The results of this study are important because, if students are more familiar with their learning strategies, they have the right to choose which is best for them. Learner autonomy will provide many benefits, such as building confidence and a feeling of responsibility for what is to be done. For many teachers, student autonomy is an important aspect of their teaching, which they try to realize in a number of different ways: for example, through careful analysis of their learners' needs, through introducing and modeling strategies for independent learning, through consultation with students to help them plan their own learning, and through the use of a self-access centre where a variety of independent learning resources are available.
The prognostic value of preoperative serum lactate dehydrogenase levels in patients underwent curative‐intent hepatectomy for colorectal liver metastases: A two‐center cohort study

Abstract

Background: The prognostic value of lactate dehydrogenase (LDH) in colorectal cancer patients has remained inconsistent between nonmetastatic and metastatic settings. So far, very few studies have included LDH in the prognostic analysis of curative-intent surgery for colorectal liver metastases (CRLM).

Patients and Methods: Five hundred and eighty consecutive metastatic colorectal cancer patients who underwent curative-intent CRLM resection at Sun Yat-sen University Cancer Center (434 patients) and Sun Yat-sen University Sixth Affiliated Hospital (146 patients) in 2000-2019 were retrospectively collected. Overall survival (OS) was the primary end point. A Cox regression model was used to identify the prognostic value of preoperative serum LDH levels and other clinicopathology variables. A modification of the established Fong CRS scoring system comprising LDH was developed within this Chinese population.

Results: At the median follow-up time of 60.5 months, median OS was 59.5 months in the pooled cohort. In the multivariate analysis, preoperative LDH > upper limit of normal (250 U/L) was the strongest independent prognostic factor for OS (HR 1.73, 95% confidence interval [CI] 1.22-2.44; p < 0.001). Patients with elevated LDH levels showed impaired OS compared with patients with normal LDH levels (27.6 months vs. 68.8 months). Five-year survival rates were 53.7% and 22.5% in the LDH-normal and LDH-high groups, respectively. Similar results were also confirmed in each cohort. In the subgroup analysis, LDH could distinguish survival regardless of most established prognostic factors (number and size of CRLM, surgical margin, extrahepatic metastases, CEA and CA19-9 levels, etc.). Integrating LDH into the Fong score contributed to an improvement in predictive value.

Conclusion: Our study implicates serum LDH as a reliable and independent laboratory biomarker to predict the clinical outcome of curative-intent surgery for CRLM. The composite of LDH and Fong score is a potential stratification tool for CRLM resection. Prospective, international studies are needed to validate these results across diverse populations.

INTRODUCTION

Colorectal cancer (CRC) is the third most common cancer and the second leading cause of cancer-related mortality worldwide. [1][2][3] The liver is the primary life-limiting distant metastatic site for CRC. 4 About a quarter of CRC patients present with concurrent liver metastases, and over half will develop liver metastases through the course of the disease. 5 Surgical excision-based locoregional therapy remains the only potentially curative option for colorectal liver metastases (CRLM). 6 However, only about 20% of CRLM patients are candidates for curatively intended liver resection at diagnosis. 7 Although a growing number of curative hepatectomies have been achieved through multidisciplinary therapy within the latest decade, most patients (50%-80%) will develop a further recurrence. 8,9 The survival outcomes derived from different studies remain heterogeneous, with 5-year survival rates ranging from 25% to 60%. 6,10,11 Thus, a better selection of patients before initiating treatment is needed to refine therapeutic decisions.
Recent studies have shown that, apart from conventional clinicopathology variables, gene expression signatures, intratumoral immune cell infiltration, and circulating tumor cells also have a prognostic impact on colorectal cancer. [10][11][12] In particular, serum biochemical markers, namely gamma-glutamyl transferase (GGT), alkaline phosphatase (ALP), and lactate dehydrogenase (LDH), have also gained appreciation for their prognostic implications in mCRC. 13,14 As the key enzyme in aerobic and anaerobic glycolysis, LDH plays a pivotal role in tumor metabolism by mediating the conversion between pyruvate and lactate. 15 Evidence is emerging that LDH is closely related to hypoxia, angiogenesis, inflammation, and immune status in the tumor microenvironment (TME). High serum LDH levels indicate poor prognosis among various cancer entities and promote resistance to chemo-, radio-, and targeted therapy. [16][17][18] However, the prognostic value of LDH in CRC has remained inconsistent between nonmetastatic and metastatic settings. [19][20][21] Elevated circulating LDH levels were reported to be an adverse prognostic factor in unresectable CRLM patients receiving systemic therapy or hepatic arterial infusion. [22][23][24] In contrast, this effect was not evident for the overall survival of nonmetastatic CRC patients. 25,26 Moreover, very few studies have included LDH in the prognostic analysis of curative-intent surgery for CRLM. Therefore, it remains to be determined whether preoperative LDH levels can predict the outcome of complete CRLM resection, a situation in which patients usually achieve a no-evidence-of-disease (NED) status. To address this issue, we performed this two-center, retrospective observational study in a cohort of 580 patients with resected CRLM. Our objectives were (a) to evaluate the prognostic impact of preoperative serum LDH levels on curative-intent surgery for CRLM and (b) to integrate LDH into the established Fong scoring system within this Chinese population to improve patient stratification for CRLM resection.

Study population

This two-center, retrospective cohort study enrolled 580 consecutive histologically proven CRLM patients who underwent curative-intent hepatectomy at Sun Yat-sen University Cancer Center (cohort 1) and Sun Yat-sen University Sixth Affiliated Hospital (cohort 2). Cohort 1 included 434 patients from September 2000 to December 2016, while cohort 2 included 146 patients from August 2012 to June 2019. Detailed clinical information (preoperative and postoperative clinicopathological data, blood examinations, follow-up information, etc.) was retrieved from electronic- and paper-based medical records at each center. The inclusion criteria were as follows: (1) histologically confirmed colorectal adenocarcinoma, (2) curative-intent CRLM resection, (3) a postoperative follow-up period of at least 3 months, and (4) preoperative serum LDH values available within 2 weeks before hepatectomy. This was a noninterventional, observational, retrospective study in which the patient data used were kept strictly confidential. All patients provided written consent for the use of their data at the time of hospitalization. The study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committees of both centers. The originality and authenticity of this article have been validated by uploading the key raw data onto the Research Data Deposit public platform (www.researchdata.org.cn).
Follow-up

Overall survival (OS) was defined as the time from hepatic resection to death from any cause or the latest follow-up. Recurrence-free survival (RFS) was measured from the date of hepatic resection to confirmed recurrence or death from any cause, whichever occurred first. Patients were followed up through outpatient clinical visits or via telephone. Follow-up started 1 month after the operation and ended when tumor relapse or death was verified; subjects who were lost to follow-up or still alive at the date of last contact were considered censored.

Blood sample test

Data from blood examinations (blood routine tests, blood chemistry tests, and tumor marker tests) were eligible for analysis if performed within 2 weeks before hepatectomy. The blood examinations were performed by each center's laboratory. Enrolled patients were divided into LDH-normal and LDH-high groups, using the upper limit of normal (ULN) established by each center's laboratory as the cutoff value, in anticipation of elaborating a practical clinical tool for future use. The ULN of LDH at both centers was 250 U/L. Preoperative immune/inflammation-related factors (including neutrophil, lymphocyte, monocyte, and platelet counts, LMR, LNR, LPR, and C-reactive protein) were collected. LMR, LNR, and LPR were defined as the absolute lymphocyte count divided by the absolute monocyte, neutrophil, and platelet count, respectively.

Modified Clinical Risk Score establishment and validation

The clinical risk score (CRS) was calculated according to the criteria initiated by Yuman Fong. 27 Briefly, five clinical criteria (primary lymph node-positive disease, disease-free interval from the diagnosis of the primary tumor <12 months, number of CRLM >1, maximum CRLM diameter >5 cm, and preoperative CEA levels >200 ng/ml) were assigned one point each, and the total score was defined as the CRS. We integrated preoperative LDH levels into the CRS model to test whether its predictive ability improved. Two models were established as follows. (a) LDH was added to the CRS model (LDH-CRS): primary lymph node-positive disease, disease-free interval from the diagnosis of the primary tumor <12 months, number of CRLM >1, maximum CRLM diameter >5 cm, preoperative CEA levels >200 ng/ml, and preoperative LDH levels >ULN were assigned one point each, and the total score was defined as the LDH-CRS. (b) Preoperative CEA levels were replaced by LDH levels (modified CRS [mCRS]): primary lymph node-positive disease, disease-free interval from the diagnosis of the primary tumor <12 months, number of CRLM >1, maximum CRLM diameter >5 cm, and preoperative LDH levels >ULN were assigned one point each, and the total score was defined as the mCRS. The discriminatory ability of the models was assessed by the area under the curve (AUC) in time-dependent receiver operating characteristic (ROC) analysis. Harrell's concordance index (C-index, defined as the probability that predictions and outcomes are concordant) was employed to validate the predictive ability of the models.

Statistical analysis

Patients' characteristics between different groups were compared with Student's t-test, the χ2 test, the Wilcoxon rank-sum test, or the Kruskal-Wallis test, as statistically appropriate. Survival curves were generated using the Kaplan-Meier method and compared with the log-rank test in terms of RFS and OS. OS was the primary end point.
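As a concrete illustration of the scoring rules and survival comparisons just described, the following is a minimal R sketch. The data frame, its column names, and the toy data are hypothetical placeholders, not the study's actual variables or code.

```r
# Minimal sketch: computing CRS, LDH-CRS and mCRS from a hypothetical
# patient table, then comparing survival between LDH strata.
# Column names below are illustrative placeholders, not the study's own.
library(survival)

score_patients <- function(d, uln_ldh = 250) {
  base <- with(d,
    (node_positive == 1) +        # primary lymph node-positive
    (dfi_months < 12) +           # disease-free interval < 12 months
    (n_crlm > 1) +                # number of CRLM > 1
    (max_diam_cm > 5))            # maximum CRLM diameter > 5 cm
  d$CRS     <- base + (d$cea > 200)                      # original Fong score
  d$LDH_CRS <- base + (d$cea > 200) + (d$ldh > uln_ldh)  # CEA and LDH criteria
  d$mCRS    <- base + (d$ldh > uln_ldh)                  # LDH replaces CEA
  d
}

# Toy data for demonstration only
set.seed(1)
d <- data.frame(
  node_positive = rbinom(100, 1, 0.5), dfi_months = rexp(100, 1/18),
  n_crlm = rpois(100, 2) + 1, max_diam_cm = rlnorm(100, 1, 0.5),
  cea = rlnorm(100, 3, 1.5), ldh = rlnorm(100, log(200), 0.3),
  os_months = rexp(100, 1/50), death = rbinom(100, 1, 0.6))
d <- score_patients(d)

# Kaplan-Meier curves and log-rank test for dichotomized LDH, as in the text
fit <- survfit(Surv(os_months, death) ~ I(ldh > 250), data = d)
survdiff(Surv(os_months, death) ~ I(ldh > 250), data = d)
plot(fit, col = 1:2, xlab = "Months", ylab = "Overall survival")
```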
To identify independent prognostic predictors, univariate and multivariate Cox proportional hazards regression analyses were performed. The associations between baseline clinicopathologic variables (age, gender, primary tumor location, grade of differentiation, pathology, T and N stage of the primary tumor, preoperative CEA and CA19-9 levels, LDH level, number of CRLM, maximum diameter of CRLM, extrahepatic metastases, surgical margin of CRLM, preoperative chemotherapy, and disease-free interval from discovery of the primary tumor to liver metastases) and survival outcome were explored and quantified by hazard ratios (HRs) and corresponding 95% confidence intervals (CIs). Parameters with p < 0.10 in the univariate analysis were selected and further included in the multivariate analysis, relying on the ENTER algorithm with a selected level of 0.05. In the multivariable analysis for the pooled population, the cohort was included as an adjustment factor to exclude confounding by the different affiliates. KRAS and BRAF mutation status was not considered for the Cox regression analysis because it was not available for all patients, especially for patients in cohort 2 (Table 1). Hence, a sensitivity analysis in cases with available KRAS mutation data was performed in a multivariable model. Furthermore, subgroup analyses were carried out stratified by demographic and clinicopathologic variables and presented as forest plots. For the comparison of time-dependent AUC between different models, the Wilcoxon matched-pair signed-rank test was applied. Time-dependent AUC was calculated with the package timeROC (version 0.4), and the C-index with the package rms (version 5.1-3.1). Statistical analyses were conducted with SPSS version 19 (SPSS, Chicago, IL), STATA (Release 14.2; StataCorp LP, College Station, TX), and GraphPad Prism 7.0.

Characteristics of patients

A total of 580 consecutive CRLM patients at two Chinese medical centers were enrolled: 434 patients from cohort 1 and 146 patients from cohort 2. Clinicopathology and treatment characteristics are summarized in Table S1.

LDH levels and correlations with clinical characteristics

The relationship between serum LDH and clinicopathologic parameters is detailed in Table 2. LDH levels were not significantly associated with most baseline variables, including primary tumor characteristics (pathology differentiation, and T and N stage), metastatic site characteristics (presence of extrahepatic disease, number of CRLM, and perioperative chemotherapy), or KRAS and BRAF mutations (data not shown in Table 2 because gene testing was not available for all patients). However, we observed that patients with a maximum CRLM diameter ≤2.5 cm (the median diameter) had a higher proportion of elevated LDH levels than patients with a maximum diameter above the median (24.2% vs. 9.7%, p < 0.001). Patients with elevated CEA also had a greater likelihood of elevated LDH than those with normal CEA levels (18.9% vs. 10.7%, p = 0.011), and a similar trend was observed for CA19-9 levels (p = 0.006). Patients with synchronous CRLM had a higher proportion of elevated LDH than those with metachronous CRLM (17.9% vs. 10.1%, p = 0.033). In addition, patients with a CRS of 4-5 had a higher proportion of elevated LDH than patients with CRS 2-3 or CRS 0-1 (51.2% vs. 14.2% vs. 8.7%; p < 0.001).
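Before turning to the regression results, the model-comparison workflow described in the statistical methods (multivariable Cox regression, time-dependent AUC via timeROC, Harrell's C-index) can be sketched as follows, continuing the hypothetical table `d` from the previous snippet. This illustrates the general approach only and is not the study's own code.

```r
# Sketch of the prognostic-model workflow: multivariable Cox regression,
# time-dependent ROC (timeROC) and Harrell's C-index. Continues the
# hypothetical data frame `d` from the previous snippet.
library(survival)
library(timeROC)

# Multivariable Cox model; covariates are illustrative placeholders
cox_fit <- coxph(Surv(os_months, death) ~ I(ldh > 250) + n_crlm +
                   I(max_diam_cm > 5) + node_positive, data = d)
summary(cox_fit)   # hazard ratios with 95% CIs, as reported in Table 3

# Time-dependent AUC at 3 and 5 years for two competing scores
roc_crs <- timeROC(T = d$os_months, delta = d$death, marker = d$CRS,
                   cause = 1, times = c(36, 60), iid = TRUE)
roc_ldh <- timeROC(T = d$os_months, delta = d$death, marker = d$LDH_CRS,
                   cause = 1, times = c(36, 60), iid = TRUE)
roc_crs$AUC; roc_ldh$AUC

# Harrell's C-index for discrimination (here via survival::concordance;
# the paper reports using the rms package, which gives equivalent values)
concordance(cox_fit)
```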
Cox regression analysis of recurrence-free survival and overall survival

Due to missing data for some baseline variables (details in Table 1), 490 patients were finally included in the multivariable model. Elevated preoperative LDH levels (defined as LDH >ULN) were found to be the strongest prognostic factor for OS (Table 3). In the univariate analysis, age, pathology differentiation, T stage of the primary tumor, lymph node metastases of the primary tumor, preoperative CEA and CA19-9 levels, number of CRLM, maximum diameter of CRLM, presence of extrahepatic metastases, preoperative chemotherapy, R0 resection margin, and LDH levels were significant predictors of OS. After adjustment for the above clinicopathologic parameters, eight factors, including age, were ultimately identified as independent prognostic markers for OS in the multivariate analysis. In the stratified analyses for each cohort, LDH retained its independent prognostic value for OS in the multivariate analysis, both in cohort 1 (HR, 1.77; 95% CI, 1.17-2.69; p < 0.001; Table S2) and in cohort 2 (HR, 3.71; 95% CI, 1.75-7.89; p = 0.001; Table S3). In terms of RFS, LDH remained an independent predictor in the multivariate analysis (HR, 1.53; 95% CI, 1.01-2.03; p = 0.042), along with lymph node metastases of the primary tumor, number of CRLM, and maximum diameter of CRLM (Table S4). Additionally, in the sensitivity analysis of cases with available KRAS mutation data, only the number and size of CRLM were independent predictors of OS in the multivariable models, probably owing to the limited sample size (Table S5).

Survival outcomes according to LDH levels and subgroup analysis

In the pooled cohort, patients with elevated LDH showed impaired OS compared with patients with normal LDH levels (Figure 1). Patients with elevated LDH also had significantly shorter RFS (8.5 months vs. 22.0 months; HR, 2.11; 95% CI, 1.54-2.89; p < 0.001) than patients with normal LDH levels in cohort 1 (Figure S1). Subgroup analyses revealed that LDH produced a consistent prognostic value across patient subgroups stratified by age, sex, primary tumor characteristics (location, T and N stage), liver metastasis characteristics (number, maximum diameter, surgical margin, disease-free interval from the primary tumor, extrahepatic disease), perioperative chemotherapy, and preoperative CEA and CA19-9 levels, and even by Fong score. The forest plots showed a clear trend that patients with lower LDH levels obtained better survival benefits from hepatectomy in terms of OS (Figure 2). Moreover, LDH levels could refine the CRS scoring system: 8.7% of the patients in the CRS 0-1 group had LDH >ULN and presented significantly poorer outcomes than patients with LDH ≤ULN (mOS 29.7 months vs. not reached, p = 0.005); conversely, in the higher-CRS group, patients with LDH >ULN likewise showed worse OS than those with LDH ≤ULN (30.5 months vs. 60.2 months, p = 0.002) (Figure S3).

Receiver operating characteristic (ROC) analysis for the comparison of CRS and LDH-CRS prediction ability

Time-dependent ROC analysis showed that the LDH-CRS and the mCRS exhibited better predictive value than the CRS in the pooled cohort for OS (p = 0.016). The C-index of the 5-year OS probability forecast was 0.653 ± 0.029 for the CRS model, 0.674 ± 0.029 for the LDH-CRS model, and 0.681 ± 0.028 for the mCRS model (Figure 4). These results suggest that adding LDH to the CRS scoring system yields better accuracy.

Association of LDH levels and immune/inflammation-related indices

In an exploratory analysis, it is interesting to note that LDH levels varied with a set of immune/inflammatory factors (Figure 5).
Specifically, patients with elevated LDH had higher preoperative neutrophil counts (p = 0.031), higher C-reactive protein (CRP) levels (p < 0.001), and lower lymphocyte counts (p = 0.022) than patients with normal LDH levels. Consequently, patients with elevated LDH also had a lower lymphocyte-to-monocyte ratio (LMR; p < 0.001) and lymphocyte-to-neutrophil ratio (LNR; p < 0.001). By contrast, LDH levels were not associated with preoperative total white blood cell counts or monocyte counts.

DISCUSSION

Resection of colorectal liver metastases is fraught with high rates of recurrence. It represents an area of intense investigation in desperate need of predictive biomarkers to aid in surgical decision-making. In the current study, we found that LDH was the strongest prognostic factor for OS in both the univariate and the multivariate analyses. Patients with elevated LDH had a nearly two-fold higher risk of mortality (mOS, 27.6 months vs. 68.8 months). The 5-year survival rates in the normal-LDH and high-LDH groups were 53.7% and 22.5%, respectively (Figure 1). Although some scholars have investigated the utility of LDH as a serum biomarker in resectable and unresectable CRC, its usefulness has been limited by underpowered studies as well as by its nonspecificity. [19][20][21][22][23][24][25][26] To the best of our knowledge, our study is the first to address the independent prognostic impact of preoperative LDH levels in curative-intent CRLM resection. Increased LDH is closely linked to hypoxia and angiogenesis in aggressive tumor phenotypes showing accelerated growth kinetics. [28][29][30][31][32] The metabolism of fast-growing cancer cells is shifted toward high glucose uptake and enhanced lactate production. 33 In the TME, lactate promotes proinflammatory cytokines such as TNF-α, IL-1, IL-6, prostaglandins, and nuclear factor-κB; enhances immune-suppressive cells such as myeloid-derived suppressor cells (MDSCs) and dendritic cells (DCs); inhibits cytolytic cells such as natural killer (NK) cells and cytotoxic T-lymphocytes (CTLs); and recruits tumor-associated macrophages (TAMs), promoting their conversion into an immunosuppressive phenotype. [34][35][36] Therefore, elevated LDH is a negative prognostic biomarker not only because of its key role in cancer metabolism, but also because it modulates the complex interplay between the TME and the host immune system, impacting the proliferation, invasion, and migration potential of malignant tumors. 37 Interestingly, the exploratory analysis unexpectedly showed that LDH levels strongly correlated with systemic inflammation indices, namely the lymphocyte-to-monocyte ratio (LMR), the lymphocyte-to-neutrophil ratio (LNR), and C-reactive protein. In contrast, this correlation was not observed for CEA levels (data not shown). It has been reported that systemic inflammation leads to lymphocytopenia and an increased presence of TAMs, resulting in decreased cellular immunity. [38][39][40][41][42] Meanwhile, growing evidence has shown that LDH could be a marker of diminished antitumor immunity, inversely correlating with response to immune checkpoint blockade therapy. 43 Moreover, the overexpression of hypoxia-regulating factors, such as HIF-1, Foxp3, and CCL-28, might contribute to an immunosuppressive microenvironment by recruiting myeloid-derived suppressor cells (MDSCs) and TAMs. 33,44 Thus, the mechanisms or pathways regulating LDH may intersect with hypoxia and antitumor immunity.
15,32,35 LDH may serve as an alternative indicator of systemic inflammation and immunosuppression. LDH is also emerging as an anticancer target. 45,46 Herein, we postulate that the perioperative use of nonsteroidal anti-inflammatory drugs might decrease the recurrence risk after CRLM resection. 47 It has been reported that LDH can be a product of tumor necrosis due to hypoxia, which is a sign of high tumor burden. 15 In the present study, serum LDH levels did not show much relevance to most clinicopathologic parameters (such as primary tumor sidedness, T and N stage, KRAS status, pathology and differentiation, and disease-free interval). Although elevated LDH was indeed associated with the maximum diameter of CRLM in our analysis, 32.2% (30/93) of the patients with elevated LDH nevertheless had a maximum CRLM diameter below the median value (2.5 cm) (Table 2). It is also worth noting that LDH levels were not associated with the number of CRLM. Perhaps more importantly, the subgroup analysis showed that the prognostic value of LDH was independent of the number and size of CRLM. LDH also demonstrated strong prognostic value among patients with extrahepatic metastases or with an R1 surgical margin. Besides, LDH could distinguish survival regardless of the Fong score (Figure 2). The above findings suggest that the prognostic attribute of LDH in the current study might go beyond a simple indicator of heavier tumor burden. High LDH levels might denote aggressive biology in a way that is independent of traditional molecular and clinicopathologic features; LDH might be both a metabolic and an immune-surveillance prognostic biomarker. The prognostic scoring system proposed by Fong et al. has been widely used in clinical practice to stratify CRLM patients over time. 27,48,49 Nevertheless, its rationality has been questioned in current times. 50,51 The Fong score originated from a single-institution cohort, which might be influenced by local clinical practice patterns and biases. Therefore, it has not been successfully validated across different institutions, 52,53 especially in patients with long-term follow-up 54 or in the setting of neoadjuvant chemotherapy prior to hepatectomy. 55,56 Furthermore, in consideration of racial and genetic differences, data on Chinese populations are limited. Although routine CEA testing in CRC care is recommended globally, only 6.3% of patients in our data set had CEA >200 ng/ml, while a higher proportion (16%) had elevated LDH. Consistent with recent studies, 9,48,49,57 we found that CEA had insufficient statistical power to detect OS differences (p = 0.184). Notably, LDH could provide additional discriminatory ability on top of CEA and CA19-9 levels. Specifically, among patients with CEA >5 ng/ml, median OS differed between patients with elevated and normal LDH levels (24.2 months vs. 60.6 months). For patients with CEA <5 ng/ml, elevated LDH still indicated worse OS (36.3 months vs. not reached). We observed an even more significant trend for CA19-9 levels (Figure S2). Similarly, LDH stratified survival within Fong score strata (Figure S3).

[Figure 4: Receiver-operating characteristic (ROC) analysis for the comparison of different scoring systems in the prediction of overall survival in the pooled cohort. Abbreviations: CRS, Clinical Risk Score; mCRS, modified Clinical Risk Score; AUC, area under curve; C-index, concordance index; OS, overall survival.]
Therefore, combining the Fong score with LDH yielded better prognostic discriminatory ability and outperformed the Fong score alone. Remarkably, both the LDH-CRS and the mCRS identified a relatively higher proportion of patients in the high-risk group (score 4-6) than the CRS (13.2% vs. 12.0% vs. 8.5%). Thus, they could better define a portrait of the optimal candidate for CRLM resection with long-term survival, as well as a picture of patients in whom direct hepatectomy may be ill-advised and further neoadjuvant and adjuvant systemic therapy would be preferable.

[Figure 5: Associations between preoperative serum LDH levels and serum immune/inflammation-related factors. (A) LDH levels and WBC counts, (B) LDH levels and neutrophil counts, (C) LDH levels and lymphocyte counts, (D) LDH levels and monocyte counts, (E) LDH levels and C-reactive protein levels, (F) LDH levels and lymphocyte-to-monocyte ratios, (G) LDH levels and lymphocyte-to-neutrophil ratios, (H) LDH levels and lymphocyte-to-platelet ratios. Abbreviations: WBC, white blood cell; NEU, neutrophil; Lyn, lymphocyte; CRP, C-reactive protein.]

We acknowledge that our analysis has some limitations owing to its retrospective and observational nature. Some genetic parameters, including RAS, BRAF, and microsatellite status, as well as post-relapse treatment, were not available in some data sets. It would be meaningful to combine LDH with specific mutations and molecular features of CRC in the future. Recurrence time was not thoroughly recorded in cohort 2, and the estimation of RFS was not stringently carried out at protocol-specified intervals, though most physicians assessed tumor status every 8-12 weeks. Less-frequent assessment may bias in favor of a longer RFS time. Nevertheless, this factor is unlikely to influence the primary OS outcome, which genuinely reflects the clinical benefit of hepatectomy. 58 Because determining the optimal cutoff value of LDH was beyond the scope of this study, we used the ULN to dichotomize this continuous variable, and the two participating centers adopted the same ULN of 250 U/L. Finally, the enrollment dates of the two cohorts differed. Prospectively defined resectability criteria for CRLM were not established in the study protocol; therapeutic decisions were made by a multidisciplinary team (MDT) at each medical center. However, since surgical intervention itself outlines a selection process, this would minimize variations in patient selection between the two cohorts. These weaknesses must, however, be seen through the lens of clear strengths. The advantages of our study reside in the large sample size, the division into two independent cohorts, the long-term follow-up, and the heterogeneous cohort of unselected, real-world patients. We also discovered that LDH might provide additional information on tumor metabolic and immune states. The accessibility and reproducibility of the noninvasive laboratory serum LDH test support its routine use in clinical practice. We expect future studies with prospective designs to validate our findings and to provide a more explicit understanding of the molecular mechanisms of LDH in governing tumor biology.

CONCLUSION

Our study implicates the preoperative LDH level as a reliable and independent laboratory biomarker to predict the outcome of curative-intent surgery for CRLM. Integrating LDH into the established Fong scoring system can enhance its discrimination ability. The composite of LDH and Fong score is a potential stratification tool for CRLM resection.
Prospective, international studies are needed to validate these results across diverse populations.

ETHICS STATEMENT

The authors declare that ethical approval was acquired from the Research Ethics Committees of Sun Yat-sen University Cancer Center and Sun Yat-sen University Sixth Affiliated Hospital for this retrospective analysis. All patients provided written consent for the use of their data at the time of hospitalization. All methods were carried out in accordance with the Helsinki guidelines. No further ethical approval was required.

CONSENT FOR PUBLICATION

All authors have read and approved the final version to be published and signed the author disclosure form.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of the article at the publisher's website.
Central Nervous System Miliary Brain Metastasis Secondary to Breast Cancer

Miliary metastasis to the central nervous system (CNS) is a rare presentation of metastasis, mainly found in primary adenocarcinoma of the lung; its association with breast cancer is even less frequent. We present the case of a 50-year-old female patient diagnosed in 2010 with stage IIA infiltrating ductal breast cancer, ER (-), PR (+), HER2 (-), HER2 NEU (+). She was treated with modified radical left mastectomy, radiation therapy, and chemotherapy. Her current condition began with an oppressive frontal headache without irradiation, predominantly in the evening, intensity 8/10, which decreased when sleeping and was exacerbated by stressful situations, in addition to progressive cognitive deterioration. Simple and contrast-enhanced cranial and thoracoabdominal computed tomography (CT) was requested, showing multiple micronodular lesions of calcium density in the brain parenchyma, left pleural effusion, hypo- and hyperdense lesions in the liver parenchyma, as well as osteoblastic lesions in the lumbar spine. Simple and contrast-enhanced magnetic resonance imaging (MRI) of the skull showed multiple supra- and infratentorial intra-axial lesions. The most frequent symptom associated with miliary metastasis is cognitive impairment. Miliary metastasis, confirmed by imaging studies and histopathology, requires ruling out other causes of this calcification pattern, such as neurocysticercosis, given the specific treatment required for each pathology.

Introduction and Background

Miliary metastasis was first described in 1951 as "carcinomatous encephalitis", described as multiple plaques formed in a perivascular distribution [1]. It is a rare presentation in breast cancer, and its mechanism of occurrence is not yet fully known [2][3]. Physiological calcifications (pineal gland, choroid plexus, habenula, falx cerebri, tentorium, among others) and pathological ones (tuberculosis, cysticercosis, TORCH disease (Toxoplasma gondii; others, including Treponema pallidum, Listeria, varicella, and parvovirus B19; rubella virus; cytomegalovirus (CMV); and herpes simplex virus (HSV)), chronic viral encephalitis, Fahr's disease, thyroid or parathyroid disease) [4] can complicate the clinical diagnosis of miliary metastasis, since they present with similar symptoms such as hemiparesis, dysarthria, short/long-term memory loss, seizures, language abnormalities, ataxia, dementia, psychosis, or headache [2,5]. The objective of this work is to describe a case of central nervous system miliary metastasis secondary to breast cancer, together with a literature review on this topic.

Patients and methodology

A literature review of both English- and Spanish-language publications was performed in the Medline database using the following keywords: "miliary calcifications brain", "miliary brain metastases", "metastasic breast cancer", and "miliary brain calcifications", as well as their respective keywords in Spanish. Independent of the primary tumor lineage, all case reports from 1988 to 2019 on miliary brain metastases in the central nervous system were selected and their clinical characteristics described. Likewise, we report the case of a patient referred to the neurology service of our unit, where this diagnosis was made.

Case presentation

A 50-year-old female nurse with a family history of diabetes mellitus denied smoking and drug use, reported occasional alcohol consumption, and was allergic to sulfonamides. She denied chronic degenerative diseases, transfusions, or trauma-related conditions.
She was diagnosed with breast cancer in 2010 and treated with a modified radical mastectomy of the left breast. The histological variety was described as invasive ductal carcinoma, stage IIA, ER (-), PR (+), HER2 (-), HER2 NEU (+). She received adjuvant chemotherapy with anthracyclines and taxanes sequentially (eight cycles), 25 sessions of radiotherapy, and tamoxifen for five years, with extended adjuvant anastrozole and exemestane to date. Symptoms began on July 30, 2019, when she described an acute frontal headache without irradiation, predominantly in the evening, intensity 8/10. It decreased while sleeping and was exacerbated by stressful situations, without other accompanying symptoms. On August 5, 2019, she attended her oncology checkup and reported her symptoms. In addition, the physical examination noted disorientation in time and space and a language disorder, her speech being incoherent and inappropriate. It was decided to hospitalize her for a complete study protocol. On initial examination, her vital signs were within normal parameters; she was normocephalic without obvious lesions, with a rhythmic precordium without murmurs, left basal hypoventilation without crackles and with decreased vocal resonance, and abdomen and limbs without abnormalities. On neurological examination, the patient showed alterations in mental state, with disorientation in time and space. The Folstein Mini-Mental State Examination score was 19/30, with impaired memory, abstraction, judgment, and language. Cranial nerve examination showed no abnormalities. Motor examination showed preserved tone and trophism in all extremities; strength was 4/5 proximally and distally in the right arm and 5/5 elsewhere. Deep tendon reflexes were ++/++++ and the Babinski sign was negative. The sensory system was not objectively evaluated due to the mental state of the patient. Cerebellar examination showed bilateral eumetria and eudiadochokinesia, without pathological nystagmus, as well as normal gait. There were no signs of meningeal irritation. Pathologic reflexes were present: sucking, palmomental, and grasping. The patient was admitted with a diagnosis of an acute confusional state. A simple and contrast-enhanced cranial CT scan was performed, showing multiple micronodular lesions throughout the cerebral parenchyma, in both white and gray matter, without meningeal enhancement (Figure 1, Appendix 1). Complementary tests showed complete blood count, three-element blood chemistry, serum electrolytes, and thyroid function tests within normal parameters. Liver function tests reported total bilirubin 0.3 mg/dL, conjugated bilirubin 0.3 mg/dL, aspartate aminotransferase (AST) 43 U/L, alanine transaminase (ALT) 52 U/L, gamma-glutamyl transferase (GGT) 158 U/L, alkaline phosphatase 100 U/L, albumin 3 g/dL, and parathormone 22.7 pg/mL (reference 4-58.1). Cerebrospinal fluid analysis was reported as acellular; the acid-fast bacilli (AFB) test was negative, the cysticercus antigen was negative, and proteins were 50 mg/dL. Serology was as follows: CMV immunoglobulin G (IgG) >180, CMV IgM <5, toxoplasmosis IgG 22.5, toxoplasmosis IgM <3, rubella IgG 13.8, and rubella IgM 11.7. Anti-HIV 1 and 2 (anti-human immunodeficiency virus 1 and 2) were negative, hepatitis C was negative, and hepatitis B surface antigen was negative. With these results, and given the clinical presentation, dexamethasone was started.

FIGURE 1: Transverse cerebral CT scan. Red arrows in A-D: micronodular lesions of calcium density distributed throughout the brain parenchyma, with gray and white matter involvement. CT: computed tomography.
Simple and contrast-enhanced thoracoabdominal CT was requested, which showed left pleural effusion, multiple hypodense hepatic lesions with contrast, as well as osteoblastic lesions at the lumbar spine level (Figure 2). MRI of the skull showed multiple intra-axial supra- and infratentorial lesions with gadolinium enhancement in the T1 sequence (Figure 3).

FIGURE 3: Cerebral MRI (T1). A-D: Multiple hyperintense micronodular lesions in T1 distributed heterogeneously in the brain parenchyma (red arrows). The white matter lesions showed enhancement after administration of gadolinium. MRI: magnetic resonance imaging.

The patient was treated with prophylactic anticonvulsants and anti-edema steroids and was transferred to a tertiary center to receive holocranial radiotherapy.

Discussion

Breast cancer is the most common cancer in women in the United States. In 2018, the World Health Organization (WHO) estimated that breast cancer affects 2.1 million women each year and caused 627,000 deaths, accounting for 15% of all cancer deaths among women. Nearly 30% of new breast cancer diagnoses have already spread to regional lymph nodes, and 5% present with distant metastases at the time of diagnosis, with a median age at diagnosis of 50 years [5][6]. Breast cancer can be classified by microarray techniques into several intrinsic subtypes: luminal A, luminal B, HER2-enriched, and triple-negative [7]. Central nervous system (CNS) metastasis is most frequently observed in the following subtypes of breast cancer: triple-negative, which shows a higher incidence of visceral and cerebral tumor metastasis (46%); HER2-positive (5-30%); and TP-53 positive, the last of which was reported to have a 38% higher probability of CNS metastasis compared with TP-53 negative disease [5][6]. Metastatic lesions have been related to the initial site of appearance; CNS metastasis is more common after initial metastasis to the bone (26.7%). Once pleural metastasis has been established, disseminated disease is more common, including CNS involvement in 63.6% [8]. Twenty-six cases of miliary metastasis to the CNS have been reported to date. The most frequent primary tumor was the lung (61.54%), followed by the breast and tumors of unknown origin, with 11.54% each. The clinical presentation of miliary metastasis to the CNS is heterogeneous; the most frequent symptom overall was cognitive impairment, in 28%. In patients with primary breast tumors, the most frequent symptoms were psychiatric alterations and language disorders. To date, the cancer most associated with miliary metastasis to the central nervous system is lung cancer. The importance of our case report lies in the rarity of this association and the clinical description of its manifestations: of all the reported cases, this is the third worldwide with a primary origin in the breast. In addition, the most frequent symptomatology to date has been described according to each lineage. Among the differential diagnoses to be considered in our population are neurocysticercosis and CNS tuberculosis; however, clinically, the former usually presents with epilepsy, while tuberculosis tends to manifest more frequently as basal arachnoiditis affecting the lower cranial nerves [9][10]. In consideration of these findings and the data from CT, MRI, and the cerebrospinal fluid (CSF) study, we excluded these diagnostic possibilities. The main weakness of our study was the lack of a confirmatory histopathological study.
However, the most relevant differential diagnoses were excluded.

Conclusions

CNS metastases are common among cancers, but miliary metastasis is rare, and its occurrence in breast cancer is even less common. Therefore, its finding requires a differential diagnosis against other causes of this calcification pattern, such as neurocysticercosis, given the specific treatment required for each pathology.
Expression pattern analysis of m6A regulators reveals IGF2BP3 as a key modulator in osteoarthritis synovial macrophages

Background: Disruption of N6-methyladenosine (m6A) modulation hampers gene expression and cellular functions, leading to various illnesses. However, the role of m6A modification in osteoarthritis (OA) synovitis remains unclear. This study aimed to explore the expression patterns of m6A regulators in OA synovial cell clusters and identify key m6A regulators that mediate synovial macrophage phenotypes.

Methods: The expression patterns of m6A regulators in the OA synovium were illustrated by analyzing bulk RNA-seq data. Next, we built an OA LASSO-Cox regression prediction model to identify the core m6A regulators. Potential target genes of these m6A regulators were identified by analyzing data from the RM2target database. A molecular functional network based on core m6A regulators and their target genes was constructed using the STRING database. Single-cell RNA-seq data were collected to verify the effects of m6A regulators on synovial cell clusters. Conjoint analyses of bulk and single-cell RNA-seq data were performed to validate the correlation between m6A regulators, synovial clusters, and disease conditions. After IGF2BP3 was screened as a potential modulator in OA macrophages, the IGF2BP3 expression level was tested in OA synovium and macrophages, and its functions were further tested by overexpression and knockdown in vitro.

Results: The OA synovium showed aberrant expression patterns of m6A regulators. Based on these regulators, we constructed a well-fitting OA prediction model comprising six factors (FTO, YTHDC1, METTL5, IGF2BP3, ZC3H13, and HNRNPC). The functional network indicated that these factors were closely associated with OA synovial phenotypic alterations. Among these regulators, the m6A reader IGF2BP3 was identified as a potential macrophage mediator. Finally, IGF2BP3 upregulation was verified in the OA synovium, where it promoted macrophage M1 polarization and inflammation.

Conclusions: Our findings reveal the functions of m6A regulators in the OA synovium and highlight the association between IGF2BP3 and enhanced M1 polarization and inflammation in OA macrophages, providing novel molecular targets for OA diagnosis and treatment.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12967-023-04173-9.

Background

Osteoarthritis (OA) is the most common disabling joint disease and seriously hampers quality of life. As of 2020, OA was estimated to affect over 500 million people worldwide [1]. As a multifactorial disease, mechanical overloading, trauma, inflammation, metabolism, and genetic vulnerabilities are potential risk factors for OA [2]. However, given its unclear pathogenesis, no radical cure for OA has been discovered. Current therapeutic strategies focus on pain relief and lubrication, whereas knee replacement is the only option for patients with late-stage OA to partially regain motor function [3]. Therefore, elucidating the mechanisms of OA is important for disease prevention and eradication. As a whole-joint disease, the pathological changes of OA involve a wide range of articular tissues, including cartilage, subchondral bone, ligaments, menisci, fat pads, and synovium [4]. The synovium exhibits abnormalities at the onset of OA, even before visible cartilage loss, and the degree of synovitis is closely related to disease progression [5].
Synovial macrophages are the major immunocytes in the synovium, and emerging evidence suggests that they play a pivotal role in OA development [6]. In brief, OA macrophages polarize to the proinflammatory M1 subtype, marked by CD86 (or iNOS in mice), which secretes inflammatory cytokines (such as IL-1β, IL-6, and TNF-α) that accelerate synovitis and chondrocyte senescence [7]. Meanwhile, the M2 polarization of macrophages, marked by CD206 (also named MRC1), is largely inhibited, leading to decreased production of anti-inflammatory mediators (such as IL-4 and IL-10) and insufficient tissue repair [8]. Importantly, studies have shown that clearance of pathological macrophages relieves joint pain, synovitis, cartilage damage, and osteophyte formation [9,10]. Therefore, targeting synovial macrophages may be a promising approach for treating OA. N6-methyladenosine (m6A) modification is the most prevalent post-transcriptional modification in mammals, occurring in nearly 0.1-0.4% of adenosines and accounting for approximately 50% of all methylated ribonucleosides [11]. Over 80% of m6A modifications appear in messenger RNA (mRNA) and are mainly detected in the consensus sequence RRACH (R = A or G and H = A, C, or U) near transcript termination codons and 3′ untranslated regions (3′ UTRs) [12,13]. As a reversible process, m6A modification is precisely controlled by three groups of regulatory proteins: writers, erasers, and readers [14]. Writers are a group of methyltransferases that recognize RRACH sequences with a methyltransferase domain and transfer the methyl group from S-adenosylmethionine to the adenosine of RNA [15]. In contrast, erasers are primarily demethylases that catalyze the removal of m6A from RNA [14]. After modulation by writers and erasers, readers with unique m6A-recognition domains bind to m6A-modified mRNA and enhance its stability by binding RNA stabilizers (such as HuR and matrin 3), or promote its decay by bridging the mRNA to RNase P or endoribonucleases [16,17]. In addition, m6A modification regulates other mRNA dynamic processes such as splicing, export, and translation, and these processes also interact closely with mRNA stability and decay [17]. Recent studies have found that aberrant m6A modifications may be correlated with OA [18][19][20][21][22]. Overexpression of the m6A writer METTL3 inhibited extracellular matrix (ECM) synthesis in chondrocytes, whereas METTL3-modified ATG7 inhibited autophagy and promoted senescence in OA fibroblasts [18,23]. FTO-dependent m6A demethylation mediates the upregulation of AC008 to induce OA chondrocyte apoptosis [22]. However, the mechanism by which m6A regulators participate in the pathogenesis of OA synovitis remains unclear. Insulin-like growth factor 2 mRNA-binding protein 3 (IGF2BP3) is an important m6A reader in eukaryotes [24]. It preferentially binds to m6A-containing mRNAs and enhances their stability by protecting them from endonuclease digestion and miRNA-induced degradation [25]. IGF2BP3 is a well-known oncogene that promotes cancer cell proliferation, survival, drug resistance, and metastasis [26][27][28]. It has also been identified as an inflammation-triggering factor that activates NF-κB signalling to enhance epithelial cell injury [29]. In OA, IGF2BP3 was found to be upregulated in damaged cartilage and in IL-1β-induced chondrocytes, while its expression and function in OA synovitis remain unclear [30].
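As a toy illustration of the RRACH consensus described above, the short R snippet below counts candidate m6A motifs in a transcript sequence with a simple regular expression. It is a didactic sketch only: motif matches do not imply true methylation, which is context-dependent and must be mapped experimentally (e.g., by MeRIP-seq).

```r
# Toy sketch: counting RRACH consensus motifs (R = A/G, H = A/C/U) in a
# transcript. This only flags candidate sites; real m6A calls require
# experimental mapping, since only a fraction of motifs are methylated.
count_rrach <- function(seq) {
  seq <- toupper(gsub("U", "T", seq))   # work in the DNA alphabet
  # Zero-width lookahead so overlapping motifs are all counted
  hits <- gregexpr("(?=[AG][AG]AC[ACT])", seq, perl = TRUE)[[1]]
  if (hits[1] == -1) integer(0) else as.integer(hits)
}

tx <- "GGACUUAGGACAAAGAACUGGACU"        # made-up RNA fragment
sites <- count_rrach(tx)
length(sites)   # number of candidate RRACH motifs
sites           # 1-based start positions within the sequence
```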
This study aimed to explore the expression patterns of m6A regulators in OA synovial cell clusters and identify key m6A regulators that mediate synovial macrophage phenotypes. The levels of major m6A regulators and their potential targets in the OA synovium were obtained from bulk RNA-seq data. Next, the locations of m6A regulators and their downstream targets were matched to specific cell clusters by analyzing single-cell RNA-seq (scRNA-seq) data. Among all the core m6A regulators, the m6A reader IGF2BP3 was verified to be upregulated in OA synovial macrophages and to play an important role in promoting macrophage inflammation and M1 polarization. Our study showed that the expression patterns of m6A regulators differed significantly between normal and OA synovium, as well as among different synovial cell clusters. Targeted regulation of the m6A regulators that are mainly expressed and take effect in OA synovial cell clusters may serve as a promising approach to modulate the functions of these cells, thus alleviating the progression of OA synovitis.

Clinical samples

Our study was approved by the Institutional Review Board (IRB) of the Third Affiliated Hospital of Southern Medical University (ethics approval code: 2022-lunshen-053). All patients involved signed written informed consent. Synovial samples were collected from 10 late-stage OA patients undergoing total knee replacement surgery, and normal synovium samples were obtained from 10 patients during arthroscopy for trauma or joint derangements. Patients with hypertension, diabetes, hyperlipidemia, rheumatoid arthritis (RA), other diseases affecting the joints, or a body mass index (BMI) greater than 35 were excluded from this study. The overall characteristics, including gender, age, and BMI, are listed in Table 1.

Cells

Bone marrow-derived macrophages (BMDMs) were harvested from the bone marrow of 6-week-old male C57BL/6J mice. After the mice were sacrificed, femurs and tibias were separated and collected. Bone marrow cavities were exposed and flushed with complete DMEM (Gibco, Carlsbad, CA, USA) containing 10% fetal bovine serum (FBS) (Gibco). Red blood cells were then removed. The remaining cells were maintained for 24 h in complete DMEM. Non-adherent cells were collected by centrifugation and plated in complete DMEM containing 10% FBS. Meanwhile, 30 ng/mL macrophage colony-stimulating factor (M-CSF; R&D Systems, Minneapolis, MN, USA) was supplemented to induce the survival, proliferation, and differentiation of macrophages for 72 h. BMDMs administered 500 ng/mL lipopolysaccharide (LPS) (Invitrogen, San Diego, CA, USA), 5 ng/mL interleukin-1β (IL-1β) (R&D Systems), or 20 ng/mL IL-4 (R&D Systems) were harvested 24 h after treatment. Lipofectamine 3000 (Thermo Fisher Scientific, Waltham, MA, USA) and Lipofectamine RNAiMAX (Thermo Fisher Scientific) were used for plasmid and siRNA transfection, respectively. BMDMs transfected with IGF2BP3-overexpressing plasmids (2 μg/mL) (Tsingke, Beijing, China) or IGF2BP3 siRNA (Tsingke) were collected 48 h (for RNA) or 60 h (for protein) after transfection. For mechanical overloading, we applied methods validated in our previous study [31]. In brief, BMDMs were seeded into fibronectin-coated silicon stretch chambers at a density of 1 × 10^5 cells/chamber. Cyclic tensile strain of 20% elongation was applied for 24 h to establish a mechanically overloaded cell model using the FLEXCELL-5000 mechanical stretch system in a CO2 incubator. Control cells were seeded onto the same plates and cultured without cyclic tensile strain.
Control cells were seeded onto the same plate and cultured without cyclic tensile strain. The qRT-PCR Total RNA was isolated from BMDMs grown in 6-well plates using 1 mL/well TRIzol reagent (Takara Bio Inc., Shiga, Japan). 1 μg RNA sample was reverse transcribed to produce cDNA with Reverse transcription kit (Vazyme Biotech, Nanjing, China). The quantitative PCR (qPCR) assays were conducted to testify the expression levels of IGF2BP3, iNOS, CD206, IL-1β, IL6, TNF-α, IL-4 and IL-10 mRNAs relative to GAPDH mRNA applying Real-Time PCR Mix (Vazyme Biotech) in a 2 × ChamQ SYBR qPCR Master Mix (Vazyme Biotech). Primers for qPCR in this study were listed in Table 2. Immunofluorescence (IF) staining Mid-sagittal sections (4 μm thick) of paraffin-embedded clinical synovial samples were deparaffinized and rehydrated. Antigen retrieval was conducted by soaking slides in Tris-EDTA pH9.0 in a microwave oven for 10 min. After soaking three times in PBS, slides were administered with 3% hydrogen peroxide for 10 min at room temperature. Differentially expressed genes (DEGs) analysis The DEGs between normal and OA synovium groups and between C1 and C2 clusters based on IGF2BP3-related genes were identified using the limma package with an FDR-corrected p value < 0.05, fold change (FC) > 1.5 or FC < 1/1.5 [34]. A boxplot of m6A regulators in normal and OA synovium groups was drawn using the function ggboxplot of the R package ggpubr [35]. Volcano plot was drawn with function ggplot of the R package ggplot2 [36]. Heatmap was composed with function pheatmap of the R package pheatmap. LASSO (least absolute shrinkage and selection operator)-Cox regression modeling LASSO-Cox regression modelling was performed using the LASSO-Cox regression tool of Sangerbox, which contained the built-in R package glmnet [37]. The expression matrices of m6A regulators in GSE89408, GSE55235, and GSE5545 were extracted, with m6A regulator expression matrix of GSE89408 as the training set, and m6A regulator expression matrix of GSE55235 and GSE55457 as the testing sets of modelling. The survival times were uniformly set to the same value to cancel the impact of survival on the model since OA is a non-fatal disease, and the status of healthy samples was set as 0, whereas the status of OA samples was set as 1. Receiver operating characteristic (ROC) curve of a single m6A factor was constructed with function roc and plot of the R package pROC and ggplot2 [38]. Identification of m6A regulator-regulated genes Potential targeting genes of m6A regulators were identified by downloading RNA-seq data from the RM2target database and setting |logFC|> 2, FDR-corrected p value < 0.05 after m6A regulator perturbation (knock out or knock down). Next, function cor.test of the R package stats was applied to screen co-expression genes whose expression levels were significantly correlated with designated m6A regulators (statistical type set as "spearman" and p value < 0.05). These genes were intersected with RM2target collected genes and sorted according to the absolute value of the correlation coefficient with m6A regulotor expression in descending order to compose the upregulated and downregulated gene sets of each m6A regulator, with a maximum gene number setting of 50. Protein-protein interaction (PPI) network construction and hub gene network identification Key m6A regulators along with their regulated genes were used for PPI network construction using the STRING database (see Additional file 1). 
Protein-protein interaction (PPI) network construction and hub gene network identification
Key m6A regulators, along with their regulated genes, were used for PPI network construction using the STRING database (see Additional file 1). Multiple-protein mode was selected, the organism was set as "Homo sapiens", and the minimum interaction score was set at high confidence (0.700). Cytoscape 3.9.1 software was applied for network visualization and PPI network topological analysis [39]. Genes were labeled according to the m6A regulators they belong to. The cluster query plugin MCODE was used for identifying hub gene networks, with the degree cutoff set to 2, node score cutoff set to 0.2, K-core set to 2, and max depth set to 100 [40].

Hallmark gene set functional enrichment and KEGG enrichment analysis
Genes from the whole and hub gene networks were used for hallmark gene set enrichment analysis. The hallmark gene sets represent 50 specific, well-defined biological states or processes and are stored in the Molecular Signatures Database (MSigDB) [41]. Network genes were set as input for enrichment with the function enricher of the R package clusterProfiler, with the database set as the hallmark gene sets and an FDR-corrected p value threshold of 0.05, and were visualized with the function GOChord from the R package GOplot [42,43]. Enrichment analysis of the upregulated and downregulated genes of the C2/C1 clusters was performed with the function enrichKEGG from the R package clusterProfiler, with an FDR-corrected p value threshold of 0.05, and visualized with the function dotplot from the R package clusterProfiler.

scRNA-seq quality check and batch effect removal
The raw gene expression matrices of GSE152805 were converted into a Seurat object using the R package Seurat [44]. Cells with fewer than 200 expressed genes, over 10,000 expressed genes, or over 20% of UMIs derived from the mitochondrial and ribosomal genomes were excluded. For the remaining cells, the gene expression matrices were normalized to total cellular read count and to mitochondrial percentage with the function NormalizeData, and were standardized with the function ScaleData [44].

Dimensionality reduction and clustering
The function RunPCA was used to calculate the principal components (PCs) [44]. Batch effects were then removed using Harmony [45]. The RunTSNE function in its default setting was applied to visualize the first 20 Harmony-aligned coordinates. Differential gene expression tests were run using the function FindAllMarkers with min.pct set to 0.25 and logfc.threshold set to 0.25. Acknowledged cell markers of synovial cell clusters were collected for cell cluster annotation, and marker genes were visualized with the function dotplot [46]. The Python package PHATE was then used for denoised dimensionality reduction [47].
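A minimal sketch of the quality-check and clustering pipeline just described, assuming a raw count matrix `counts` and a per-cell batch label `sample_id` (hypothetical names); for brevity, only the mitochondrial fraction is used for filtering here, whereas the text also considers ribosomal UMIs:

library(Seurat)
library(harmony)

sc <- CreateSeuratObject(counts = counts)
sc[["percent.mt"]] <- PercentageFeatureSet(sc, pattern = "^MT-")
# thresholds as stated: 200-10,000 expressed genes, < 20% mitochondrial UMIs
sc <- subset(sc, nFeature_RNA > 200 & nFeature_RNA < 10000 & percent.mt < 20)
sc <- NormalizeData(sc)
sc <- FindVariableFeatures(sc)
sc <- ScaleData(sc)
sc <- RunPCA(sc)
sc <- RunHarmony(sc, group.by.vars = "sample_id")      # batch effect removal
sc <- RunTSNE(sc, reduction = "harmony", dims = 1:20)  # first 20 Harmony coordinates
sc <- FindNeighbors(sc, reduction = "harmony", dims = 1:20)
sc <- FindClusters(sc)
markers <- FindAllMarkers(sc, min.pct = 0.25, logfc.threshold = 0.25)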
AUCell scoring of m6A regulator-regulated genes
m6A regulator-regulated genes were set as input genes, scored, and visualized with the R package irGSEA. Briefly, the function irGSEA.score was used for scoring, with species set as Homo sapiens, method set as AUCell, and kcdf set as Gaussian, and the scores were then integrated with the function irGSEA.integrate. The AUCell score heatmap was created with the function irGSEA.heatmap, and the AUCell score feature plots were constructed with the function irGSEA.density.scatterplot.

Merge and batch effect removal of bulk RNA-seq datasets
The GEO datasets GSE55235, GSE55457, GSE82107, GSE55584, and GSE89408 were normalized and set as input data in the batch removal tool of Sangerbox, which contains the built-in R package inSilicoMerging for data merging and the R package sva for batch effect removal, with the method set as ComBat [37,48].

Cell cluster deconvolution
The R package MuSiC was applied to determine the cell type proportions in the merged bulk gene expression matrix composed of GSE55235, GSE55457, GSE82107, GSE55584, and GSE89408 [49]. Briefly, the function music_prop was used for cell cluster deconvolution, with sc.sce set as the expression matrix of GSE152805.

ssGSEA scoring of m6A regulator-regulated genes
m6A regulator-regulated genes were used as input gene sets. The ssGSEA scores of each sample in the merged bulk RNA-seq dataset were calculated with the function gsva of the R package GSVA in its default setting, with the method set as ssGSEA [50]. Visualization of the ssGSEA scores was performed with the function ggboxplot. The Pearson correlation between the ssGSEA scores and the estimated cell ratios of the bulk RNA-seq data was calculated with the function corr.test of the R package psych and was visualized with the function geom_heat_tri of the R package ggDoubleHeat.

Sample clustering based on IGF2BP3-regulated genes
The expression matrix of IGF2BP3 upregulated and downregulated genes in the merged bulk RNA-seq dataset was used for sample clustering and visualization with the clustering tool of Sangerbox, which contains the built-in R package ConsensusClusterPlus, with the maximum cluster number set to 10, number of subsamples set to 10, proportion of items to sample set to 0.8, and distance set as "pearson" [37,51]. When k = 2, the cluster consensus was highest. PCA reduction of C1 and C2 was conducted with the function prcomp of the R package stats and visualized with the R package ggplot2.

Statistical analyses
Experiments including qPCR, western blot (WB), flow cytometry, and cellular fluorescence were conducted in triplicate. The IF results in synovium tissues were estimated by two independent observers. Data are displayed as the mean ± SD. An unpaired Student's t-test was used to compare two groups of data. For data involving more than two groups, a one-way analysis of variance (ANOVA) was performed, followed by Tukey's post-hoc test. Statistical significance was set at P < 0.05.
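A sketch of the ssGSEA scoring and correlation steps described in the methods above, assuming `merged_expr` (genes x samples), a named list `gene_sets` of m6A regulator-regulated genes, and a deconvolved `cell_prop` matrix (all hypothetical names). The classic gsva() interface is shown; newer GSVA releases wrap the same call in ssgseaParam, and stats::cor.test is used here for a single pair rather than psych::corr.test:

library(GSVA)

# ssGSEA scores: rows = gene sets, columns = samples
scores <- gsva(as.matrix(merged_expr), gene_sets, method = "ssgsea")

# Pearson correlation between one gene-set score and the estimated macrophage proportion
cor.test(scores["IGF2BP3_up", ], cell_prop[, "Macrophage"], method = "pearson")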
M6A regulators were dysregulated in OA synovium
This study was conducted according to the flowchart shown in Fig. 1. Briefly, the bulk RNA-seq dataset GSE89408 was downloaded to identify differentially expressed m6A regulators and to build a LASSO-Cox regression model for screening core m6A regulators in the OA synovium. Next, the potential targets of these regulators were collected from the RM2target database to construct a molecular network indicating their interactions. We then used the scRNA-seq dataset GSE152805 to identify the localization of the core m6A regulators and their target genes. A merged dataset composed of five GEO datasets (GSE55235, GSE55457, GSE82107, GSE55584, and GSE89408) was used for cell-type deconvolution, ssGSEA scoring of m6A regulator-targeted genes, and correlation analysis between cell proportions and ssGSEA scores. These results validated the strong correlation between IGF2BP3 and OA synovial macrophages. Additionally, samples in the merged dataset were clustered based on the expression of IGF2BP3-targeted genes to identify differentially enriched pathways between high- and low-IGF2BP3 clusters. Finally, IGF2BP3 expression was detected in clinical samples and its functions were explored in BMDMs.

First, we aimed to elucidate the alterations in the expression profile and m6A regulator levels in the OA synovium. DEG analysis of the bulk RNA-seq dataset GSE89408 (containing 28 normal and 22 OA synovial samples) showed 789 upregulated and 63 downregulated genes in the OA synovium compared with controls, indicating a distinct expression profile under OA conditions (Fig. 2A). The top 20 enhanced and suppressed genes in the OA synovium are displayed as a heatmap (Fig. 2B). Next, the expression levels of 28 key m6A regulators (including 11 m6A writers, 2 m6A erasers, and 15 m6A readers; Fig. 2C) were determined in individual synovial samples and in whole sample groups, shown as heatmaps and boxplots, respectively (Fig. 2D, E). Among these regulators, four writers (METTL4, METTL5, METTL14, and WTAP), one eraser (FTO), and five readers (IGF2BP3, HNRNPC, YTHDF2, YTHDF3, and YTHDC2) were differentially expressed; more precisely, all were upregulated in the OA synovium, indicating an overactive m6A regulatory process in OA.

m6A regulators formed an effective prediction model of OA
To identify the core m6A regulators associated with OA progression and establish an effective OA prediction model, the GSE89408 dataset was subjected to LASSO-Cox regression analysis, a method for refined predictive modelling and key variable extraction [52]. When lambda was 0.0230, six regulators (FTO, YTHDC1, METTL5, IGF2BP3, ZC3H13, and HNRNPC) were screened for modelling (Risk Score = 0.582 × FTO + 0.093 × HNRNPC + 0.240 × IGF2BP3 + 0.095 × METTL5 − 0.988 × YTHDC1 − 0.145 × ZC3H13; Fig. 3A, B). Figure 3C shows the relationship between the OA risk scores and the expression levels of these six genes. In the training set, the ROC curve showed a large area under the curve (AUC) of 0.92, indicating highly accurate OA identification (Fig. 3D). Single-molecule ROC curves of IGF2BP3, HNRNPC, FTO, METTL5, ZC3H13, and YTHDC1 had AUCs of 0.781, 0.724, 0.720, 0.713, 0.550, and 0.536, respectively, indicating that the factors with AUC > 0.7 (IGF2BP3, HNRNPC, FTO, and METTL5) have high diagnostic value on their own (Fig. 3D). Next, GSE55235 (comprising 10 control and 10 OA synovium samples) and GSE55457 (comprising 10 control and 10 OA synovium samples) were used as testing sets. As expected, our model performed well in separating control and OA samples, showing high AUCs of 0.88 and 1.00 in the testing sets GSE55457 and GSE55235, respectively (Additional file 2: Fig. S1A-D). In summary, a model based on m6A regulators showed high efficacy in predicting OA, and six m6A regulators were identified as key candidate variables closely related to OA. Their expression and functions warrant further investigation.
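Using the published coefficients, the risk score and its ROC curve can be reproduced in a few lines; `expr` (samples x regulators) and `status` (0 = healthy, 1 = OA) are assumed, hypothetical names:

library(pROC)

# coefficients as reported in the risk score above
coef_vec <- c(FTO = 0.582, HNRNPC = 0.093, IGF2BP3 = 0.240,
              METTL5 = 0.095, YTHDC1 = -0.988, ZC3H13 = -0.145)
risk <- as.matrix(expr[, names(coef_vec)]) %*% coef_vec

roc_obj <- roc(response = status, predictor = as.numeric(risk))
auc(roc_obj)   # the paper reports AUC = 0.92 on the training set
plot(roc_obj)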
The functional network suggested a potential role of core m6A regulators in mediating OA synovitis
Since one of the major mechanisms of m6A regulators is binding to downstream mRNAs and mediating their stability and decay, the potentially targeted mRNAs were further included to build a molecular network based on these six core m6A regulators [16,17,53]. Target genes of the core m6A regulators in the synovium were screened using the strategy described in the "Methods" section (Additional file 2: Fig. S2A, B) and are displayed in Additional file 2: Fig. S2C (upregulated target genes) and Fig. S2D (downregulated target genes); no ZC3H13-downregulated genes were identified. Next, a PPI network was built from these m6A regulators and their potential target genes to infer their interactions and signaling roles in OA (Fig. 4A). In this network, the FTO-, HNRNPC-, IGF2BP3-, METTL5-, and YTHDC1-regulated genes interacted extensively at the protein level, whereas few interactions were observed for ZC3H13-regulated genes. Subsequently, two hub gene networks were identified using the MCODE plugin (Fig. 4B, C). Hub gene network 1 was mainly composed of YTHDC1- and IGF2BP3-regulated genes, whereas hub gene network 2 mainly contained METTL5- and YTHDC1-regulated genes, indicating that these three m6A regulators and their regulated genes were more densely interconnected and may play a more important role in mediating the biological processes of the synovium. Furthermore, the hallmark gene sets, representing 50 specific well-defined biological states or processes, were used for functional enrichment to identify crucial pathways co-regulated by these core m6A regulators. The three significantly enriched pathways of the m6A interaction network were IFN-γ response, G2M checkpoint, and apoptosis, which are highly consistent with the aberrant inflammation and cell cycle activity in the OA synovium (Fig. 4D). While hub gene network 1 was mainly enriched in cell cycle-related pathways (G2M checkpoint, E2F targets, and mitotic spindle), the genes of network 2 were strongly correlated with inflammation (IFN-γ, IFN-α, and TNF-α signalling via NF-κB), indicating that these two hub gene groups can serve as a functional miniature of the complex m6A network (Fig. 4E, F). In summary, our data show a strong correlation between core m6A regulators and abnormal functions of the OA synovium.

scRNA-seq revealed the localization of core m6A regulators and their target genes
To determine the distribution of these six core m6A factors, public scRNA-seq data (GSE152805) of three OA synovial samples were collected, and data quality checks and batch effect removal were performed (Additional file 2: Fig. S3A-C). Based on acknowledged cell markers, OA synovial cells were categorized into six cell clusters (Fig. 5A, B). The expression of the six core m6A regulators is displayed in a heatmap and feature plots (Fig. 5C, Additional file 2: Fig. S3D). Here, we mainly focused on the major cell cluster (fibroblasts) and the immunocyte cluster (macrophages) of the synovium, because they are the main effector cells in OA synovitis [54]. Relatively high FTO, METTL5, YTHDC1, and ZC3H13 levels and moderate HNRNPC expression were found in fibroblasts, whereas IGF2BP3 and HNRNPC were highly expressed in macrophages. Subsequently, the AUCell scores of the up-/downregulated genes of the m6A regulators were evaluated in each cell cluster (Fig. 5D, E, Additional file 2: Fig. S3E).
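As a sketch of this per-cell scoring step, shown here with the AUCell package directly rather than the irGSEA wrapper used in the text; `expr_matrix` and the gene-set list `m6a_target_sets` are hypothetical names:

library(AUCell)

# rank genes within each cell, then score each gene set by the area under the recovery curve
rankings <- AUCell_buildRankings(expr_matrix)
auc_scores <- AUCell_calcAUC(m6a_target_sets, rankings)
getAUC(auc_scores)[, 1:5]   # per-cell scores for the first five cells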
In accordance with the IGF2BP3 and HNRNPC distribution, OA synovial macrophages showed high expression of IGF2BP3- and HNRNPC-upregulated genes, whereas IGF2BP3- and HNRNPC-downregulated genes were expressed at low levels in OA macrophages, indicating that IGF2BP3 and HNRNPC are potential m6A regulators mediating macrophages in the OA synovium. Regarding the m6A regulators mainly expressed by fibroblasts, no significant consistency of localization was found between FTO, METTL5, YTHDC1, or ZC3H13 and their up-/downregulated genes.

IGF2BP3 was highly correlated with OA synovial macrophages
To further validate the role of these m6A regulators in cell clusters of the control/OA synovium, public RNA-seq data of 113 synovium samples (comprising 55 control and 58 OA synovium samples) belonging to the GSE55235, GSE55457, GSE82107, GSE55584, and GSE89408 datasets were collected and merged (Additional file 2: Fig. S4A), and batch effects were removed (Additional file 2: Fig. S4B-E). Next, the MuSiC package was used for bulk cell-type deconvolution based on the scRNA-seq results from GSE152805. The proportion of cells in each synovial sample and in the whole sample groups is displayed in Fig. 6A, B. Compared to the controls, the OA synovium displayed higher ratios of fibroblasts and macrophages, a typical manifestation of cell distribution in the OA synovium. Afterwards, we calculated the ssGSEA scores of the up-/downregulated gene sets of the six core m6A factors; the scores of all gene sets were significantly altered in the OA synovium compared to the healthy group (Fig. 6C). Furthermore, Pearson's correlation test showed the highest (0.70) and lowest (−0.48) correlation coefficients between the ssGSEA scores of IGF2BP3 up-/downregulated genes and macrophage proportion, respectively, while correlation coefficients of only 0.48 and −0.35 were observed between the ssGSEA scores of HNRNPC up-/downregulated genes and macrophage proportion, indicating that IGF2BP3 was more strongly associated with OA macrophage alterations than HNRNPC (Fig. 6D). For fibroblasts, no strong correlation was found between the ssGSEA score of any gene set and the fibroblast distribution. Taken together, our analysis further demonstrates that IGF2BP3 may play an important role in regulating macrophages in the OA synovium.

IGF2BP3 was associated with OA synovial phenotypic changes
Subsequently, we clustered the 113 synovial samples based on the expression levels of IGF2BP3 and its up-/downregulated genes. Based on the highest consensus values, the samples were divided into two clusters, C1 (n = 60) and C2 (n = 53) (Additional file 2: Fig. S5A, Fig. 7A). PCA revealed an approximate separation of the two clusters, indicating distinct gene expression patterns (Fig. 7B). Of the OA synovium samples, 23 (39.7%) belonged to the C1 cluster and 35 (60.3%) to the C2 cluster, indicating that the C2 cluster was more likely to possess OA features than C1 (Fig. 7C). Moreover, higher IGF2BP3 expression and higher/lower ssGSEA scores of IGF2BP3 up-/downregulated genes were detected in the C2 cluster (Fig. 7D, E). Next, we compared the DEGs between the C1 and C2 clusters. As shown in Fig. 7F, the C2 cluster showed higher expression of markers of primitive M0 macrophages (FCGR1A and FCGR1B) and proinflammatory M1 macrophages (CD86 and TLR2).
However, the anti-inflammatory M2 macrophage marker MRC1 was also increased in the C2 cluster, although to a lesser extent than the M1 markers, whereas the anti-inflammatory factor IL-4 did not show a significant change. To infer IGF2BP3-associated pathways in the OA synovium, KEGG pathway enrichment analysis was performed on the DEGs between the C1 and C2 clusters. The genes enhanced in the C2 cluster were mainly enriched in macrophage-regulated pathways and macrophage-involved diseases, such as the phagosome and lysosome pathways, RA, tuberculosis, and leishmaniasis (Fig. 7G). The genes suppressed in C2 were mainly enriched in ECM-related pathways such as the focal adhesion and ECM-receptor interaction pathways (Fig. 7H). In summary, the RNA modification process mediated by IGF2BP3 showed a significant correlation with OA macrophage phenotypic changes.

IGF2BP3 was upregulated in OA synovium
To validate the expression level and localization of IGF2BP3 in the synovium, clinical synovial specimens of OA were collected (n = 10), with synovial samples from patients undergoing arthroscopy for trauma or joint derangements as controls (n = 10); no significant differences in sex, age, or BMI were observed between the two groups (Table 1). High IGF2BP3 expression was detected in OA samples, as predicted, and IGF2BP3 strongly colocalized with the macrophage marker CD68 (Fig. 8A, B). To our surprise, in vitro assays showed that IL-1β- or LPS-stimulated BMDMs did not display a strong elevation of IGF2BP3 (Additional file 2: Fig. S6A, B), whereas IGF2BP3 was significantly upregulated in macrophages after mechanical overloading (Fig. 8C, D). In summary, IGF2BP3 levels are enhanced in clinical OA synovial samples, and mechanical factors, rather than inflammatory factors, may be the main drivers of its expression. In contrast, after feasible siRNAs targeting IGF2BP3 were selected, BMDMs with IGF2BP3 knockdown tended to polarize toward the M2 subtype rather than M1 and showed less transcription of inflammatory cytokines (Additional file 2: Fig. S7B). However, the anti-inflammatory factors (IL-4 and IL-10) did not show obvious alterations upon IGF2BP3 overexpression or inhibition in BMDMs. Flow cytometry analysis revealed that the proportion of CD206−CD86+ (M1) macrophages remarkably increased after IGF2BP3 overexpression, while a higher proportion of CD206+CD86− (M2) macrophages was found in IGF2BP3-knockdown BMDMs (Fig. 9A). Accordingly, WB assays showed higher expression of M1 markers and proinflammatory genes, along with lower M2 marker levels, in IGF2BP3-overexpressing BMDMs, while IGF2BP3 silencing inhibited the expression of M1 markers and inflammatory genes and enhanced the expression of M2 markers at the protein level (Fig. 9B, C). We also performed cellular fluorescence staining to further validate the phenotypes induced by the IGF2BP3 interventions (Fig. 9D, E). In conclusion, these results suggest that IGF2BP3 plays an essential role in promoting M1 macrophage polarization and inflammation.

Discussion
In this study, we determined the expression pattern of key m6A regulators in the OA synovium, and six of these m6A regulators (FTO, YTHDC1, METTL5, IGF2BP3, ZC3H13, and HNRNPC) were used to build a well-fitting OA prediction model. Based on these six factors and their potential mRNA targets, we constructed a molecular network and found that they are involved in inflammation and cell cycle regulation in the OA synovium.
In combination with the scRNA-seq data, we determined that IGF2BP3 expression was most strongly correlated with phenotypic alterations in macrophages. Further studies verified the increased IGF2BP3 expression in OA synovial macrophages and demonstrated that IGF2BP3 promotes synovial inflammation and M1 macrophage polarization, identifying a pivotal m6A regulator modulating OA progression. Synovitis is an important pathological condition highly correlated with OA onset and progression. As a detectable symptom occurring in the very early stage of OA, synovial inflammation can be found even before detectable cartilage damage [55]. Clinical research has shown that a greater effusion-synovitis volume was found 2 years before disease onset in over 50% of patients and was deemed a high-risk factor for accelerated OA [56]. In addition, a study that included 104 clinical subjects showed that synovitis is also a characteristic of late-stage OA [57]. More importantly, the presence of synovitis indicates a nine-fold greater risk of painful knee OA and a faster process of cartilage destruction [58]. Hence, synovitis scoring is a potential tool for predicting and evaluating OA, and synovitis clearance may be a promising approach for OA treatment. Based on public RNA-seq data, we found 789 upregulated and 63 downregulated genes in the OA synovium compared to the controls, indicating that the OA synovium has a markedly different gene expression pattern. At the cellular level, scRNA-seq data analysis indicated that fibroblasts and macrophages accounted for most OA synovial cells, while smooth muscle cells, endothelial cells, mast cells, and DCs were also present to a lesser extent. Using the deconvolution algorithm, we validated that fibroblasts and macrophages were the two main cell clusters with enhanced cell proportions in the OA synovium compared to control samples. As the major OA synovial immune cells, macrophages exhibit abnormalities not only in cell number but also in activation status. Activated macrophages can be classified into classically activated M1 macrophages and alternatively activated M2 macrophages, which differ in their responses to microenvironmental stimuli. M1 macrophages are characterized by the marker gene CD86 (or iNOS in mice) and enhanced production of proinflammatory cytokines such as TNF-α, IL-1, IL-6, and IL-12 [59]. M2 macrophages, also known as wound-healing macrophages, are marked by CD206 and secrete anti-inflammatory factors such as IL-4 and IL-10 [60]. In clinical OA synovial samples, a more than two-fold increase in M1 macrophages was found, with a concomitant decrease in M2 macrophages, and this imbalance of M1/M2 polarization has been shown to promote synovial inflammation, cartilage destruction, osteophyte formation, and ultimately OA progression [61,62]. Drugs that modulate macrophage polarization have shown promising therapeutic effects in patients with OA. Transient receptor potential vanilloid 1 (TRPV1) activation is closely related to the alleviation of pain sensation and the inhibition of M1 macrophage polarization [63]. Intra-articular injection of the TRPV1 agonist CNTX-4975 was studied in a phase IIb clinical trial in OA patients and reduced pain scores [63].
TissueGene-C promotes the shift toward synovial M2 macrophages in the joints and enhances their anti-inflammatory activity, ultimately reducing pain and promoting cartilage regeneration; a phase III trial of this therapy in OA patients is ongoing [64]. However, current drugs can only partially inhibit inflammatory macrophage formation and OA development, and the molecular mechanisms regulating OA macrophage polarization require further study to identify more effective and precise therapeutic targets. m6A modification is the most common posttranscriptional modification in mammals [65]. It participates in modulating biological processes such as mRNA splicing, localization, translation, and stability [66]. Aberrant m6A modifications hamper gene expression and cell function and ultimately cause diseases, including OA [16]. However, the pattern of m6A regulator expression in the OA synovium remains unclear. By analyzing public RNA-seq data of OA and normal synovial samples, we identified 10 of 28 m6A regulators that were upregulated in the OA synovium, indicating a relatively active m6A modification process in OA. Among the 28 m6A regulators, 6 (FTO, YTHDC1, METTL5, IGF2BP3, ZC3H13, and HNRNPC) comprised a well-fitting regression model showing high OA predictive efficiency in both the training and testing datasets, indicating that they may be promising OA biomarkers and effectors. PPI network analysis showed that the genes regulated by these m6A regulators interacted extensively at the protein level, while the YTHDC1-, IGF2BP3-, and METTL5-regulated genes were identified as hub genes, indicating that they have a larger number of interactions and may play a more crucial role in the phenotypic regulation of the synovium. In addition, the nodes of these PPI networks were highly enriched in pathways contributing to OA synovitis, mainly inflammation pathways (IFN-γ, IFN-α, and TNF pathways) as well as proliferation and cell cycle-related pathways (G2M checkpoint, apoptosis, mitotic spindle, and E2F targets) [67,68]. Previous studies have identified HNRNPC, FTO, and YTHDC1 as important m6A regulators mediating IFN and TNF secretion and responses, whereas all six m6A regulators have been reported to be closely associated with proliferation and cell cycle regulation in other diseases, further supporting our results [69-72]. To determine the localization of these six regulators and their target genes, the scRNA-seq dataset was used for joint analysis. Among fibroblasts, no strong consistency of cellular localization was found between the upregulated genes, the downregulated genes, and the expression of their corresponding m6A regulators. In synovial macrophages, by contrast, both IGF2BP3 and IGF2BP3-upregulated genes were highly expressed, whereas IGF2BP3-suppressed genes showed low expression, indicating that IGF2BP3 may serve as a specific m6A regulator in synovial macrophages. Previous studies have found that m6A modulation is strongly correlated with macrophage aberrations and has great potential as a drug target for modulating macrophage phenotypes. For example, the m6A reader YTHDF2 deactivates MAP2K4 and MAP4K4 to inhibit MAPK and NF-κB signaling, thus alleviating inflammation in macrophages [73]. The m6A writer METTL3 induces M1 polarization while suppressing M2 polarization, and its highly selective inhibitor STC-15 is approaching phase I clinical trials for cancer [74,75].
For the m6A reader IGF2BP3, we found that the ssGSEA score of IGF2BP3-upregulated genes was markedly higher, whereas the score of IGF2BP3-downregulated genes was lower, in the OA synovium than in controls. Both scores significantly correlated with the proportion of macrophages. Subsequently, the synovial samples were categorized into two clusters based on the expression of IGF2BP3 and its target genes: cluster C2 showed a higher proportion of OA samples, higher expression of IGF2BP3, a higher ssGSEA score of IGF2BP3-upregulated genes, and a lower score of IGF2BP3-downregulated genes compared to C1. Interestingly, cluster C2 expressed more [80,81]. In our study, we found that IGF2BP3-overexpressing BMDMs polarized into the M1 phenotype and secreted more inflammatory cytokines, whereas IGF2BP3 knockdown led to M2 polarization and inhibited inflammation. Nevertheless, neither the overexpression nor the silencing of IGF2BP3 changed the expression levels of anti-inflammatory mediators. Possible reasons for these differing conclusions may be the use of different cell models or different levels of gene overexpression and knockdown. This suggests that IGF2BP3 may serve as a fine-tuning regulator of macrophage polarization and warrants further investigation. Furthermore, given that mechanical overload induces IGF2BP3 expression, it may play a crucial role as a hub gene in converting mechanical stimuli into inflammatory signals, indicating its potential as a therapeutic target in the OA synovium. Several studies have also explored the expression patterns of m6A regulators in arthritis via bioinformatics analysis. Ni et al. analyzed the expression levels of 23 m6A regulators in OA chondrocytes and identified YTHDF3 and IGF2BP3 as m6A readers upregulated in OA chondrocytes that may be correlated with enhanced chondrocyte ECM catabolism [30]. Zhao et al. found that IGF2BP3 is enhanced in the synovium of patients with RA and is a potential regulator of inflammation-related pathways [80]. Xiong et al. identified 12 differentially expressed m6A genes, including IGF2BP3, in the OA synovium based on RNA-seq data from 10 normal and 10 OA samples [82]. Combining these studies with our results, it is evident that m6A modulation and IGF2BP3 expression are dysregulated in both OA and RA and affect functions not only in the synovium but also in the cartilage. Compared to previous studies, however, our research has unique strengths and novelty. First, we collected as much public RNA-seq data from healthy and OA synovial samples as possible, including 55 normal and 58 OA synovial samples, to make our bioinformatics analysis more convincing. Second, we conducted a novel joint analysis of bulk RNA-seq and scRNA-seq to identify, for the first time, the m6A regulatory pattern of each synovial cell cluster in OA, and identified IGF2BP3 as a factor specifically expressed and functioning in synovial macrophages. Third, we included the potential mRNA targets of m6A regulators from the RM2target database to create a more comprehensive functional network. Finally, we conducted experiments to verify the clinical correlation between IGF2BP3 levels and OA and to clarify its role in regulating synovial macrophage polarization. Nevertheless, our study still has some limitations. First, the lack of scRNA-seq data for the normal synovium may introduce deviations when deconvoluting normal synovium samples. Second, the potential targets of m6A regulators were collected from the RM2target database, which comprises data from other human cell lines.
Thus, the interactions between m6A regulators and their predicted target mRNAs in macrophages should be experimentally verified. Third, IGF2BP3-knockout transgenic mice were not included in the current study; these will be added in future work to further clarify our conclusions.

Conclusions
In conclusion, our research identified the expression patterns of m6A regulators and depicted a molecular network based on core m6A genes in the OA synovium. Moreover, we revealed that the m6A reader IGF2BP3 functions, at least in part, in OA synovial macrophages by promoting M1 polarization and inflammation. Our study sheds light on the roles of m6A regulators in the OA synovium and preliminarily indicates how IGF2BP3 modulates OA macrophages, thus providing new targets for OA diagnosis and treatment.
High-speed, image-based eye tracking with a scanning laser ophthalmoscope

We demonstrate a high-speed, image-based tracking scanning laser ophthalmoscope (TSLO) that can provide high-fidelity structural images, real-time eye tracking, and targeted stimulus delivery. The system was designed for diffraction-limited performance over an 8° field of view (FOV) and operates with a flexible field of view of 1°-5.5°. Stabilized videos of the retina were generated showing an amplitude of motion after stabilization of 0.2 arcmin or less across all frequencies. In addition, the imaging laser can be modulated to place a stimulus on a targeted retinal location. We show a stimulus placement accuracy with a standard deviation of less than 1 arcmin. With a smaller field size of 2°, individual cone photoreceptors were clearly visible at eccentricities outside of the fovea.

Introduction
The human eye is constantly in motion. Even when fixating on a target, our eyes move, drifting and making microsaccades that move a stimulus projected onto the retina over dozens to hundreds of photoreceptors. With the eye as an ever-moving target, our ability to record high-fidelity images of the retina is limited. Moreover, targeted light delivery to the retina remains uncontrolled with constant eye motion. Recent advances in imaging technology have highlighted the importance of improved eye tracking to render true and accurate images. In the clinical domain, active eye tracking has proven to be effective in commercial systems [1,2]. At a more basic level, the benefits of accurate eye tracking and stimulus delivery prove to be useful for delivering stimuli to targeted retinal locations as small as a single cone [3]. An image-based method for eye tracking and targeted stimulus delivery has been implemented in an adaptive optics scanning laser ophthalmoscope (AOSLO) system in our lab and has been reported in a series of publications [4-6]. In this paper, we show that the same image-based tracking techniques can be implemented in a more traditional, larger field of view, confocal SLO. This system is the most accurate, fast, and functional tracking system to be used in a standard ophthalmic instrument, and it demonstrates that rich texture in the image, not necessarily the presence of cones, is sufficient for this tracking method. The use of a conventional approach offers a more robust, compact, and cost-effective system that is readily deployable in a variety of settings. We will show that in a well-designed SLO system, the wider field of view (FOV) is able to capture retinal video rich with structure, allowing accurate image-based tracking during normal fixational eye movements.

System hardware
The tracking scanning laser ophthalmoscope (TSLO) was developed in the following manner. The optical design and system optimization were completed using optical design software (Radiant ZEMAX LLC, Bellevue, WA). System specifications were as follows:
• Diffraction-limited optical design over an 8° FOV (excluding the eye)
• Adjustable pupil size between 2 and 4 mm (no need for subject dilation)
• Small focal length mirrors for a compact design
• Flexible eye relief
The main portion of the system contains three telescope assemblies that relay the pupil to the fast and slow scan mirrors and then to the light detection and delivery arm (Fig. 1).

Fig. 1. Light exiting the super luminescent diode (SLD) is coupled into the acousto-optic modulator (AOM) before entering the system. The light is collimated and sent through a basic 4f series of lenses onto an adjustable aperture (A1). Light travels through three mirror-based telescope assemblies (f = 250 mm) to the human eye. Light is then reflected off the retina and sent back through the system into the light detection arm. Another series of lenses in a 4f configuration relays the light to be collected by a photomultiplier tube (PMT). A 50 µm pinhole (1.95 Airy disc diameters for a 4 mm pupil) is placed at the retinal conjugate plane prior to the PMT for confocality. The intensity (I) of the signal is sent to the PC for readout. This is a schematic layout; the actual components are not aligned in a single plane (see Fig. 2).
The telescopes are arranged in such a way as to minimize astigmatism in both the pupil and retinal planes [7]. Each concave mirror used has a focal length of f = 250 mm. A pinhole was placed at the retinal conjugate prior to the photomultiplier tube (PMT) in order to make the system confocal. A galvo scanner and a resonant scanner were placed into the system at pupil conjugate positions to scan the beam across the subject's retina vertically and horizontally, respectively. The horizontal, or fast, scanner operates at ~16 kHz, while the vertical scanner operates at a rate of 1/512 of the fast scan to record frames at ~30 frames per second. An image is created, pixel by pixel, with each frame consisting of 512 x 512 pixels. Since each video frame is acquired over time, there is a unique set of distortions created by the subject's eye motion. It is these distortions that are used to extract the motion of the eye in real time. The opto-mechanical design for the system components was modeled in Solidworks (Concord, MA). Appropriate heights for mounting were determined based on the 3-D drawing. The use of Solidworks to lay out an optical system proved extremely helpful, as one can export the beam path directly from ZEMAX into the program and determine the necessary opto-mechanical components (Fig. 2).

Fig. 2. Opto-mechanical design (to scale) of the TSLO on a 60 cm x 30 cm breadboard (length x width). The telescope assemblies are built in such a way as to limit system astigmatism through the varying of beam heights and angles. Note the changing of beam heights as the light propagates through the system to the eye. Light source, acousto-optic modulator (AOM), and chin rest not shown.

Diffraction-limited performance was achieved for nine points over an 8° FOV. The geometric spots at the nine points across the field are well within the Airy disc (represented by the circles), with most spots less than 5 μm (Fig. 3). An 840 nm super luminescent diode (SLD) (Superlum, Moscow, Russia), with a 50 nm bandwidth, was used for imaging. The light source is connected to an acousto-optic modulator (AOM) whose output intensity is continuously controlled with a voltage output from a 14-bit digital-to-analog converter (DAC). As the beam raster-scans the retina, the DAC drives the AOM to modulate the SLD so that it is only on during the central 80% of the forward sweep of the mirror scanning cycle, thereby limiting the exposure to only those times when the light is being detected. The AOM is also used to modulate the SLD power to place any gray-scale image, point by point, onto the retina [8]. A stimulus presented in this way appears in negative contrast (i.e., the laser is switched off to write the stimulus) within the dim red field created by the scanning light source. Since the AOM is synchronized with the scanning of the beam, the modulation timing can be manipulated to place a stimulus at any location within the raster scan [4]. In this manner, imaging and stimulus delivery are done with the same light source. Other variants of this system using secondary sources have been reported [3], but are not used here.
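As a quick numerical check of the raster timing described in this section (all values taken from the text):

line_rate <- 16e3                            # fast (horizontal) scanner, lines per second
lines_per_frame <- 512
line_rate / lines_per_frame                  # ~31 frames/s, matching the quoted ~30 fps
1e6 / line_rate                              # 62.5 us per scan line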
Software and hardware for eye tracking
The details of the eye motion recovery used in this system have been previously reported [5,6,9]. Briefly, we describe the methods here. In order to extract retinal motion from the scanned images in real time, each frame of an SLO movie is broken up into a set number of strips that are parallel to the fast scanner. The number of strips is flexible and can be changed according to the user's experimental requirements. For each movie, a reference frame is selected, usually the first frame to occur in the series unless otherwise reselected. Each strip within a given frame is then cross-correlated with the reference frame. The (x,y) displacements of the new frame with respect to the reference frame are a measure of the relative motion of the eye at that specific point in time. Every subsequent frame can then be redrawn to align it with the reference frame (Fig. 4). This occurs in real time so that the operator can see both the subject's actual retinal motion and the stabilized version of the retina side by side in the software interface. Using the real-time eye trace generated from the (x,y) displacements of each frame as described above, the timing of the stimulus delivery can be controlled to guide its placement to any targeted location on the retina. For the data presented here, cross-correlations are performed and (x,y) displacements are reported 32 times per frame, for a reporting rate of 960 Hz. The correlations are computed from 32 overlapping strips per frame, each of which is 32 pixels high. Eye position estimates are made after the fact (i.e., after the strip has been recorded). This computation is done very quickly, but the delivery of a stimulus to a targeted retinal location requires a prediction. Figure 5 shows the steps involved. For a 16 kHz line scan rate, the latency is 2.5 ± 0.5 msec, depending on where in the 16-pixel strip the stimulus actually starts. To ensure that the targeted stimulus delivery is accurate, we impose a threshold on the cross-correlation peak. In the current system, whenever the normalized cross-correlation peak is less than 0.3, a decision is made not to deliver the stimulus. Reductions in the cross-correlation peak will occur whenever (i) the amount of overlap between the current strip and the reference is reduced (mainly due to horizontal eye movements), (ii) the features within the strip are distorted because of eye motion, (iii) the quality of the image is reduced due to tear film break-up, accommodation, or pupil constriction, (iv) the image is lost due to blinks, (v) the retinal image changes because of lateral and axial pupil displacements relative to the scanning beam, and (vi) there are intrinsic changes in the reflectivity of the retina. The entire process is computationally demanding and requires a custom solution. In this version, the TSLO data are recorded with a custom-programmed field-programmable gate array (FPGA) board [6]. The FPGA board (Xilinx, San Jose, CA) allows immediate access to the strips of image data as they are acquired. The cross-correlation with the reference image employs a standard FFT-based algorithm and takes place on a graphics board (Nvidia, Santa Clara, CA) in the host PC.
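A minimal offline sketch of this FFT-based strip correlation, assuming `strip` and `ref` are numeric image matrices (hypothetical names); the real system computes a normalized cross-correlation on the GPU, whereas this unnormalized version only illustrates the peak-finding idea:

# Cross-correlate an image strip against a reference frame via 2-D FFTs
xcorr_peak <- function(strip, ref) {
  # zero-pad the strip up to the reference size
  padded <- matrix(0, nrow(ref), ncol(ref))
  padded[seq_len(nrow(strip)), seq_len(ncol(strip))] <- strip
  # correlation theorem: corr = IFFT( FFT(ref) * conj(FFT(strip)) )
  cc <- Re(fft(fft(ref) * Conj(fft(padded)), inverse = TRUE))
  peak <- which(cc == max(cc), arr.ind = TRUE)
  peak - 1   # (row, col) displacement of the strip relative to the reference
}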
[Fig. 5 caption, partially recovered; earlier steps truncated] ...for the next strip before arming the AOM buffer with the stimulus? If YES, then wait for the next strip and repeat the process. If NO, then proceed to step G. G, Load the AOM buffer so it is armed to play out the stimulus while the beam scans over the target. H, Start playing out the AOM buffer at this strip. I, Start delivering the stimulus (note: when the stimulus is greater than 64 pixels in size, steps B to I are repeated for every strip that the stimulus occupies).

Testing on model and human eyes
Detailed tracking performance was quantified on a custom-built model eye, which used a galvo scanner mirror placed between the optics and the retina. This allowed for controlled amounts of retinal motion with fixed frequencies and amplitudes. Human eye data are reported here for two subjects, who are also co-authors of this paper. The experiment was approved by the University of California, Berkeley, Committee for the Protection of Human Subjects, and all protocols adhered to the tenets of the Declaration of Helsinki. A chin rest with temple pads was used in order to minimize head motion for all human eye experiments. All measurements were recorded with a 4° FOV (512 x 512 pixels), providing a sampling resolution of 0.47 arcminutes per pixel. The power of the 840 nm light source never exceeded 500 µW at the pupil plane, which was computed to be within the ANSI safety limits [10].

Frequency analysis
To quantify motion reduction as a function of frequency, sinusoidal retinal motion was input into the model eye. Both a raw video and a stabilized video were recorded at each input frequency. An offline stabilization program was used to compute retinal motion in the raw and stabilized videos. The offline software generated eye motion estimates at a frequency of 1920 Hz (64 strips per frame). The amplitude spectra were then computed for both eye motion traces, and the amplitudes of each spectrum at the input frequency were compared before and after stabilization. In this manner, any residual motion in the stabilized video that was not corrected by the real-time system could be measured. Eye motion measurements are recorded at nearly 100% accuracy for frequencies up to 32 Hz (Fig. 6). The estimated bandwidth for 50% correction, based on a double exponential fit to the data, was just over 400 Hz. Next, raw and stabilized videos were taken of the human retina in real time. Ten videos of ten seconds each were recorded of one of the subjects. The same offline stabilization program with increased sampling was used for the human eye videos. The percentage of erroneous eye motion estimates was 0.83%. The standard deviation of the residual motion of features in the stabilized videos was 0.19 minutes of arc (0.41 pixels). The amplitude spectra of motion as a function of frequency show how the actual motion in the human eye is corrected in this system. Figure 7 shows two important results. First, eye motion of a normal eye fixating on a target is dominated by low frequencies, with the amplitude dropping proportionally with the inverse of the frequency [11]. At frequencies greater than 10 Hz, the amplitude of normal fixational eye motion is less than 0.5 arcminutes. Second, the TSLO suppresses the eye motion during normal fixation up to 100 Hz. The suppression of eye motion in the stabilized video means that eye motion is being reliably measured over these frequencies in real time.
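A sketch of the offline amplitude-spectrum computation described above, assuming an eye-position trace `trace` in arcminutes sampled at the 1920 Hz offline estimate rate (hypothetical name):

# single-sided amplitude spectrum of a detrended eye-position trace
amp_spectrum <- function(trace, fs = 1920) {
  n <- length(trace)
  amp <- abs(fft(trace - mean(trace))) / n       # per-bin amplitude (unscaled sketch)
  data.frame(freq = (seq_len(n) - 1) * fs / n,   # frequency axis in Hz
             amplitude = amp)[seq_len(n %/% 2), ]
}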
The suppression of eye motion in the stabilized video means that eye motion is being reliably measured over these frequencies in real-time. Threshold velocity of eye motion The system's maximum tolerable velocity was computed in order to determine how fast an eye movement can be tracked without software failure. A triangular wave input was fed into the galvo scanner of the model eye and amplitude was increased (velocity increased) to the point where the tracking began to fail. These failures occur because, at high velocities, the shear of the features within a strip causes the height of the normalized cross-correlation with the reference frame to go below the 0.3 threshold level (see Subsection 2.2). The corresponding threshold velocity was found to be 1761 pixels/sec. Since the velocity of motion in pixels is inversely proportional to the field size, the velocity threshold in degrees per second will depend on field size. Equation (1) establishes this relationship: where VelocityThreshold is the maximum trackable velocity in degrees per second and FieldSize is the TSLO field size in degrees. According to this equation, a field size of just over 5.2° would be required in order to correct for the median microsaccade velocity of ~18 °/s [12,13]. For this metric, the larger field size of the TSLO over that previously reported with the AOSLO with a maximum field size of ~2°, offers a performance advantage. Stimulus accuracy The ability to measure eye motion and generate a stabilized video does not directly indicate the accuracy with which a stimulus can be projected onto the retina. While eye motion measurements can be reported at 960 Hz, the delivery of the stimulus involves a prediction and, given the manner in which the stimulus is delivered in the TSLO, some time is required to arm the laser to deliver this stimulus (see Subsection 2.2). We call this time the latency. The algorithm is written so that this prediction is completed prior to the beam scanning over the target. Any eye motion that occurs between the prediction and the stimulus delivery results directly in stimulus delivery errors. Therefore, it is important to keep this latency as small as possible. To compute the latency, fixed frequencies of sinusoidal motion (i.e. 30 Hz) were input into the model eye and the error in stimulus placement was directly observed as relative motion between the dark stimulus and the underlying image. By looking at retinal image strips within the same frame that were recorded prior to the delivery of the stimulus, we could determine the exact location where the motion of the retina was in phase with the stimulus. This location indicated the time point at which the prediction was made, establishing the latency of the stimulus delivery. The stimulus was observed to move in phase with the retinal locations that were 43 pixels away. Given the actual line scanning frequency of 15.74 kHz, these 43 pixels correspond to a latency of 2.73 msec, which is within the expected range given in Fig. 5. To measure stimulus delivery performace on a human eye, a black circle of 24 pixels in diameter was used as the stimulus. Multiple videos of the second subject were taken at roughly 600 frames each (except for one video with 300 frames). Figure 8 shows a registered sum of 300 stabilized frames from a movie where the stimulus was targeting a retinal location. The sharp stimulus delivery in the average image attests to the tracking accuracy of the system. The video entitled "Stimulus accuracy" in Fig. 
The video entitled "Stimulus accuracy" in Fig. 9 shows a movie sequence where each frame of the video is cropped around the stimulus. The motion of the retina behind the stimulus is small, but readily apparent. The relative motion between the stimulus and the retinal patch between frames revealed the accuracy of the stimulus delivery [14]. Based on 2100 frames from four movies, stimulus delivery failure occurred in 53 frames, or 2.5% of the frames. As described in Subsection 2.2, failures occur whenever the normalized cross-correlation peak of the image strip used to make the prediction of the target location does not exceed 0.3. The frames in which stimulus delivery did not occur were eliminated from the motion error calculations. The standard deviation of the motion error was 0.65 arcminutes in the x direction and 0.67 arcminutes in the y direction. This gives an average standard deviation of motion of 0.66 arcminutes, or roughly the size of an individual cone photoreceptor, for stimulus accuracy.

Fig. 8. Registered sum of 300 frames from a movie sequence (Media 1) where the stimulus was tracking a targeted retinal location.

Fig. 9. Left, a single frame from the 300-frame movie entitled "Stimulus accuracy" showing the centroid of the circle stimulus superimposed upon the retina (Media 2). Right, motion error, including microsaccades, for a 300-frame movie. The red diamonds in the graph and media file indicate the frames in which the stimulus delivery failed.

System resolution
Although this system was designed with a wider field size than the typical 1-3° field of view of the AOSLO, the field of view may operate anywhere from 1° to 5.5°. By adjusting the angles of each of the scanners, the field size can be made smaller in order to determine the finest structures resolvable. With a 2° field of view, photoreceptor structure starts to become visible outside of the foveal region, with a clear photoreceptor mosaic seen at 4° and beyond without the use of adaptive optics (Fig. 10).

Discussion
The TSLO is a robust, high-speed, image-based retinal eye tracker that is adaptable to both clinical and research settings. The system may serve as a stand-alone system for recording and stabilizing retinal movies in real time, as well as providing targeted stimulus delivery for psychophysical experiments. A potential application of the TSLO is that it can also be coupled with a variety of other systems that are in need of accurate eye tracking. These systems include OCT, AOSLO, mfERG, and laser-guided surgery. The use of the TSLO coupled with these technologies will render high-fidelity images without excess motion artifacts, as well as provide an unambiguous record of stimulus delivery onto the retina. One important question is how the TSLO compares with other current tracking technologies. Table 1 displays the tracking accuracy, latency, and stabilization accuracy of various tracking technologies and methods. The TSLO fares well compared to other tracking methods, with only the AOSLO and the optical lever showing greater accuracy. However, it is important to keep in mind that the tracking capabilities of the TSLO are an extension of those first reported for the AOSLO. The greater accuracy of the AOSLO does come at a cost. The smaller field size of the AOSLO has two main consequences. First, the same lateral motions in a small-field system cause a greater loss of overlap with the reference frame, resulting in a larger number of tracking and stimulus delivery failures due to sub-threshold cross-correlation peak values.
Second, the threshold velocity is lower for the smaller field sizes. Additionally, the AOSLO stimuli are limited in their extent due to the small scanning raster of the system (1°-3°). Therefore, while the increased field size of the TSLO comes with lower tracking accuracy, the payoff is fewer software failures caused by larger eye motions, which leads to a more robust performance. In terms of the optical lever method for stimulus control, slippage of the contact lens causes ambiguity in stimulus placement, leaving an uncontrollable amount of retinal movement directly affecting stimulus placement. While it is clearly shown that the TSLO has many advantages over other tracking technologies, it is important to understand its limitations as well. First, the reference frame itself will have distortions due to eye motion. Since each frame is built up pixel by pixel, each frame in a movie has unique distortions. Once a reference frame is selected, every subsequent frame will then be stabilized against it.

[Notes to Table 1] a, ...falls into a broad class of eye trackers coupled with gaze-contingent displays. The EyeRIS system here has the best reported performance of any of the systems we found. b, Any tracking method can be used for this type of system, but results from the dPi system are reported since they provide the best results. c, There are many head-mounted video-based tracking systems used for psychophysical experiments, but the Eyelink II was the most commonly reported system (28 out of 31, or 90% usage) in experiments for measuring human microsaccades in a recent review [23].

The selection of a reference frame is done manually, using a button press directly in the software interface. If the reference frame is selected during a large microsaccade or blink, the video will not be able to stabilize properly in real time, resulting in a warped stabilized video. In these situations, the operator has immediate visual feedback on the choice of reference frame and can reselect whenever necessary during an imaging session. Second, vertical shifts of the eye can cause a loss of retinal information for eye tracking. Currently, a single frame is used as the reference frame to compute eye motion. If the eye moves vertically, there will be strips that do not overlap with the reference. The TSLO does offer a larger FOV than the AOSLO (currently operating at 5.5° compared to the typical AOSLO usage of 1-2°), which allows one to accurately capture more retinal structure. While this proves highly beneficial for horizontal motion, it still produces error in the vertical direction. Lastly, only eye motions that cause horizontal and vertical displacements of the retina are computed. Rotations of the eyeball about its optical axis, or torsion, are not corrected properly. The correction of torsion is possible, but the extra computation time needed for its contribution outweighs the benefits of correcting the motion in real time. It should be noted that even though these errors add artifacts to the eye motion trace, they do not preclude accurate placement of a stimulus at the targeted location.

Conclusion
The TSLO is a high-speed, robust eye tracking system that can provide high-resolution retinal images as well as targeted stimulus delivery. It has been shown that the FPGA solution, with FFT-based cross-correlation algorithms, can be translated to traditional scanning laser ophthalmoscope technology.
The use of this technology will provide a more compact, robust, and cost-effective solution for the study of the retina and fixational eye movements. Using smaller fields of view outside of the foveal region showcases the system's single-cell resolution, with individual cone photoreceptors clearly visible without the use of adaptive optics.
THE CONCEPT OF CHILD IN THE PERSPECTIVE OF THE QUR'AN (THEMATIC INTERPRETATION STUDY)

The revival of Islam formed a new atmosphere for the condition and fate of the children of the Arabs. Islam forbids the killing of children through the prohibition of Allah contained, among other places, in QS. Al-Isra/17:31. Besides prohibiting the killing of children, Islamic law includes provisions that show how important it is to pay attention to children and to look after them lovingly, both before and after birth. The Qur'anic view of children can be globally formulated in the principle: "Children do not cause problems and suffering for parents, and vice versa." In the Qur'an, Allah says: "…No mother should be harmed through her child, and no father through his child. And upon the [father's] heir is [a duty] like that [of the father]…" The Qur'an shows serious concern for the child. This is evident from the various terms used in the Qur'an to convey the meaning of the child, such as źurriyah, ibn, walad, sabiy, usbah, gulam, thifl, nasl, rabaib, and ad'iya'. Within this limitation, this study analyzes the concept of the child in the perspective of the Qur'an (a thematic interpretation study).

INTRODUCTION

In the study of Islamic history, before the arrival of Islam in the Arab regions, children were not treated well. In some Arab tribes, there was a habit of killing children by burying them alive. The behavior of people at that time was similar to animals that eat their own offspring (Fathiyah, 1979:11). The revival of Islam established a new atmosphere for children in the Arab regions. Islam banned the killing of children, as stated in the Qur'an, Al-Isra/17:31. In addition, legal provisions in Islam also mention the importance of taking care of a child before and after birth. The Qur'anic view of children can globally be formulated in the principle: "Children are not the cause of hardship and misery of parents and vice versa". In the Qur'an (Al-Baqarah/2:233), Allah says "…No mother should be harmed through her child, and no father through his child. And upon the [father's] heir is [a duty] like that [of the father]…" (Fathiyah, 1979:11).

The Qur'an as a guide (huda) (Al-Baqarah/2:2, 97 and 185; Al-Maidah/5:46) can always guide humans in organizing their lives, and it is a source of knowledge (Al-An'am/6:38, An-Nahl/16:89). The Qur'an informs what humans can do to organize their lives through concepts, amśal (parables), and stories of both individuals and groups, as teachings, comparisons, guidance and warnings. This study aims to examine the terms and status of children in the Qur'an and why the Qur'an uses different terms for children. A deep understanding of the concept of the child will have an impact on making the implementation of education more communicative, so that it can create a pleasant atmosphere for children/students, and it is expected to improve the quality of educational outcomes. As the limitation of the study, this study analyzes the concept of children in the perspective of the Qur'an (a thematic interpretation study).

LITERATURE REVIEW

Family is the smallest unit in society, consisting of a husband and wife, father and child, mother and child, or blood relatives in a straight line up or down to the third degree (Law Number 23: 2002). A child is defined as "manusia yang masih kecil" (a young human) (Kamus Besar Bahasa Indonesia, 1999:35).
The definition of a child also takes into account the period during which the child exists; this avoids defining children merely in relation to their parents, or continuing to define them as children once they themselves become parents. In the Qur'an, children are often referred to by the word "walad", whose plural form is "awlad", meaning a child born from a mother's womb, male or female, small or big, singular or plural. The word "al-walad" is used to describe a lineage, so the words "al-walid" and "al-walidah" are interpreted as biological father and mother. This is in contrast to the word "ibn", which does not necessarily indicate a lineage, just as the word "ab" does not necessarily mean a biological father (Sihab, XV 2004:614). The Qur'an also uses the term "thifl" (Q.S. al-Nur/24). The Law concerning Child Welfare, Article 1 paragraph 2, states that "A child is a person who has not reached the age of 21 (twenty-one) years and has not married yet". Such studies can be used as a comparison for this study, but they are not used as concrete references, because different studies are assumed to have different patterns and colors.

The Research Design

This study is conducted using Content Analysis, because it aims to collect Qur'anic verses related to the concept of children, together with the Maudu'i approach, the tafsir methodology that seeks answers in the Qur'an about a problem by gathering all related verses; the analysis is then carried out through the relevant sciences to produce the whole concept of the Qur'an about the problem (Al-Farmawy, 1976:41-42).

The source of data

The primary data for this study is the Qur'an, and the secondary data are books related to the issue discussed. This study follows library research, in the sense that all data are gathered from written materials related to the topic discussed. The second source is books of Tafsir that can represent this study, such as:
1. Tafsir Jami' al-Bayan 'an Ta'wil ayyu Alquran (Tafsir At-Tabariy), a popular tafsir written by an expert of tafsir and Tarikh Islam, Al Imam Abu Ja'far Muhammad bin Jarir bin Yazid bin Katsir bin Ghalib At-Tabariy, who lived in 224-310 H.
2. Tafsir Mafatih al-Gaib (Al-Kabir), written by al-Imam Fakhruddin ar-Raziy. This tafsir is one of the most comprehensive interpretations of bi ar-ra'yi, the most comprehensive one because it explains the entire Qur'anic text. Abu Hayyan asserted that Fakhruddin ar-Razi collected and explained many things at length in this interpretation, so that (as if) no further interpretation is needed.
3. Tafsir Al-Mishbah, written by Prof. DR. M. Quraish Shihab; indeed, he is not the only expert of the Qur'an in Indonesia, but his ability to translate and convey the messages of the Qur'an in the context of present and post-modern times makes him better known and superior to other Qur'anic experts.
In completing and further refining the analysis and discussion, this study uses: Al-

The Instrument of data generation

In this study, the instrument used in data generation is Data Coding: the term "child" is searched in the Qur'an, and when it is interpreted in the context of a verse related to the concept of a child, its content is analyzed, because not all of these words have anything to do with the concept of children.

The Analysis of Data

The research procedures:
1. Identifying verses that contain words referring to the child in the Qur'an.
Different Terms for "Child" in the Qur'an

The term "child" is explicitly mentioned in the Qur'an 238 times in 50 chapters, with the same and different topics, and is expressed in 10 terms:
1. Walad, in its single and plural forms, is found 71 times in 29 chapters.
2. Ibn, in its single and plural forms, is found 119 times in 41 chapters.
3. Zurriyyat, in its single and plural forms, is found 31 times in 19 chapters.
4. At-Thifl, in its single and plural forms, is found 4 times.
5. Ghulam, in its single, mutsanna and plural forms, is found 13 times in 8 chapters.
6. Sabiyy, only in its single form, is found 2 times in 1 chapter.
7. An-Nasl, in its single form, is found 2 times in 2 chapters.
8. Rabaib, in its plural form, is found 1 time in 1 chapter, namely An-Nisa'.
9. Ad'iya, in its plural form, is found 2 times in 1 chapter, namely Al-Ahzab.
10. Al-'Usbah, in its single form, is found 4 times in 3 chapters.

a. Child as a trust

A trust is something that is entrusted (given) to others (Kamus Besar Bahasa Indonesia, 1999:30), a trust given to a person in relation to the preservation of property (Dahlan, Abdul Aziz et al., 1996:104). A child is a trust from Allah the almighty to be cared for by parents, as Allah says in the Qur'an, At-Tahrim/66:6: "O you who have believed, protect yourselves and your families from a Fire whose fuel is people and stones, over which are [appointed] angels, harsh and severe; they do not disobey Allah in what He commands them but do what they are commanded." This verse is a guide and command from Allah to believers to guide themselves and educate their families (Al-Alusy, tt xxi:101); to be responsible for escaping the torment of hell, believers should learn and teach taqwa to Allah SWT, use their knowledge to submit to Allah SWT, and also guide and teach their families to submit to Allah SWT (At-Thabari, tt xxiii:492). A child as a trust, globally in the perspective of the Qur'an, is formulated in the principle "Children are not the cause of hardship and misery of parents and vice versa". In the Qur'an, chapter Al-Baqarah/2:233, Allah states "…No mother should be harmed through her child, and no father through his child…".

b. Child as an enemy

In the tafsir of At-Tabary, it is stated that this verse descended on people who wanted to convert to Islam and emigrate, but whose wives and children prevented them from performing the pilgrimage (At-Tabary, tt xxii:415). Quraish Sihab states that children are enemies because they can turn their parents away from religion, or demand something beyond their parents' ability that might lead their parents to break the law of Allah (Sihab, xiv 2004:278). In the Tafsir Al-Kabir, Al-Razi (tt xiv:368) states that a child is an enemy because a child is a test of a human's allegiance to Allah, as when a child urges his parents to do unlawful things such as stealing. In At-Tabataba'i (tt iii:353), it is explained that a child is called an enemy in the aspect of faith: turning parents away from good deeds such as infaq and emigrating from disbelieving countries, using unlawful means, and so on.

c. Child as temptation or trial

The term "walad" in the Qur'an denotes an independent individual who becomes the second generation of hereditary links. A child is not an investment or capital to improve one's rank in life, nor a sedative or soulmate. A child (walad) has the role of being a trial for his parents; he can be an enemy, can be a barrier to remembering Allah, can act as a partner in disobedience to Allah, and the child (walad) cannot help his parents against the punishment of Allah SWT.
Allah the almighty mentions that the property and children (auladuhum) of the disbelievers cannot ward off Allah's punishment from themselves (Ali-Imran/3:10); therefore, Allah warns believers not to let their interest in property and children attract them, because their possessions and children might torture them in this life and the hereafter if they are in a state of infidelity (QS. At-Taubah/9:55). Both verses state that wealth and children are a temptation or trial from Allah SWT, and that Allah has a greater reward, namely heaven with all the pleasures in it: "And know that your properties and your children are but a trial and that Allah has with Him a great reward" (Al-Anfal/8:28). Furthermore, At-Thabari (tt xxii:486) states that the wealth and children of the munafiqs had attracted the attention of the Prophet Muhammad Saw; indeed, wealth and children can torture them in this world and the hereafter if they are in a state of infidelity. In At-Tabataba'i (tt xix:170), it is stated that a temptation or trial is something with which a person is tempted and tested. Wealth and children are a temptation because both are decorations of this world to which human lust is easily attracted; thus humans are tested by this trial, and for those who prioritize wealth and children over the afterlife and obedience to Allah, wealth and children become factors that can make humans negligent. Wealth and children are pleasures of this world for humans, games and jokes; wealth is used for human decoration and mutual pride. Allah compares wealth and children to rain which falls on plants that amaze the farmers, and when the plants become dry, they turn yellow and are destroyed.

d. Child (ibn) as zinah (jewelry)

Unlike a child (walad), a child (ibn) functions as zinah (jewelry) in this world. Zinah, as described by Ragib, is essentially something that does not bring disgrace to someone, either in this life or in the hereafter (Ragib, tt:223). In regard to zinah (jewelry), in the Qur'an, Ali-Imran/3:14, Allah SWT mentions: "Beautified for people is the love of that which they desire - of women and sons, heaped-up sums of gold and silver, fine branded horses, and cattle and tilled land. That is the enjoyment of worldly life, but Allah has with Him the best return." In At-Tabataba'i (tt iii:353), it is stated that unbelievers assume that wealth and children will be able to protect them; Allah SWT dismisses this view, for wealth and children will not help them against the punishment of Allah SWT. This assumption makes them deviate from the rules of Allah SWT by loving their favorite treasures and children and focusing on them rather than on the more important thing, namely the hereafter. Allah the almighty states in the Qur'an, Ali-Imran verse 14, that "That is the enjoyment of worldly life, but Allah has with Him the best return". Therefore, the enjoyment of wealth is worldly, while what is with Allah is better and more lasting; so will you not use reason? Caring for children, both male and female, is human nature, as is the liking for women (wives), because the aim is to continue the generations, although Allah the almighty mentions and reminds people that the jewelry of this world is to test which human beings are best in their deeds. Allah SWT reminds humans that good deeds are eternal. According to Shihab (viii 2004:70), this means that the wealth and children that you are proud of, which are the adornments of this world, are impermanent, while good deeds are eternal and better with Allah SWT. Treasure and children are trials, the pleasures of this world.
The world is only for a moment: jewelry whose purpose, besides making the heart feel happy, is to bring pride to those who have it. The verse above shows that the prophet Ismail AS, as a child (ibn), responded to the prophet Ibrahim AS's request, as a parent, to carry out obedience to Allah SWT at the sacrifice of the prophet Ismail AS. The prophet Ismail AS gave a positive response by motivating his parent that, Insha Allah, he (Ismail AS) would be able to carry out obedience to Allah SWT together with his parent by undergoing the slaughtering. A child (ibn), as a result of the care and education of his parents, can function as a motivator and partner in carrying out obedience to Allah SWT.

c. Child (zurriyat) as light

Allah SWT mentions in the Qur'an, Al-Furqan/25:74: "And those who say, Our Lord, grant us from among our wives and offspring comfort to our eyes and make us an example for the righteous." Ibn Abbas mentions that the meaning of "qurratu a'yun" is to obey Allah the almighty so that our eyes are calmed by them in this world and the hereafter (At-Thabary, tt xix:318). It is a pleasant and soothing sight in matters relating to religious life rather than in relation to the life of this world (Ar-Razy, xi tt:456). Then, Ibn Abbas states that lilmuttaqina imama means: make us leaders to be followed and examples for the people after us (At-Thabary, tt xix:319). The existence of generation

E. The context of the term "child"

The term Walad

Of the many terms, the term walad can be classified into several topics, namely:
a. A child as an heir or a descendant, that is, the second person in the family environment; a newborn child who is still breastfed; the child as an heir is called a walad. A child as a descendant and a second person in the family environment is a mandate and a responsibility that must be cared for and raised by parents. This can be understood from

The term Ibn

The analysis of the term "ibn" is focused on two aspects, namely the child as an independent individual and as an individual whose potential should be developed:
a. In the Qur'an, Ali Imran/3:61, there is the invitation of the prophet Muhammad Saw to the polytheists and infidels to change, involving children. The children (bana) brought by Rasulullah were his grandchildren, namely Hasan and Husein (Ar-Razy, iv tt:421; al-Zamakhsyari, i tt:83; and Shihab, viii 2004:112). In the Qur'an, Hud/11:78, the prophet Luth AS said, "these are my daughters"; they could be his biological daughters or the daughters of his people. The two verses above indicate that the term ibn (abnaukum and banati) can be used for children and adults. Thus, the term ibn is used for a child as a biological heir and also for other children.
b. The prophet Yaqub gives advice to his children by saying yabanaiyya (QS. Yusuf/12:67); the teachings given by Rasulullah Muhammad Saw to his wives and daughters call them abna… (QS. Al-Ahzab/33:59); the advice of the prophets Ibrahim AS and Yaqub AS to their children (QS. Al-Baqarah/2:132). The verses above generally use the term "ibn", and the context of the conversations shows the existence of a process of education, learning and guidance. Why is the term baniyya and bunaiyya used by the messengers of Allah SWT, and why does Allah use banatika in conveying orders to Rasulullah Saw to teach his daughters and the daughters of the believers? The word of Allah SWT in the Qur'an, Al-Ahzab/33:59:

"O Prophet, tell your wives and your daughters and the women of the believers to bring down over themselves [part] of their outer garments. That is more suitable that they will be known and not be abused. And ever is Allah Forgiving and Merciful."

Ragib al-Ashfihani (tt:60; Al-Munawiy, 1410 H:30) mentions that the word "ibn" comes from the word "banawun": a child is called ibn because he is a building for his parents, since Allah the almighty made the parents the cause of the child's existence. Ragib also mentions that when someone carries out activities for another, such as educating him, visiting/guiding him, helping him a lot, or carrying out his business, that person is said to be "hua abnahu". Thus the term "ibn" in the various verses shows an emphasis of meaning on education, coaching/mentoring and providing assistance for children's growth and development.

The term Zurriyat

The term zurriyat in the Qur'an indicates:
a. A child who is still little, young and weak (zurriyatun dhu'afa) (QS. al-Baqarah/2:266); a child/grandchild to be protected by Allah Swt from the devils (QS. Ali Imran/3:36).
b. A good and pious child. The prophet Zakariyya prayed to Allah Swt to be given a good child (zurriyatan thoyyibah) (QS. Ali Imran/3:38); the prophet Ibrahim AS hoped that his descendants would become imams (leaders) (QS. Al-Baqarah/2:124); some descendants of the prophet Ibrahim were placed near the Baitullah so that they would pray; descendants who can become imams; descendants who perform shalat.
c. Descendants (zurriyat) as the origin of human beings, who, while still in the loins of their parents, had their testimony taken that Allah Swt is their Lord.

From the explanation above, it can be understood that the term zurriyat for a child indicates that a child has potential to be developed; the potential refers to: (1) the recognition of Allah as his God (QS/7:172); (2) the potential to submit and be obedient to Allah, which is illustrated by the meaning of qurrata 'ayun and is a follow-up for those who are pious (Ar-Razy, xi tt:46). To be able to be part of the muttaqin, one must of course have knowledge and good deeds. Even though every child has the potential to submit to Allah the almighty, it turns out that not all realize that potential. This has been warned of by Allah in answering the prayer of the prophet Ibrahim PBUH: among the descendants of the prophet Ibrahim PBUH there are those who become imams and there are those who do wrong (QS. al-Baqarah/2:124); likewise, among the descendants (zurriyat) of the prophet Ishaq PBUH there are muhsin and zalimu linafsih (QS. As-Shaffat/37:113).

The term Thifl

The term thifl in the Qur'an indicates that thifl is a newborn child whose growth still needs the help of his parents until he reaches baligh (Ibn Zakariya, iii 1979:322). Az-Zabidy (tt:7263-7264) states that the term thifl is used for children until they reach the mumayyiz phase, and after that phase the term thifl is no longer used (Abu al-'Abbas, ii tt:374). The term thifl in the verses above describes the biological growth of children up to the age of ihtilam (adulthood), and the psychological growth of children up to the stage of not yet understanding the female aurat (intimate parts) and not yet being able to distinguish the aurat.
The term Gulaam

The term gulaam is used for humans and others; in the masdar form gulman, it means to have a strong desire for the relationship between husband and wife (Ibn Manzhur, xii tt:439). Of the 13 occurrences of the term gulaam in its single, dual and plural forms, several come as good news from Allah the almighty to the prophets Ibrahim, Ishaq and Zakaria AS, giving them a gulam. In understanding the Qur'an, as-Shaffaat/37:101, At-Tabary states that the term gulamin halim refers to a child who is very patient once he has grown up, because in childhood children are still cradled and are not yet called gulamin halim (At-Tabary, xxi tt:72). In the Qur'an, as-Shaffaat/37:102, it is mentioned that when the child (the prophet Ismail AS) reached the age of being able to work with the prophet Ibrahim, the prophet Ibrahim described his dream, and the prophet Ismail answered by saying: do what is entrusted to you; Insya Allah, I will be among those who are patient. This age is thirteen years, and at that time the prophet Ismail AS was already a gulamun halim, a child who has the perfect level of patience (Ar-Razi, xiii tt:138). The verse above shows that at the phase of ghulam (early adolescence), children already have a self-identity and a strong personality as a result of their education in childhood.

The term As-Sabiyy

The term as-sabiyy (infant) is mentioned in the Qur'an in two verses, namely in chapter 19:12 and 29. The first verse shows that the prophet Yahya AS was given wisdom in infancy, and the second shows that the prophet Isa AS was still in the cradle (Ibn Zakariya, iii 1979:322). In the Qur'an, Maryam/19:12, it is mentioned that when the prophet Yahya AS was still a sabiyy, he was given the ability to understand the book of Allah SWT in childhood, before adulthood: "[Allah] said, 'O John, take the Scripture with determination.' And We gave him judgement [while yet] a boy" (QS. Maryam/19:12). The word "hikmah" in this verse refers to the understanding of the Taurat, religion or prophethood, because Allah SWT appointed the prophet Yahya AS and the prophet Isa AS as prophets when they were children (Ar-Razi, x tt:276; Az-Zamakhsyari, iv tt:68). From this verse, we can understand that religious learning should start from the time when the child is still in the cradle, through gentle treatment, exemplifying religious values in daily behavior and attitudes, and the like.

The term Nasl

The term nasl is mentioned in the Qur'an, as-Sajadah/32:8: "Then He made his descendants from the essence". Nasl can be understood as a child or a descendant, with the plural forms ansal and nasilah. Ansala means to fall, and is used in the sense of falling, growing and developing. An-nasl is that which is separated from something; a child is so called because he is born (separated) from his parent (Ibn Manzur, xx tt:660).

The term Rabaib

"Prohibited to you [for marriage] are your mothers, your daughters, your sisters, your father's sisters, your mother's sisters, your brother's daughters, your sister's daughters, your [milk] mothers who nursed you, your sisters through nursing, your wives' mothers, and your step-daughters under your guardianship [born] of your wives unto whom you have gone in. But if you have not gone in unto them, there is no sin upon you. And [also prohibited are] the wives of your sons who are from your [own] loins, and that you take [in marriage] two sisters simultaneously, except for what has already occurred. Indeed, Allah is ever Forgiving and Merciful" (QS. An-Nisa/4:23)
Rabibatu ar-rajul is the child of a man's wife by a previous husband (a stepchild). Ibn Abbas (v tt:337) states that rabaib means the daughters of one's wife, not biological children; they are called rabaib because the stepdaughter is within the scope of her stepfather's care (Ar-Razi, tt:609).

The term 'Usbah

'Usbah refers to a group of people numbering between 10 and 40; the 'usbah is the close family on the father's side, because they are with him (Ibn Manzur, xiii tt:515). The word of Allah in the Qur'an, Yusuf/12:14: "They said, 'If a wolf should eat him while we are a [strong] clan, indeed, we would be losers.'"

The term Ad'iya

The word of Allah in the Qur'an, Al-Ahzab/33:4: "Allah has not made for a man two hearts in his interior. And He has not made your wives whom you declare unlawful your mothers. And He has not made your adopted sons your [true] sons. That is [merely] your saying by your mouths, but Allah says the truth, and He guides to the [right] way." The verse expressly states that adopted children are not the same as children (ibn) who are the descendants of a married couple; the statement "you are my father" or "you are my child" does not change the position of a child or a father.

CONCLUSION

Based on the findings of the study, it can be concluded that:
1. The differences in terms for "child" in the Qur'an indicate the importance of different treatments in dealing with children at their stages of growth and development, and the importance of the status, role and function of a child for parents.
2. The term "child" in the Qur'an is conveyed through the characters and traits attached to the child. For instance, a growing child (thifl), growing from a physically weak and
2020-07-02T10:12:26.504Z
2020-06-30T00:00:00.000
{ "year": 2020, "sha1": "d9040fc4f49045fc0e2af176d339e5276474f324", "oa_license": "CCBYSA", "oa_url": "http://jurnaltarbiyah.uinsu.ac.id/index.php/tarbiyah/article/download/685/556", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "21ae7c128789795b2af09879074d8e1bc8806a83", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Sociology" ] }
255734957
pes2o/s2orc
v3-fos-license
Travelers' Subjective Well-Being as an Environmental Practice: Do Airport Buildings' Eco-Design, Brand Engagement, and Brand Experience Matter?

The physical environment of airports plays a crucial role in improving travelers' perceptions and well-being. Adopting a green physical environment may elicit customers' cognitive and emotional responses and provide a convenient consumption environment. Brand experience and engagement are other important consumer–firm interactions that influence the attributes of the passengers' well-being. The current study sought to assess the impact of the eco-design of buildings, brand experience and engagement on the well-being of travelers at an international airport in Saudi Arabia. Additionally, the current study investigated the possible effects of eco-design on airport experience and engagement. The results of the structural equation modeling analysis revealed that the eco-design of airport buildings was independently associated with passengers' well-being and brand engagement, but not with brand experience. Additionally, well-being was significantly predicted by brand engagement and experience. Airport managers are advised to adopt an internal eco-design to help promote passengers' connection with the brand and improve their well-being, which would eventually be reflected in their behavioral attributes and decision-making.

Introduction

The aviation industry accounts for a considerable share of greenhouse gas emissions across the world, and efforts have been made to address the increasing impact of aviation on the environment [1,2]. The anticipated growth in the global number of air passengers, particularly in the recovery period after the COVID-19 era [3], is essentially accompanied by negative effects on climate change, due to fossil fuel consumption [4]. Therefore, multiple organizations and research bodies have sought to implement regulatory measures and adopt strict approaches in regard to manufacturing products in a way that aims to protect the environment. This can be ideally attained through creating environmentally-friendly products and measures based on the expectations of the consumers [5]. Concomitantly, it has been shown that product appearance had a significant impact on consumers' perceptions [6], and the existence of eco-products has been an important factor in promoting the concept of environment-friendly entities [7]. Therefore, eco-design of buildings has frequently been a matter of research in the tourism sector, and the aviation industry is no exception [8]. Implementing an eco-design in an airport is defined as adopting a human-made environment that potentially impacts the emotions, mental health, behaviors and physical health of occupants within the airport building [9]. Eco-design is a multifaceted concept that integrates environmental attributes with measures of creating sustainable solutions. These measures eventually help satisfy user desires and needs [10]. Recently, airports have begun to seek ways to ensure that passengers have positive perceptions and attitudes when in the airport, feel engaged with the airport and have good psychological well-being in it, in order to actively ensure customer retention and long-term success in a highly-competitive sector [8,11]. Indeed, the efficient utilization of eco-design is expected to be associated with positive outcomes, including reduced emotional exhaustion, stress reduction and customer retention, which are all crucial elements of a company's success [12].
Accumulating investigations into the cognitive and behavioral impacts of the eco-friendly design of buildings have shown that a green physical environment has a positive healing effect on negative feelings, depression, distress and anxiety [13]. However, in the context of airports, little is known about the attributes of customer well-being after the implementation of eco-design in airport buildings, such as eco spaces, living plants, green décor and green atmospherics [13,14]. It has previously been shown that adopting an eco-design would help provide a relaxing consumption environment that supports consumers' perceptions of well-being [12]. In this vein, travelers' subjective well-being is referred to as the extent to which a given brand positively contributes to enhancing the quality of life delivered through the service provided by the airport [15]. Subjective well-being relates more to self-evaluation of life satisfaction and happiness than to objective measurement of economic and health aspects and other well-being attributes [16]. Customer well-being may also be connected to brand-related variables, such as engagement with the airport brand and the overall experience of passengers [11]. Experience is a key element in developing positive memories, and it has frequently been cited as a driving force of the market [17]. As with other industries, experience in the tourism sector is referred to as the interaction between the company and the consumer that elicits emotional interactions providing memorable services [18]. From another perspective, brand engagement has been identified as the level of connection and interaction between consumers and a brand. Consumers' engagement with a brand has evolved as a channel through which a consumer forms a passion for a brand, and develops an individual disposition that builds commitment towards a relationship with the brand [19]. Customer experience and engagement can both serve as important catalysts for high customer satisfaction and better business outcomes [20]. Therefore, the interplay of customer engagement and experience, as well as customers' well-being, should be a matter of research. In Saudi Arabia, to the best of our knowledge, no studies have examined the impact of an airport's eco-design, and brand experience and engagement, on the psychological parameters of travelers. On the national level, and considering the scarcity of knowledge regarding the topic, determinants of psychological attributes are considered important in future plans to improve the quality of services and enhance marketing performance. Therefore, the creation of a green servicescape environment needs to be traced back and linked to brand-related parameters, eventually reflected in travelers' satisfaction. The present study aimed to assess the impact of the green physical environment of airport buildings on customers' subjective well-being (as a function of life satisfaction and happiness), as well as brand experience and engagement. Given that travelers are temporarily subjected to the travel experience in airport lounges for specific time periods, the present study focused on the subjective well-being attribute. Additionally, the current investigation explored the role of customers' experiences and engagement with the brand in their well-being.
Finally, we sought to investigate a potential moderating role of brand experience on the relationship between the eco-design of buildings and subjective well-being.

The Concept of Eco-Design of Airports' Physical Environment

The physical environment of buildings is a term used interchangeably with building design and building atmospherics [11]. Building design is an important factor that facilitates the buying process by generating distinct emotional effects on consumers to increase the likelihood of their purchasing a product or a service [21]. Airport buildings include the terminals, office buildings and hangars for the airplanes. The physical environment of traveler terminals has the utmost importance because these are heavily utilized by large numbers of passengers [11,22]. Green design cues include aesthetic and functional elements that affect consumers' evaluation of a given destination. A biophilic design consists of using natural cues, such as botanical gardens, in order to attract consumers, improving approach behavior and reducing the stressful atmosphere of the daily routine of travelers in airports [23]. It is, therefore, plausible to design airport buildings, particularly passenger terminals, in a passenger-friendly pattern. With the extensive variation of airport designs, greening has been adopted as a core parameter in recent designs [24]. Basically, multiple green constituents of airport buildings have been increasingly considered, such as green décor (green items, plants and green walls), green ambiance (natural scents, air freshness and natural light) and green spaces [25]. Notably, eco-designs have been linked to passengers' emotional responses and self-evaluation of buildings [26]. There is a growing body of evidence indicating the role of environmental psychology and health atmospherics in supporting psychological health and enhancing experiences with services and products, in order to enhance post-purchase behaviors in the tourism and hospitality sectors [23,27]. Additionally, many researchers have assessed the factors associated with a green physical environment, such as consumers' attitudes, resilience, satisfaction and brand engagement [9,28,29]. For instance, Han and Hyun [9] recently showed that a green environment (in outdoor and indoor settings) played an important role in improving mental health perceptions, loyalty and emotional well-being. Therefore, the following hypotheses were developed:

Hypothesis 1 (H1): Eco-design of the airport significantly influences brand engagement.

Hypothesis 2 (H2): Eco-design of the airport significantly influences brand experience.

Brand Experience and Engagement

Designing a green physical environment is an important facet of efforts aimed at airport greening [11]. Indeed, since the physical environment frequently provides a tangible cue that can be relied on [30], it is critical to underline the tangible experience of tourists and visitors when designing tourism products and services [11,27]. Furthermore, the eco-design should be a tangible cue when visitors provide judgements on their experience at the airport. Researchers in previous studies investigated the green physical environment in airports, including green items (walls, plants, etc.), green ambient conditions and green spaces, as well as resting areas, hallways, waiting lounges and restaurants [5,11,27,31,32].
Based on these studies, there is a consensus that a green physical environment induces positive responses in individuals' consumption behavior and enhances their experiences. Unsurprisingly, passengers' experiences, defined as the interactions and activities that the passengers have in an airport [33], elicit emotional connections and excitement about the service and/or product [34]. Interestingly, brand experience involves a cumulative experience of multiple contact points along the consumer journey rather than a single touch point with the brand [35]. Brand experience consists of several domains, including sensory (the experience as encountered with the senses), behavioral (the actions undertaken by consumers due to the experience), affective (related to emotional interactions due to the experience) and intellectual (due to perceptions and thoughts formed as a result of the experience) [36]. The current study focused on the sensory evaluation because it is the most influential domain in decision-making [12]. Brand experience at airports includes the link between the passenger and airport objects and staff. The active participation of travelers mediates deeper feelings. Discomfort with a given brand would elicit negative feelings, which might eventually impact travel experiences [37]. Considering that the eco-design of a store building mediates a relaxing environment and a well-being consumption paradigm, visitors may form positive behaviors, increase brand engagement and enhance the reputation or image of the brand [12]. Collectively, brand experience is influenced by service quality and physical services, and the experience, in turn, impacts brand engagement and well-being. Of note, brand engagement is another important attribute in the understanding of marketing domains. Engagement with a brand encompasses a number of non-transactional behaviors which are elicited because of the consumers' interests [38]. It is a multidimensional concept that relies on customers' expressions of their emotional, cognitive and behavioral attributes [39,40]. Therefore, brand engagement is referred to as the level of the consumer's state of mind related to the brand, self-motivation and the context, and is characterized by distinct levels of behavioral, emotional and cognitive activities during the interaction with a brand [39]. Few studies have examined the impact of green practices on brand engagement. Lee et al. [41] indicated that a green physical environment at luxury hotels received more favorable evaluations than hotels with non-biophilic designs. These favorable evaluations included economic value and attitudes, which are antecedent predictors of customer engagement [41,42]. This was corroborated by Alfakhri et al. [43], where green designs in the hospitality industry impacted customer experience and subsequent purchasing behaviors. Chuah et al. [44] showed a significant effect of the perceived corporate social responsibility of airline corporations on sustainable customer engagement, and such a relationship was significantly moderated by green trust and environmental concerns. These findings guide airlines in addressing the effects of corporate social responsibility and green practices on brand engagement and communication [44]. Therefore, brand engagement acts dynamically, where a passenger interacts with the airport across the travel experience [45].
Travelers can promote the airport services, staff and facilities, and the connection-related measures undertaken by airports can ultimately improve brand engagement [46]. Brand engagement is a common attribute which encourages airports to improve their services so that this is reflected in engagement behaviors [47]. Therefore, brand engagement is another measure of brand equity in the airport industry. Notably, there is a potential interaction between brand engagement and experience. This is because consumers' experience may be quickly attained, or time may be required to develop engagement with a brand before forming a good perception [48]. Therefore, active engagement may mediate a good brand experience for passengers.

Travelers' Subjective Well-Being

In general, philosophers have defined well-being as the quality of a good life, and others have expanded the concept to a good society [49]. However, more specific terms have been proposed in the subsequent literature. An objective approach to well-being implies that quality of life indicators are the major determinants of subjects' well-being; these include material resources, such as housing, income and food, as well as social domains, such as health, education, social networks, etc. [16,50]. A subjective approach has also frequently been utilized, which relies on self-evaluation of one's life. In particular, subjective well-being is mainly oriented towards self-perceptions of life satisfaction (a cognitive attribute) and happiness or unhappiness (an emotional attribute) [49]. There has been a gradual increase in interest in the assessment of subjective well-being, given that it contributes to favorable life outcomes, such that individuals with high levels of subjective well-being possess stronger immune systems, have a low prevalence of cardiovascular disease and are more pro-social and cooperative [51,52]. In the tourism industry, well-being perception is defined as the perception of travelers of the extent to which a given airport brand positively mediates the quality-of-life enhancement [15]. It relates to self-evaluation of the quality of life within optimal physiological and psychological aspects, and it implies emotional and cognitive evaluation of life [53]. Consumers place importance on enrichment of the quality of life while making purchase decisions. From another perspective, travel is a significant source of positive emotions (e.g., relaxation, pleasure and prestige), and travel can be seen as an important contributor to well-being [54]. Consistent with early research [15,55], a traveler's well-being was defined as the extent to which a traveler's experience with a given airline lounge influences that traveler's self-perceived quality of life. In an airline lounge, the traveler's experience is perceived to influence well-being if he or she perceives that using the lounge may improve the quality of the travel experience. For example, Liang et al. [56] found that visitors with higher degrees of satisfaction regarding indoor environmental quality in green buildings had significantly higher levels of subjective well-being. Furthermore, Kim et al. [12] assessed the impact of multiple domains, including sensory, emotional and cognitive evaluation, on well-being perception among airway passengers. The results showed that travelers' cognitive and sensory evaluations of airport lounges were antecedent predictors of well-being perceptions [12].
Cognitive factors relied on items related to physical and non-physical attributes, whereas the sensory factors were primarily focused on servicescape attributes that form the immediate responses of travelers [12,57]. In another recent quantitative investigation, Han et al. [58] revealed that specially designated green areas and natural surroundings in an airport exert significant positive impacts on the mental health value of that airport's occupants. The travel experience is enhanced when a passenger is relaxed in a comfortable atmosphere or accomplishes what he/she wanted to do [12]. In services marketing, cognitive evaluation of services is known as the perceived quality of services by passengers regarding the overall experience in airline lounges. The interaction between travelers and the facility or service in an airline lounge may also provide cognitive stimulation [12]. Importantly, cognitive evaluation has a significant role in well-being perception [12]. Therefore, passengers' perceptions of service quality (represented as the eco-design of airport buildings in the current study) may be linked to subjective well-being. Green ambience also has significant effects on brand image, which indicates that consumers could perceive green ambience favorably [12]. In their study, Han et al. [58] stressed the significant effects of natural surroundings and the green physical environment on an airport's image. Furthermore, a traveler's experience in an airline lounge may also influence that traveler's well-being. Based on these observations, the hypotheses of the current study were formulated as follows:

Hypothesis 3 (H3): Eco-design of the airport significantly influences travelers' subjective well-being.

Hypothesis 4 (H4): Airport's brand engagement significantly influences brand experience.

Hypothesis 5 (H5): Airport's brand engagement significantly influences travelers' subjective well-being.

Hypothesis 6 (H6): Brand experience significantly influences travelers' subjective well-being.

Hypothesis 7 (H7): Brand experience significantly moderates the relationship between eco-design and travelers' subjective well-being.

A full framework of the hypotheses is illustrated in Figure 1.
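For illustration, the hypothesized paths can also be written as a system of structural equations, where the path coefficients (γ, β) and error terms (ζ) are assumed notation rather than symbols taken from the study:

$$
\begin{aligned}
\text{Engagement} &= \gamma_{1}\,\text{EcoDesign} + \zeta_{1} \quad &\text{(H1)}\\
\text{Experience} &= \gamma_{2}\,\text{EcoDesign} + \beta_{1}\,\text{Engagement} + \zeta_{2} \quad &\text{(H2, H4)}\\
\text{WellBeing} &= \gamma_{3}\,\text{EcoDesign} + \beta_{2}\,\text{Engagement} + \beta_{3}\,\text{Experience} + \beta_{4}\,(\text{EcoDesign}\times\text{Experience}) + \zeta_{3} \quad &\text{(H3, H5--H7)}
\end{aligned}
$$

Under this parameterization, the moderation hypothesis H7 corresponds to the interaction coefficient β4 being significantly different from zero.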
Construct Measures

A survey-based study was conducted, adapting questions to cover the holistic idea that served the objectives of the study. Seven items on an airport's eco-design were adapted from Han et al. [59]. These items covered the dimensions of airport environmental design that travelers encounter during their stays at airports. Moreover, eight items were adapted from Prentice et al. [35] and Obilo et al. [38] to evaluate brand engagement. These items explored the emotional and rational attachments between passengers and the airports. Additionally, three items on brand experience and well-being were adapted from Ma et al. [18]. These items helped fathom the essence behind the passenger's perception of the airport as a brand and tourists' behavioral outcomes. These items were collected on a five-point Likert scale, ranging from 1 = Strongly disagree to 5 = Strongly agree. The included items under each domain are listed in the Supplementary Data (Table S1).

Data Collection

Travelling at airports is often a mass phenomenon demanding extensive passenger involvement levels. Thus, the present study collected data using an e-survey, which was chosen since it is easily accessible, cost-effective, and responses are received quickly [60]. We selected the respondents for the current study with a non-probability convenience sample at the King Fahd International Airport. The reason behind this airport being selected as the context of analysis is that it is one of the vital international airports in Saudi Arabia and is considered one of the busiest airports in the country [61]. Moreover, we ensured that the participants in the survey had fresh memories of the airport, according to the recommendations of Kim et al. [62]. So, we targeted passengers who had been at the airport not more than one month previously to ensure accurate and specific results. We then distributed the e-survey by informing participants through a multinational travel agency from 01 June to 30 September 2022, at the peak of international passenger traffic at the King Fahd International Airport in Dammam city, Saudi Arabia.

Statistical Analysis

Data analysis was carried out using RStudio (R version 4.1.1). Categorical variables were presented as frequencies and percentages. Exploratory and confirmatory factor analyses were applied to assess the validity of the proposed model. A partial least squares structural equation modeling (PLS-SEM) approach was used.
This method is feasibly used in models consisting of moderating relationships because the indicators are linearly combined to construct composite variables [63][64][65]. The convergent validity was assessed using composite reliability (CR), the exact reliability coefficient (RhoA), average variance extracted (AVE) and Cronbach's alpha [66,67]. The discriminant validity of the model was assessed using the Fornell-Larcker (F-L) criteria and the heterotrait-monotrait (HTMT) ratio of correlations. The results of the bootstrapped structural paths were expressed as beta coefficients (β) and 95% confidence intervals (95%CI).

Demographic Characteristics

The responses of a total of 352 participants were analyzed in the current study. Females represented approximately two-thirds of the sample (67.9%). More than half of the respondents were married (52.0%) and had obtained a university degree (52.0%). Less than half of the sample had no children (43.5%). Approximately one-third (34.9%) of the respondents had a monthly income of >12,000 SAR (Table 1).

Internal Consistency and Convergent Validity

To confirm the validity of the survey used, an exploratory factor analysis (EFA) was carried out using a promax rotation. The EFA revealed a model consisting of three constructs. However, one item was excluded from the eco-design of airports domain because it did not load significantly on its main construct (the factor loading of the variable Eco_07 was 0.49, Table S1). Based on the confirmatory factor analysis, the model showed satisfactory fit statistics (χ2 = 358.329, df = 114, p < 0.001, CFI = 0.951, TLI = 0.941, RMSEA = 0.078). Furthermore, the standardized factor loadings were ≥0.7, indicating significant loadings (Table 2). The internal consistency of the survey subdomains was good, as confirmed by the high Cronbach's alpha values (ranging between 0.735 and 0.947). Furthermore, the RhoA values exceeded 0.7 and the AVE values exceeded 0.5 [68] (Table 2).

Discriminant Validity

To confirm the discriminant validity of our model, the square roots of the AVE values were compared to the correlations between different constructs (Table 3). The results showed that the correlation coefficients were lower than the square roots of the AVE. Furthermore, the HTMT values were not higher than 0.85 (Table 4) [69]. In addition, the bootstrap confidence intervals of HTMT were not significantly higher than 1 (Table S2); therefore, the discriminant validity was assured.
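To make these reliability and validity criteria concrete, the short sketch below computes Cronbach's alpha from raw item scores, and CR and AVE from standardized loadings, then applies the Fornell-Larcker check. It is a minimal illustration in Python (the analysis itself was run in R), and the example loadings and correlation are hypothetical values, not the study's results.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)), for standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

# Hypothetical standardized loadings for one construct (not study values).
lam_eco = [0.78, 0.81, 0.74, 0.85, 0.79, 0.72]
print(f"CR  = {composite_reliability(lam_eco):.3f}")  # convergent if > 0.7
print(f"AVE = {ave(lam_eco):.3f}")                    # convergent if > 0.5

# Fornell-Larcker criterion: sqrt(AVE) of the construct must exceed its
# correlation with every other construct (r_other is hypothetical here).
r_other = 0.55
assert np.sqrt(ave(lam_eco)) > r_other
```

Together, CR > 0.7 and AVE > 0.5 reproduce the convergent validity thresholds used above, while sqrt(AVE) exceeding every inter-construct correlation reproduces the F-L criterion.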
Discussion

The results of the current study add to the existing literature regarding the impact of a green physical environment in airports. Based on a robust quantitative analysis, the current study supported hypothesis H3 and revealed that the eco-design of airport buildings significantly contributed to enhancing passengers' subjective well-being, which is a key concept of success for every business. Well-being was also independently associated with brand experience (H6 was accepted). It was unsurprising that the biophilic building design effectively elicited cognitive and emotional responses, perceived during the overall evaluation of buildings and places in the airport [12]. Practitioners and researchers have stressed that nature provokes health benefits and emotional responses, particularly for individuals who are continually connected to the natural environment [70,71]. The integration of natural elements into a hotel's physical environment has led to increased customer retention, satisfaction and well-being [72]. Han and co-authors [59] also demonstrated a strong independent relationship between the green physical environment at airports and customers' subjective well-being. Moon et al. [11] also emphasized the need for a biophilic design to induce affective and cognitive appraisals of passengers' experiences. These consistent findings stress the importance of green items and spaces in supporting mental health perception and the overall image of the brand.

In the present study, we also showed that eco-design was a significant predictor of enhanced brand engagement (H1 was accepted). This was in agreement with previous evidence indicating that employing a green service design in hotels would help in the engagement of customers with the brand [73,74]. Additionally, the eco-design of airport buildings significantly influenced the reputation of airports [59]. The result also agreed with earlier tourism investigations which underlined the importance of brand reputation and engagement in explaining subsequent behaviors and decision-making [75,76]. In the hospitality industry, Lee et al. [77] stated that customers perceive hotels with biophilic designs as being superior in quality compared to those with non-biophilic designs. Another study similarly showed that customers would have a stronger willingness to visit hotels with a green physical environment and to engage with green hotel brands [78]. In a recent study, Rosenbaum et al. [79] studied consumers' neural activation following exposure to natural elements, and showed that biophilic designs elicited consumers' interest, attention and relaxation, and supported brand engagement. Firms and marketing entities are becoming aware of the potential benefits of green practices and their relationships with consumer marketplace behavior [80]. Brand engagement comprises a two-way interaction path between the consumer and the brand, and the psychological perception of the subjects (consumers) is the most important factor in the creation of engagement. The perceived impacts of a green physical environment affected passengers in a way that promoted their engagement with the brand. Collectively, atmospheric designs have important implications for customers' attachment and engagement, and this should be exploited in further communicative strategies based on visitors' familiarity with airports. Incorporating a green environment elicits positive consumer evaluations, supports the well-being construct and enhances the decision-making process.

As mentioned earlier (Section 2.2), a proportion of passengers may need time to become engaged with an airport brand before developing a good brand experience [48]. In the current study, brand engagement significantly impacted the experience; hence, H4 was supported. This was supported by the fact that consumers' experience is formed via a number of stimuli which develop during direct and indirect interaction with a given brand [81]. Basically, brand interaction usually impacts the evaluation process and subsequently influences post-consumption experiences, attitudes and moods. In this way, brand engagement and consumers' experience can be linearly correlated [82]. In the current analysis, the eco-design of airports positively influenced brand engagement, which in turn positively affected brand experience. Nevertheless, the eco-design was not directly associated with brand experience [74].
A possible explanation of this finding is that only a single dimension of brand experience was included (sensory experience), and this might have impacted the interactive scheme of the model. However, the findings of the current study showed that eco-design indirectly influenced the sensory brand experience through brand engagement. Ultimately, it seems that enhancing the green environment at airports would support subjective well-being through three pathways: a direct effect and indirect effects via brand experience and engagement.

Strengths and Limitations

In the current study, a survey with previously validated items was utilized for data collection, and the validity was confirmed statistically on the sample under study. The current investigation employed robust statistical approaches, and the model was well fitted; hence, we could retrieve reliable results. The findings of the current study fill gaps in the existing literature, particularly in the context of scant evidence in the airline industry. Although the impact of eco-designs on consumer behavior has been investigated elsewhere in the tourism literature [83-85], little is known about the effects of biophilic designs at airports on visitor behaviors and responses. The results presented in the current study provide a robust foundation regarding the role of the green physical environment of airports in the decision-making process of customer behavior strategies. The current study focused on important attributes that would support the firm's reputation and consumers' well-being, which are undoubtedly crucial elements for every business. In particular, the current study assessed how these attributes could be elicited by utilizing eco-designs, a matter which has scarcely been assessed. Empirically, evidence from the present findings demonstrated the importance of eco-design within the postulated framework, such that it was an essential element that influenced its subsequent constructs in the customers' decision-making process. Consequently, airports should not only stress the functional facets that satisfy passengers' needs, but also target the emotional and cognitive needs of visitors. In essence, providing a green atmosphere is a fundamental way to help visitors feel relaxed, healthy and happy, in order to support brand-related attributes and airport reputation relative to other brands. This could be attained by increasing eco-spaces, living plants, green rest areas and green physical environments. All these elements would eventually support the airport brand, increase the subjective well-being of customers and enhance behavioral intentions.

However, the study was not without limitations. Data collection was performed using a convenience sampling approach. Furthermore, the study was carried out among travelers from a single airport. These limitations might limit the generalizability of the obtained results to a greater population in other airports inside and outside Saudi Arabia. The data may also be subject to information bias due to the self-reported questions. Additional studies should involve multiple airports in a single country or in multiple countries, and open-ended questions may be added to the survey to employ a mixed design. A random sampling technique might also be adopted to improve generalizability.
Focusing on the brand experience domain, the present study relied exclusively on sensory experience (rather than the behavioral, affective and intellectual domains of experience), and this might have influenced the interpretation of the direct and moderating relationships with other domains in our hypothesized framework. Therefore, future studies might benefit from including other experience attributes in order to gain insights into possible associations with other variables and domains. Another limitation is that subjective well-being was utilized as the key passenger behavioral variable. The theoretical framework should be expanded in future investigations by including more meaningful indicators of consumer behavior that reflect the decision-making process.

Conclusions

The current study included a sample of airline passengers to assess the role of the green physical environment at King Fahd International Airport in enhancing passengers' experience, well-being and engagement with the airport brand. Based on a validated structural model, the current study showed that the eco-design positively influenced passengers' well-being and engagement with the brand. Subjective well-being was also influenced by passengers' experience and brand engagement. The current findings also showed no significant moderating role of brand experience on the relationship between eco-design and well-being. Our results support the argument that a green physical environment positively affects the active engagement of passengers with an airport brand and customer well-being, and we suggest these important ingredients of airport passenger behavior are variables that warrant future emphasis by researchers and stakeholders in the airline industry. Airport managers are advised to implement green environmental measures and to support sustainable, environment-friendly objects inside the airport system and in airport buildings, in order to directly enhance brand engagement and well-being and indirectly support brand experience.

Supplementary Materials: The following supporting information can be downloaded at: https:
Sailing in rough waters: Examining volatility of fMRI noise

Background: The assumption that functional magnetic resonance imaging (fMRI) noise has constant volatility has recently been challenged by studies examining heteroscedasticity arising from head motion and physiological noise. The present study builds on this work using the latest methods from the field of financial mathematics to model fMRI noise volatility.

Methods: Multi-echo phantom and human fMRI scans were used and realised volatility was estimated. The Hurst parameter H ∈ (0, 1), which governs the roughness/irregularity of realised volatility time series, was estimated. Calibration of H was performed pathwise, using well-established neural network calibration tools.

Results: In all experiments the calibrated volatility fell within the rough case, H < 0.5, and on average fMRI noise was very rough, with 0.03 < H < 0.05. Some edge effects were also observed, whereby H was larger near the edges of the phantoms.

Discussion: The findings suggest that fMRI volatility is not only non-constant, but also substantially more irregular than a standard Brownian motion. Thus, further research is needed to examine the impact such pronounced oscillations in the volatility of fMRI noise have on data analyses.

Introduction

A given functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series can be defined as

y_t = μ_t + v_t ε_t,    (1)

where μ_t is the mean, ε_t is a one-dimensional noise process, and v_t is the volatility of the noise process. Detrending is typically conducted as part of preprocessing to remove signal drift. Thus, the noise in a given fMRI time series is often assumed to have constant volatility, meaning that v_t in Eq. (1) could be replaced by a constant v. This assumption, however, has recently been challenged, and there has been increasing interest in exploring time-dependent properties of fMRI noise [1-6]. It has been shown that factors such as head motion and physiological processes, including respiration and pulse, can introduce heteroscedasticity into the time series [1-3,7]. Heteroscedasticity in turn has been found to complicate linear modelling, which has led to the introduction of several statistical models to counteract the impact of these artifacts [1,2,7]. One limitation of these models is that they cannot explain non-constant volatility arising from unknown or uncontrollable sources, such as scanner noise.

As the volatility of a time series cannot be directly observed, a plethora of deterministic and stochastic models have been proposed to estimate it in financial returns data [8-10]. Over the years, direct comparisons of different volatility models have shown that stochastic models, which assume that the logarithm of the volatility process behaves like standard Brownian noise with Hurst parameter H = 0.5, outperform their deterministic, data-driven counterparts, providing a better fit to data [11-13]. This assumption implies in particular that volatility is not constant, and exhibits an oscillatory behaviour on any finite time interval. This oscillatory behaviour is governed by a parameter H, which in the Brownian case takes the value H = 0.5. More recently, rough stochastic volatility models have been considered (see [14-18]), where the parameter H is allowed to vary in the range H ∈ (0, 1). In these models, as mentioned above, the parameter H governs the oscillations of the volatility process; the lower the parameter H, the stronger the oscillations on any finite interval.
In particular, the values H ∈ (0, 0.5) correspond to the rough case (i.e. rougher paths than a standard Brownian motion). Fig. 1 shows the roughness/irregularity of volatility paths for different H values. As H approaches 0 the paths become more irregular/rough.

A rough stochastic volatility model, the rough Bergomi (rBergomi) model introduced in [15] by Bayer, Friz and Gatheral, is described by the system

dS_t / S_t = √(v_t) dB_t,  with B := ρ W + √(1 − ρ²) W⊥,
v_t = ξ_0(t) exp( η √(2H) ∫_0^t (t − s)^(H − 1/2) dW_s − (η²/2) t^(2H) ),    (2)

where W and W⊥ represent two independent standard Brownian motions with ρ ∈ [−1, 1], η > 0 describes the volatility of volatility, and ξ_0(⋅) describes the initial variance curve, which we assume to be constant. Our motivation for choosing the mean reverting, driftless rough Bergomi model (2) derives from the practice of detrending mentioned above. As it is common practice to remove linear drifts from fMRI data prior to further analysis, such a driftless model would be a good fit to the data. Volatility processes simulated using the rBergomi model exhibit remarkable similarity to realised volatility processes [15,18]. Furthermore, the rBergomi and other rough models introduced since have been found to provide important improvements to forecasting volatility [9,10,15,19].

In addition to improving forecasting accuracy, rough models can be used to assess the smoothness of a given process by estimating the parameter H [17,18,20,21]. Estimating the parameter H can provide information about the extent of heteroscedasticity in the series, but requires access to the realised, historical volatility process, which cannot be directly observed. To bypass this difficulty, in finance intra-day data, such as 5-minute asset price returns, are used to estimate daily realised volatility [22-24]. The daily estimates are then combined to form a realised volatility process, providing information about daily variances in an asset price over the course of months or years. Considering recent calls to explore the possibility of applying models from the field of financial mathematics to fMRI [6,25], and the visual similarities between financial returns data and fMRI BOLD signal (Fig. 2), such an approach could be applied to fMRI data as well to examine time-dependent behaviour in the volatility of the noise process. Utilising multi-echo acquisition, the data from each echo could be used as intra-time point data to estimate volatility. Thus, in a manner similar to the standard combination of data from each echo time, we can produce a realised volatility series. These series could then be used to estimate the smoothness of volatility in fMRI data using models such as the rBergomi.

Estimating rBergomi model parameters, including H, is computationally expensive and often relies on the use of Monte Carlo based calibration methods [26,27]. This limits the use of this model in practice despite the benefits it offers [15,18,28]. Recently, neural networks have been proposed as an efficient way to solve the calibration problem [29-32]. Neural networks provide a powerful way of identifying relationships between input parameters and model output and can be particularly useful for models that do not have a closed-form solution [29,30]. Recent work found that a neural network calibration framework can be successfully applied to a range of rough stochastic volatility models to aid accurate pricing and hedging [29,33]. The aim of this paper was to conduct an exploratory empirical study examining the volatility of fMRI noise.
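To make the role of H concrete, the following minimal sketch (our illustration, not code from the paper) simulates the kernel process X_t = ∫_0^t (t − s)^(H − 1/2) dW_s with a crude left-point Euler discretisation of the singular kernel; exact simulation, as used later in the paper, would instead rely on the Cholesky decomposition of the covariance matrix. Smaller H yields visibly rougher paths, as in Fig. 1.

```python
# Illustrative sketch (assumed discretisation, not the paper's exact scheme):
# left-point Euler approximation of X_t = int_0^t (t - s)^(H - 1/2) dW_s.
# Smaller H gives rougher sample paths; H = 0.5 recovers Brownian motion.
import numpy as np

def rough_path(H, n_steps=200, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(scale=np.sqrt(dt), size=n_steps)
    t = np.arange(1, n_steps + 1) * dt
    X = np.empty(n_steps)
    for i in range(n_steps):
        # kernel at left endpoints s_j = j*dt, j <= i, avoiding the singularity
        kernel = (t[i] - np.arange(i + 1) * dt) ** (H - 0.5)
        X[i] = kernel @ dW[: i + 1]
    return t, X

for H in (0.05, 0.3, 0.5):
    t, X = rough_path(H)
    # crude proxy for roughness: mean absolute increment of the path
    print(f"H = {H:.2f}: mean |increment| = {np.abs(np.diff(X)).mean():.3f}")
```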
We were specifically interested in exploring whether the volatility of fMRI noise exhibits time-dependent behaviour that cannot be explained by factors such as head motion and physiological noise. We aimed to collect multi-echo fMRI signal from a phantom to examine thermal noise. We also aimed to examine whether volatility patterns observed in the phantom data were present in noise in human scans. To achieve this aim, multi-echo resting state data were extracted from the ventricles of four participants from two different datasets. Observations collected at each echo time were treated as intra-time point data and were used to estimate realised volatility. The roughness of the realised volatility was assessed by estimating the Hurst parameter H, which was accomplished using neural network calibration tools. As the study was exploratory in nature, we did not have prior hypotheses. However, considering the visual similarities between many financial returns and fMRI BOLD series, we anticipated that the estimated H of the realised volatility processes would lie in the rough volatility range, 0 < H < 0.5.

FMRI data acquisition

Phantom data. Two MRI phantoms filled with liquid material were used to acquire multi-echo fMRI signal consisting entirely of thermal noise. The data were acquired with two different 3 Tesla GE Discovery MR750 units using 32-channel receive-only head coils (Nova Medical, Wilmington, MA, USA). This was done to ensure the findings were not unique to a specific scanner. The functional multi-echo echo planar imaging (EPI) data consisted of 200 volumes and each volume consisted of 18 slices with the following parameters: 2.5 s repetition time (TR), 80° flip angle, 64 × 64 acquisition matrix, 3 mm slice thickness with 4 mm slice gap. The fMRI slices were acquired in an ascending order and eight echo times were used: 12 ms, 28 ms, 44 ms, 60 ms, 76 ms, 92 ms, 108 ms, 124 ms. Eight echo times were used as this was the maximum number of echoes that could be acquired with the MR units used.

Human data. Multi-echo resting state data from two different datasets, ds000258 (https://openneuro.org/datasets/ds000258/versions/1.0.0) and ds000210 (https://openneuro.org/datasets/ds000210/versions/00002), were used to examine whether patterns identified in the phantom data could be seen in vivo. The ds000258 data were acquired with a Siemens Trio 3 Tesla MRI scanner using a 32-channel receive-only head coil. A T1-weighted magnetization prepared rapid gradient echo (MPRAGE) sequence was used to acquire the anatomical data with the following parameters: 1 mm slice thickness and 1.1 s inversion time. The functional multi-echo EPI data consisted of 239 volumes and each volume consisted of 32 oblique slices with the following parameters: 2.47 s TR, 78° flip angle, 64 × 64 matrix size, and 4.4 mm slice thickness with 10% slice gap. Alternating slice acquisition was used with ascending interleaved order, and four echo times were used: 12 ms, 28 ms, 44 ms, and 60 ms.

The data from the second dataset, ds000210, were acquired with a 3 Tesla GE Discovery MR750 unit using a 32-channel receive-only phased-array head coil. A T1-weighted MPRAGE sequence was used to acquire the anatomical data with the following parameters: 2530 ms TR, 1 mm slice thickness, and 1.1 s inversion time. The resting state multi-echo EPI data consisted of 204 volumes and each volume consisted of 46 axial slices. The following parameters were used to acquire the data: 3.0 s TR, 83° flip angle, 72 × 72 matrix size, and 3.0 mm isotropic voxels.
The slices were acquired in inferior-superior interleaved order and three echo times were used: 13.7 ms, 30.0 ms, and 47.0 ms.

FMRI data preprocessing

Phantom data. The phantom data were preprocessed using SPM12 (http://www.fil.ion.ucl.ac.uk/spm). Each echo was preprocessed separately to ensure the echoes could be used as intra-TR data to estimate realised volatility. The following preprocessing steps were taken: slice timing correction was applied first, with the middle slice used as a reference slice. Although no motion was expected, the data were realigned and resliced to correct for head motion and estimate six rigid body transformations. Prior to combining the echoes and estimating realised volatility, linear model based de-trending was conducted.

Human data. As with the phantom data, SPM12 was used to preprocess the human data one echo at a time to enable estimation of realised volatility. The following preprocessing steps were taken: slice timing correction with the middle slice serving as a reference slice, and realignment with reslicing to correct for head motion and estimate six rigid body transformations. The anatomical data were then segmented into grey matter, white matter, cerebrospinal fluid, and skull, after which the anatomical data were co-registered with the mean functional image. After preprocessing, the six rigid body transformations were used to calculate framewise displacement using the spmup_FD function (https://github.com/CPernet/spmup/blob/master/QA/spmup_FD.m). Framewise displacement was then used to determine which participants had the least amount of head motion. From each dataset, the two participants who moved the least were selected (Supplementary Table 1); their data were subjected to linear model based de-trending and then taken forward for further analysis. Additionally, to study the noise present in vivo, the anatomical scans were used to create ventricle masks. Studying signal from the ventricles enabled us to examine the volatility of the combination of scanner and physiological noise while avoiding contamination from true brain signal. Thus, only resting state data extracted from the ventricles were used for further analysis to estimate realised volatility and examine its roughness.

T2*-weighted realised volatility

As our data from the phantoms and ventricles contain only noise, we can re-write Eq. (1) at a given time point t = 1…T as

y_t = v_t ε_t,

since μ_t = 0 when no true brain signal is present. Observations from each echo time n up to the last echo N were treated as intra-TR data, which were used to estimate realised volatility for each point in the time series. To follow standard procedures and to take into consideration the fact that fMRI signal decays rapidly (Supplementary Figs. 1-6), the observations from each echo time were weighted to avoid bias [34]. The weighting was based on T2* estimates, which were calculated in accordance with the methodology used in tedana [35-37], based on the monoexponential decay model

S_n = S_0 exp(−R2* E_n),

where S_n represents the signal intensity at a given echo time n, E_n represents the echo time in milliseconds, and S_0 represents the signal intensity at E = 0. The value of R2* is solved by a log-linear regression, with T2* = 1/R2*. T2*-based weights were then calculated as

w_n = E_n exp(−E_n / T2*) / Σ_{n′=1}^{N} E_{n′} exp(−E_{n′} / T2*).

The weights were used to estimate the mean of the fMRI noise processes across the echo times n at each time point, t = 1…T. Realised volatility at each time point, t = 1…T, was then estimated by calculating the weighted variance between the observations at the echo times n.
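A compact sketch of this estimation step under the monoexponential decay model above is shown below. The array shapes, names and the pooled log-linear fit are illustrative simplifications of our own; tedana fits R2* voxel-wise with its own routines.

```python
# Sketch: T2* fit and T2*-weighted realised volatility for one voxel.
# `signal` is (T, N): T time points, N echo times (in ms).
import numpy as np

def t2star_weights(signal, echo_times):
    # log-linear fit: log S_n = log S0 - R2* * E_n (pooled over time points)
    log_s = np.log(signal.mean(axis=0))
    slope, _ = np.polyfit(echo_times, log_s, 1)
    t2star = -1.0 / slope                          # T2* = 1 / R2*
    w = echo_times * np.exp(-echo_times / t2star)  # echo-time-based weights
    return w / w.sum()

def realised_volatility(signal, echo_times):
    w = t2star_weights(signal, echo_times)
    mean_t = signal @ w                            # weighted mean per TR
    # weighted variance across echoes at each time point t = 1..T
    return ((signal - mean_t[:, None]) ** 2) @ w

echo_times = np.array([12.0, 28.0, 44.0, 60.0, 76.0, 92.0, 108.0, 124.0])
rng = np.random.default_rng(1)
sig = 100 * np.exp(-echo_times / 40.0) + rng.normal(size=(200, 8))  # synthetic
v_t = realised_volatility(sig, echo_times)         # proxy volatility series
```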
The estimated T2*-weighted variance, v_t, served as a proxy for the unobserved volatility process and was used to investigate the smoothness of the fMRI noise series. As fMRI noise is believed not to exhibit the same exponential decay as true brain signal, we wanted to demonstrate that the T2*-weighting used did not unduly impact the present findings, and so we present analyses using non-weighted realised volatility data in the Supplementary Materials. The analyses using non-weighted data produced results which mirror those reported here.

Estimating roughness of realised volatility

We examined the roughness of fMRI noise volatility by adopting a neural network calibration method established in [31]. The rBergomi model was chosen to simulate the training data because it produces driftless, mean reverting processes which closely resemble fMRI data. Roughness of the volatility series was examined by estimating the H parameter. In addition to examining the roughness of the volatility paths, we also wanted to extract information about the volatility of volatility by simultaneously estimating the η parameter. If any of the fMRI noise volatility series had constant volatility, the estimated η would be 0; if the volatility was not constant, η > 0.

Neural network architecture

To estimate the roughness and the volatility of volatility of the fMRI noise volatility series, we used a one-dimensional feed-forward convolutional neural network (CNN) [31]. This approach has previously been shown to accurately estimate the Hurst parameter H and to outperform other methods, such as the least squares method, both in terms of accuracy, as measured using root mean squared error (RMSE), and speed. A further introduction to neural networks is given in Appendix A; very simply, one can think of a neural network as a composition of affine and non-linear functions that approximates a mapping of inputs to outputs.

The CNN consisted of three kernel layers with kernel size 20. The first convolutional layer had 32 kernels followed by a dropout layer with a dropout rate of 0.25, the second had 64 kernels followed by a dropout layer with a dropout rate of 0.25, the third had 128 kernels followed by a dropout layer with a dropout rate of 0.4, and the fourth, dense, layer had 128 units followed by a dropout layer with a dropout rate of 0.3. Leaky ReLU activation functions with α = 0.1 followed each layer, and max pooling layers with size 3 were added between the kernel layers. See [31] for the rationale of this architecture and the hyperparameter choices.

Neural network training and test

Altogether, 50,000 sample paths of the normalised rBergomi model log-volatility process,

ṽ_t := η ∫_0^t (t − s)^(H − 1/2) dW_s,

were simulated. For each of the 50,000 sample paths simulated, 200 time points were used, with H ~ Unif(0, 1.0) and η ~ Unif(0, 3.0). A hyperbolic tangent was used to scale η. Stone provides a rigorous mathematical justification for this set-up [31, Section 3.2, p. 382]. The sample paths were generated using classical methodology which utilises the Cholesky decomposition to achieve the exact distribution of the log-volatility paths (https://github.com/jennileppanen/fmri_vol). The sampling was conducted in a manner that ensured that each sample path had a unique H and η, enabling better fitting to varying fMRI noise log-volatility processes. We took a nested cross-validation approach whereby the simulated sample paths were first divided into training and test datasets with a 30% holdout.
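For concreteness, the architecture described above might be expressed in Keras roughly as follows. The exact layer ordering, input shape, output head and training configuration (Adam optimiser, mean-squared-error loss) are assumptions on our part; see [31] for the precise setup.

```python
# Illustrative Keras sketch of the 1-D CNN described above; details such as
# the output head and training configuration are assumed, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(seq_len=200):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len, 1)),       # one log-volatility path
        layers.Conv1D(32, 20, padding="same"),
        layers.LeakyReLU(0.1),
        layers.MaxPooling1D(3),
        layers.Dropout(0.25),
        layers.Conv1D(64, 20, padding="same"),
        layers.LeakyReLU(0.1),
        layers.MaxPooling1D(3),
        layers.Dropout(0.25),
        layers.Conv1D(128, 20, padding="same"),
        layers.LeakyReLU(0.1),
        layers.MaxPooling1D(3),
        layers.Dropout(0.4),
        layers.Flatten(),
        layers.Dense(128),
        layers.LeakyReLU(0.1),
        layers.Dropout(0.3),
        layers.Dense(2),                           # outputs: (H, eta) estimates
    ])
    model.compile(optimizer="adam", loss="mse")    # assumed training setup
    return model

model = build_cnn()
model.summary()
```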
The training dataset was then further divided into training and validation sets with a 20% holdout. Thus, the training dataset consisted of 28,000 training and 7000 validation sample paths, and the final test dataset included 15,000 sample paths.

Evaluation of CNN H and η parameter estimation

The performance of the trained CNN was assessed by calculating the RMSE between the predicted parameters (Ĥ, η̂) and the true parameters (H, η) of the test sample paths. In the present study, the test error was small, RMSE = 0.065, and the relationships between predicted and true H and η in Fig. 3 were strongly linear. As noted above, the parameter H governs three aspects of fractional Brownian motion at the same time: the self-similarity, the roughness of the paths (the oscillation) and the autocorrelation of the time series. Therefore, the performance of the CNN was additionally evaluated by examining the agreement between the estimated H parameters and the memory in the fMRI noise log-volatility series. Agreement between the CNN H parameter estimates and memory was evaluated by conducting a Spearman correlation test. Memory was estimated by fitting an autoregressive fractionally integrated moving average (ARFIMA)[0, d, 0] model to the log-volatility data and calculating the d parameter:

(1 − B)^d x_t = ε_t,

where B is the backshift operator and d represents the memory parameter to be calculated. 0 < d < 0.5 indicates the series is a stationary, mean reverting long memory process, while d < 0 indicates the series is an anti-persistent short memory process. 0.5 < d < 1 indicates the series is a mean reverting, non-stationary long memory process. Although the relationship between the smoothness of log-volatility processes and long memory is a complicated one [18,38-40], this correlation gives us an indication of the performance of the CNN in estimating H.

Estimated roughness and volatility of volatility

The summary statistics of the estimated H parameter of the realised log-volatility series in the phantom and human (ventricle) data are presented in Table 1. On average the log-volatility series were rough, but the average H parameter estimates were somewhat higher in the data extracted from the ventricles (human) than in the phantom data. This could be because the phantom data should contain only scanner noise, while the data extracted from the ventricles should include both scanner and physiological noise. In the phantom data there was also substantial variability in the H parameter estimates. The maximum estimated H remained under 0.5, suggesting that despite the substantial variability, fMRI noise was rough across the phantoms. Similar variability was not observed in the data extracted from the ventricles, and the maximum H parameter estimates were smaller in the human data. Finally, all η > 0, suggesting that none of the fMRI noise volatility processes had constant volatility.

Some edge effects were observed, such that the estimated H parameters were generally larger near the edges of the phantoms than in the middle. In both phantoms the voxels with the maximum H parameter estimates were found near the edge and appeared to form large clusters. In the middle of the phantoms the H parameter estimates were generally very small, yet varied. Similar edge effects were not observed in the resting state data extracted from the ventricles, which could be due to the fact that the ventricles reside close to the middle of the brain. Still, as in the phantom data, it was apparent that the H parameter estimates varied from voxel to voxel within the ventricles, suggesting spatially non-constant volatility was present.
Spatial pattern in estimated Hurst parameters

Interestingly, phantom 1 had a small region near the top where the signal intensity was lower than in the nearby voxels, suggesting signal dropout due to a possible air bubble (Supplementary Fig. 14). This area consisted of four voxels, and one of these voxels had the largest H parameter estimate in phantom 1. This voxel also represents the centre of the cluster near the top of the phantom in Fig. 4A. No such signal dropout was seen in phantom 2.

[Fig. 4: Multi-slice view of scanner 1 phantom 1 (A) and scanner 2 phantom 2 (B) with log-volatility processes corresponding to maximum, minimum and mean H estimates.]

[Figure: Multi-slice view of data extracted from the ventricles of two participants, sub-17,821 (A) and sub-21,300 (B), from the ds000258 dataset, with log-volatility processes corresponding to maximum, minimum and mean H estimates.]

[Figure: Multi-slice view of data extracted from the ventricles of two participants, sub-28 (A) and sub-30 (B), from the ds000210 dataset, with log-volatility processes corresponding to maximum, minimum and mean H estimates.]

Agreement between the correlation governed by the Hurst parameter H and ARFIMA autocorrelation

As mentioned earlier, the Hurst parameter governs not only the roughness of a volatility path, but also the autocorrelation function of the volatility time series. In this section we test for agreement between the autocorrelation predicted by the rough volatility model and the autocorrelation predicted by a standard ARFIMA model. As shown in Table 2, the correlation was significant and positive in both phantoms and in the data extracted from the ventricles. [Table 2 note: ρ = Spearman correlation coefficient; ds000258 and ds000210 refer to the two OpenNeuro datasets used.] The relationship between the estimated H parameter and the ARFIMA[0, d, 0] memory parameter, d, of the log-volatility processes is presented in Supplementary Figs. 7 and 8. The correlations between the H and d parameters were more variable in the resting state data extracted from the ventricles, which may be related to the fact that the size of the ventricles, and thus the number of voxels inside them, varied between participants; participants with more voxels inside the ventricles had higher correlations.

Discussion

The aim of the present empirical study was to examine the roughness of fMRI noise volatility. We used multi-echo scans of two phantoms from two different MRI scanners to estimate realised volatility. We also used human data from two separate multi-echo resting state datasets to examine whether patterns observed in the phantom data were present in in vivo noise, specifically focusing on signal extracted from the ventricles. The roughness of the logarithm of the realised volatility processes was somewhat lower in the human data. Overall, all H < 0.5, suggesting that across the phantom and human data, volatility was consistently rough. The present findings suggest that the log-volatility of fMRI noise appears to behave like fractional Brownian motion with H parameter estimates between 0.03 and 0.05. As anticipated, these findings go some way to mimic the rough volatility pattern observed in high frequency financial data, with Hurst parameter estimates varying between 0.02 and 0.14 [17,18]. Thus, it appears that fMRI scanner noise on average does not have large, sustained fluctuations in volatility over time, i.e. the noise does not exhibit sustained periods of high volatility followed by sustained periods of low volatility.
Instead, the noise processes exhibit rapid spikes and oscillations, indicating more "severe" heteroscedasticity. The heteroscedasticity observed in the phantom data cannot be explained by head motion, physiology, or other known sources of non-constant noise, and cannot be easily entered into an analysis as a covariate because scanner noise processes cannot be directly observed during brain scanning. These findings challenge the assumption that fMRI noise has constant volatility and add to the steady accumulation of literature exploring heteroscedasticity in fMRI noise [1-6], further highlighting the importance of taking non-constant noise into consideration during analysis of the time series data.

The impact of rapidly spiking and oscillating volatility on fMRI data analysis has recently been investigated. One study examined the impact of heteroscedasticity introduced by simulated head motion spikes on fMRI data analysis [1]. The authors found that a linear modelling approach based on weighted least sum of squares (WLSS) was able to accurately model impulse responses to stimuli if the heteroscedasticity was constant across all voxels [1]. However, when the number of head motion spikes varied from voxel to voxel, the WLSS failed to accurately detect impulse responses. These findings led the authors to propose a heteroscedastic general linear model which incorporates head motion covariates. However, our findings suggest not only that heteroscedasticity can also be present in the scanner noise, but also that the pattern of heteroscedasticity varies from voxel to voxel, with different patterns of spiking and rapid oscillations. Furthermore, our findings indicate that similar patterns in volatility can be observed in the human data, which can be taken to suggest that the heteroscedasticity observed in scanner noise is also present in in vivo noise. Taking the above findings by [1] into consideration, it is possible that such spatially non-constant heteroscedasticity in fMRI noise could influence data analysis.

Interestingly, to our knowledge only a few studies to date have examined the impact of heteroscedastic noise not explained by head motion or physiology on fMRI data analysis. In all of these studies the authors examined the usefulness of deterministic autoregressive conditional heteroscedasticity (ARCH) and generalised ARCH (GARCH)-type models to aid the investigation of time-dependent functional connectivity [6,25,41]. The studies specifically investigated GARCH(1,1) models, with only one autoregressive and one moving average lag, suggesting the authors assumed the volatility would exhibit short memory. Simulation and real data experiments both showed that incorporating a GARCH(1,1) model into the analysis helped to accurately model time-dependent functional connectivity. Traditional approaches, including sliding window and exponentially weighted moving average models, on the other hand, were found to produce more false positive findings [6,25,41]. Moreover, previous Monte Carlo experiments have shown that heteroscedasticity violates the assumptions of not only correlation tests but also linear regressions in ways that can produce false positive findings [42-44]. Taken together with the present findings, we believe that further investigation of the impact of short memory heteroscedasticity on various fMRI data analysis methods, as well as the selection of the most efficient and accurate methods to model the time-dependent volatility, is of interest.
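For readers who want to experiment with this idea, a GARCH(1,1) fit of the kind discussed above can be run with the Python arch package; the series below is synthetic, standing in for a detrended voxel time series rather than data from the study.

```python
# Sketch: fitting a GARCH(1,1) model, as in the time-varying functional
# connectivity studies discussed above, to a detrended mean-zero series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
noise = rng.standard_normal(500)           # stand-in for a detrended voxel series

res = arch_model(noise, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(res.params)                          # omega, alpha[1], beta[1]
cond_vol = res.conditional_volatility      # fitted time-varying volatility
```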
Such further work could ultimately help improve both resting state and task-based data analysis as the noise in the time series becomes better understood [3,45].

The present findings also show that the roughness of fMRI noise is not constant across regions in the phantom, with the edges showing greater smoothness in the volatility relative to the centre of the phantom. This suggests that the volatility near the edges of the phantom was more likely to exhibit sustained periods of high and low volatility rather than rapid oscillations or spiking behaviour. To an extent these findings mirror those from previous work examining long-range dependence in the mean of fMRI noise [46,47]. Previous studies have found that the long-range dependence near the edges of the phantom has an estimated H > 0.5, indicating persistence and sustained periods of high and low mean in the series [46]. Similar edge effects have also been observed in real brain scans [46,48]. Taken together with the present findings, this suggests that fMRI data near the edges of an image appear to be more complex than those near the centre. Such time-dependent behaviour in the noise near the edges complicates data analysis, as these effects violate the assumptions of most time series modelling methods and can lead to both spurious regressions and spurious correlations [42-44,49-53]. Further investigation of the impact of the reported edge effects on fMRI data analysis methods is of interest.

It is also important to note that in the present study, one of the phantoms had a small region of signal dropout, possibly indicating the presence of an air bubble. This region was the centre of one of the clusters where the smoothness of the volatility process was greater than in nearby regions. Previous studies have also found that air bubbles in phantoms can lead to a drop in signal intensity, which has been suggested to be due to susceptibility artifacts at the air-water boundary [54]. Air bubbles can also introduce phase errors and related magnetic field heterogeneity [55,56], which could go some way to explain the larger H estimates in one of the clusters in one of the phantoms. Interestingly, such an effect was found in only one of the phantoms, suggesting that not all of the edge effects can be explained by air bubbles. Still, further investigation of the spatial pattern of volatility in fMRI noise in gel phantoms prepared with warm water, which are less susceptible to air bubbles [57], would be of interest.

The present study is not without limitations. First, the CNN was trained using simulated data, as it was not possible to use true realised volatility data because the true H and η of such data are unknown. Although such methods have been previously used in the field of financial mathematics and have been shown to outperform alternative models, such as those based on the sum of least squares [31], a model is always a simplification of reality. However, we argue that even though the simulated log-volatility paths used in the training of the CNN may indeed differ from the real data, they are no more different than the constant volatility assumption of traditional fMRI time series analysis methods. Additionally, we chose to use the mean reverting and driftless rBergomi model to simulate data because it closely reflects the behaviour of fMRI data. Additionally, the resting state data used to examine whether the volatility patterns observed in the phantoms could also be seen in noise in vivo could have been influenced by head motion.
Although we took steps to minimise the impact of head motion on the analysis, it is possible that the H parameter estimates were still influenced by head movements. However, considering that the pattern of volatility observed in the resting state data extracted from the ventricles largely mirrored that seen in the middle of the phantoms, we believe it can be concluded that at least some of the rapidly oscillating heteroscedastic scanner noise is present in vivo. In the present study, realised volatility was estimated after slice timing correction and realignment, but no further preprocessing or denoising steps were taken prior to estimation. This was done in an attempt to mirror standard multi-echo preprocessing pipelines, where the echoes are normally combined prior to further preprocessing steps, such as smoothing and independent component analysis-based denoising [34,36,58]. This meant that we were unable to examine the impact of de-noising on realised volatility. Additionally, realised volatility was estimated using only eight echo time points, as this was the maximum number we were able to collect. In finance, on the other hand, it is common to use high-frequency asset price data, with sub-second granularity, to estimate daily volatility. It is difficult to ascertain whether our use of lower-frequency data to estimate realised volatility had an impact on the present findings. Finally, the phantom and human data were acquired using different 3 Tesla MRI units. It is possible that the volatility of fMRI noise from scanners with different field strengths might vary, and further investigation of this may be of interest.

Conclusions

The aim of the present study was to examine the smoothness of the estimated realised volatility of fMRI noise, as well as to examine whether patterns identified in the phantom scans were present in human data. This was done by conducting two multi-echo scans of two phantoms using two different MRI scanner units and by using publicly available multi-echo resting state data. The multi-echo data were used to estimate realised volatility via the T2*-weighted variance. The smoothness of the realised volatility data was estimated by following cutting-edge methods developed in the field of financial mathematics, namely by training a CNN to predict the Hurst parameter, H. The findings showed that on average scanner noise is very rough, with H ≈ 0.03, and that the roughness of the volatility data varied spatially across the phantoms. In both phantom scans the H estimates were larger near the edges, suggesting that volatility was smoother in these regions. Similar patterns of variability, with the exception of the large edge effects, were observed in the resting state data extracted from the ventricles. Thus, it seems that rapidly oscillating, spatially non-constant heteroscedastic noise is present in in vivo noise as well. Taken together, the present findings further challenge the assumption that fMRI scanner noise has constant volatility and highlight the need for further research to investigate how to effectively model the heteroscedasticity during time series analysis.

Declaration of Competing Interest

None.

Appendix A

In an artificial neural network, inputs to each layer, other than the input layer, are outputs from previous layers. A layer is composed of a number of nodes, and each node in a given layer is connected to the nodes in the subsequent layer, thus forming a network; each edge in this network has a weight associated with it. The first processing unit is called the input layer, and the final processing unit is the output layer.
The processing unit or units between the input layer and the output layer are referred to as hidden layers; typically artificial neural networks have more than one hidden layer. Convolutional neural networks (CNNs) are a class of artificial neural networks where the hidden layers can be grouped into different classes according to their purpose; one such class of hidden layer is the eponymous convolutional layer. Below we describe the classes of hidden layers used in our CNN. Of course, this list is not exhaustive, and there exist many classes of hidden layers that we omit for brevity. Note also that we describe a CNN in the context of the problem we are trying to solve, where the input data are one-dimensional vectors. CNNs can of course also be used on higher-dimensional input data, but the fundamental structure and the different roles of the hidden layers do not change.

• Convolutional Layer: In deep learning, the convolution operation is a method used to assign relative value to entries of input data, in our case one-dimensional vectors of time series data, while simultaneously preserving spatial relationships between individual entries of the input data. For a given kernel size k and an input vector of length m, the convolution operation takes entries 1, …, k of the input vector and multiplies them element-wise by the kernel, whose length is k. The sum of the entries of the resulting vector is then the first entry of the feature map. This operation is iterated m + 1 − k times, thus incorporating every entry of the input data vector into the convolution operation. The output of the convolutional layer is the feature map. Clearly, the centre of each kernel cannot overlap with the first and final entries of the input vector. Zero-padding, sometimes referred to as same-padding, preserves the dimensions of input vectors and allows more layers to be applied in the CNN: zero-padding is simply the extension of the input vector with the added first and final entries set to 0, while leaving the other entries unchanged. For example, the input vector (1, 2, 1, 0, 0, 3) becomes (0, 1, 2, 1, 0, 0, 3, 0) after zero-padding. (A toy sketch of the convolution and zero-padding operations appears after this list.)

• Activation Layer: The activation layer is a non-linear function σ that is applied to the output of the convolutional layer, i.e. the feature map; the purpose of the activation layer is indeed to introduce non-linearity into the CNN. Examples of activation functions include the sigmoid function and the hyperbolic tangent function. In our CNN we use the 'LeakyReLU' activation function, defined as

σ(x) = x for x > 0, and σ(x) = αx for x ≤ 0.

The LeakyReLU activation function allows a small positive gradient when the unit is inactive.

• Pooling Layer: Pooling reduces the dimension of the feature map while retaining its most salient entries. In max pooling, which we use in our CNN, the maximum of each group of p neighbouring entries in the feature map is retained. Other pooling techniques apply the same idea, but use different functions to evaluate the neighbouring p entries in the feature map. Examples include average pooling, and L2-norm pooling, which in fact uses the Euclidean norm in mathematical nomenclature.

• Dropout Layer: Dropout is a well-known technique incorporated into CNNs in order to prevent overfitting. Without the addition of a dropout layer, each node in a given layer is connected to each node in the subsequent layer; dropout temporarily removes nodes from different layers in the network. The removal of nodes is random and determined by the dropout rate d, which gives the proportion of nodes to be temporarily dropped. Note that dropout is only implemented during training; during testing, the weights of each node are multiplied by the dropout rate d.
• Dense Layer: Also referred to as the fully connected layer; as the name suggests, each node in the layer's input is connected to each node in its output. After being processed by the convolutional, activation, pooling, and dropout layers, the extracted features are mapped to the final outputs via the dense layer, and an activation function is then applied. This activation function is chosen specifically for the task that the CNN is required to execute, i.e. binary/multi-class classification, or regression to output a continuous value. The final output from the dense layer has the same number of nodes as the number of classes in the output data.
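As promised above, here is a toy NumPy sketch of the 1-D convolution and zero-padding operations, using the same example vector; real convolutional layers also learn the kernel weights and add a bias term, and the kernel values below are purely hypothetical.

```python
# Toy illustration of the convolution described in the Convolutional Layer
# bullet: a kernel of size k slides over an input of length m, producing a
# feature map of length m + 1 - k, or length m after zero-padding.
import numpy as np

def conv1d_valid(x, kernel):
    k = len(kernel)
    # element-wise multiply each window by the kernel, then sum
    return np.array([x[i:i + k] @ kernel for i in range(len(x) + 1 - k)])

x = np.array([1, 2, 1, 0, 0, 3])        # toy input vector from the text
kernel = np.array([1, 0, -1])           # hypothetical kernel weights

print(conv1d_valid(x, kernel))          # feature map of length 6 + 1 - 3 = 4
x_padded = np.pad(x, 1)                 # zero-padding: (0,1,2,1,0,0,3,0)
print(conv1d_valid(x_padded, kernel))   # same-padding output of length 6
```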
Extended Intervened Geometric Distribution

Here we develop an extended version of the modified intervened geometric distribution of Kumar and Sreeja (The Aligarh Journal of Statistics, 2014) and investigate some of its important statistical properties. Parameters of the distribution are estimated by various methods of estimation, such as the method of factorial moments, the method of mixed moments and the method of maximum likelihood. The distribution has been fitted to a real-life data set to illustrate its practical relevance.

Introduction

Intervened type distributions have found many applications in several areas, such as epidemiological studies and life testing problems. In epidemiological studies, health agencies take various preventive actions. The information concerning the effect of such actions taken by the agencies can be statistically analyzed by intervened type distributions. In life testing problems, the failed items during the observational period are either replaced or repaired. Such actions change the reliability of the system, as only some of its components have a longer life. The impact of such actions can be studied by intervened type distributions. Intervened type distributions such as the intervened Poisson distribution (IPD), the intervened geometric distribution (IGD) and the modified intervened geometric distribution (MIGD) have been studied by several authors; for example, see Shanmugan [1,2], Huang and Fung [3], Scollnik [4], Dhanavanthan [5,6], Kumar and Shibu [7-15], Bartolucci et al. [16] and Kumar and Sreeja [17]. In this paper we consider a new class of intervened geometric distributions suitable for multiple intervention cases and name it the extended intervened geometric distribution (EIGD), which contains the MIGD as a special case.

The paper is organized as follows. In Section 2, we present a model leading to the EIGD and obtain expressions for its probability mass function, mean and variance. We also obtain a recurrence relation useful for the computation of probabilities of the EIGD. In Section 3, we consider the estimation of the parameters of the EIGD by the method of maximum likelihood, and the distribution is fitted to a real-life data set to highlight the usefulness of the model. We need the following series representation in the sequel, where [a] represents the integer part of "a", for any a > 0.

Estimation

Here we discuss the estimation of the parameters of the EIGD by various methods of estimation, such as the method of factorial moments, the method of mixed moments and the method of maximum likelihood. We assume that m is a fixed positive integer and the parameters ρ₁, ρ₂ and θ of the EIGD are estimated for possible values of m.

Method of factorial moments

In the method of factorial moments, we equate the first three factorial moments of the EIGD to the corresponding sample factorial moments, say m′₁, m′₂ and m′₃, and thereby obtain the system of equations (16), (17) and (18), in which δ₁, δ₂ and δ₃ are given in (8). Now the parameters of the EIGD are estimated by solving the non-linear equations (16), (17) and (18).

Method of mixed moments

In the method of mixed moments, the parameters are estimated by using the first two sample factorial moments and the first observed frequency of the distribution. That is, the parameters are estimated by solving the following equation together with (16) and (17),
where Λ(ρ₁, ρ₂, θ) is as defined in (3).

We present the fitting of the intervened geometric distribution (IGD) and the extended intervened geometric distribution (EIGD) for particular values of m to the data set on the count of the number of European red mites on apple leaves, taken from Jani and Shah [18]. We estimate the parameters by the method of factorial moments, the method of mixed moments and the method of maximum likelihood. We have computed the values of the χ² statistic for each model, and the numerical results are summarized in Table 1, Table 2 and Table 3. From the tables it is obvious that the EIGD with m = 3 gives a better fit compared to the IGD as well as to the cases m = 1, m = 2 (MIGD) and m = 4.

Observed and expected frequencies for the European red mite data:

Count   Observed   IGD   EIGD(m=1)   EIGD(m=2)   EIGD(m=3)   EIGD(m=4)
1       38         27    28          26          40          29
2       17         22    23          17          16          19
3       10         14    14          15          6           10
4       9          8     7           9           10          5
5+      6          9     8           13          8           17
Total   80         80    80          80          80          80
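As an illustration of the goodness-of-fit comparison, the sketch below computes Pearson χ² statistics from the observed and expected frequencies tabulated above. Because the expected counts shown in the table are rounded, these χ² values are only approximate.

```python
# Pearson chi-square for each fitted model against the observed red mite
# counts, using the (rounded) expected frequencies from the table above.
import numpy as np

observed = np.array([38, 17, 10, 9, 6])
expected = {
    "IGD":       np.array([27, 22, 14, 8, 9]),
    "EIGD m=1":  np.array([28, 23, 14, 7, 8]),
    "EIGD m=2":  np.array([26, 17, 15, 9, 13]),
    "EIGD m=3":  np.array([40, 16, 6, 10, 8]),
    "EIGD m=4":  np.array([29, 19, 10, 5, 17]),
}

for model, exp_freq in expected.items():
    chi2 = ((observed - exp_freq) ** 2 / exp_freq).sum()
    print(f"{model}: chi-square = {chi2:.2f}")
# EIGD with m=3 gives the smallest statistic, matching the paper's conclusion.
```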
Creating Successful Campus Partnerships for Teaching Communication in Biology Courses and Labs

Creating and teaching successful writing and communication assignments for biology undergraduate students can be challenging for faculty trying to balance such work with the teaching of technical content. The growing body of published research and scholarship on the effective teaching of writing and communication in biology can help inform such work, but there are also local resources available to support writing within biology courses that may be unfamiliar to science faculty and instructors. In this article, we discuss common on-campus resources biology faculty can make use of when incorporating writing and communication into their teaching. We present the missions, histories, and potential collaboration outcomes of three major on-campus writing resources: writing across the curriculum and writing in the disciplines initiatives (WAC/WID), writing programs, and writing centers. We explain some of the common misconceptions about these resources in order to help biology faculty understand their uses and limits, and we offer guiding questions faculty might ask the directors of these resources to start productive conversations. Collaboration with these resources will likely save faculty time and effort on curriculum development and, more importantly, will help biology students develop and improve their critical reading, writing, and communication skills.

INTRODUCTION

Communicating ideas and discoveries in biology is arguably as essential and challenging as conducting the science itself. Writing and communication tasks can be assigned both to help biology students learn area content and to prepare them to conduct and communicate research effectively. Both the 2012 American Society for Microbiology (ASM) Curriculum Guidelines (1) and the 2009 American Association for the Advancement of Science (AAAS) Vision and Change report on undergraduate biology education (2) recognize the importance of effective communication, particularly emphasizing the need to facilitate effective collaboration across disciplines. Well-designed writing assignments are correlated with reports of persistent gains in higher-order learning, integrative learning, and reflective learning (3), according to a recent survey of over 70,000 US undergraduates. This study further suggests that it is the quality of the assignment, rather than the required amount of writing, that matters most.

Introducing a well-designed writing assignment, even a short one, into a microbiology or biology course can be quite challenging. Adding a writing assignment to a course typically requires taking something else out. If time for such work can be found, questions emerge regarding resources and approach: How do I develop a well-designed assignment? What can I assume my students already know about writing? What kinds of instruction or resources will biology students need to write and communicate successfully? How do I make sure students receive adequate guidance and feedback, especially if they struggle with writing? Such challenges and questions may seem daunting or even overwhelming. This special issue of the Journal of Microbiology and Biology Education offers a wealth of evidence-based ideas and guidance, and the works cited by these articles point toward the longer history of pedagogy scholarship on communication in science.
However, the increasingly robust scholarship on teaching communication in biology should not lead you to ignore the valuable local resources that exist right on your campus. Unfortunately, these local resources may be partially or totally unknown to many faculty (4). This article seeks to introduce and explain the resources for the teaching and learning of communication in biology that exist on most US college campuses. More specifically, we will discuss three clusters of resources: 1) writing across the curriculum (WAC) and writing in the disciplines (WID) initiatives, 2) writing programs, and 3) writing centers. Since these resources are typically housed outside of biology departments and science divisions, they may be unfamiliar to biology faculty who could benefit from them. Our goal is to help you understand what to expect and how to maximize what you and your students get out of each possible collaboration. In what follows, we detail the missions, histories, and approaches of each resource. Table 1 summarizes these characteristics and presents specific possible outcomes of collaborations between biology faculty and these resources.

WRITING ACROSS THE CURRICULUM AND WRITING IN THE DISCIPLINES INITIATIVES

Many universities and colleges have formal or informal initiatives to promote the teaching of academic writing in a diverse array of classes. Often, these initiatives have directors or consultants whose job responsibilities include collaborating with faculty on writing assignment design and implementation. A study of the prevalence of these programs from 2006 to 2008 revealed that 51% of US colleges had such a program and 27% had plans to build one (16). Broadly speaking, such programs have gone through three major stages: 1) the grant-funded workshop model of the 1970s and 1980s, 2) the institutionalized writing-intensive course-based model of the 1980s and 1990s, and 3) the practice of embedding writing in as wide an array of courses as possible, which began in the 2000s and continues to develop today (17). These initiatives share the goal of promoting student writing in more courses, but their names denote different emphases. We will discuss "writing across the curriculum" and "writing in the disciplines" programs in more detail.

Writing across the curriculum programs emphasize the need for students to write throughout their coursework. These programs prioritize writing as a tool for active learning, and they often encourage the inclusion of writing-to-learn assignments in which the process of writing is important for learning, but the written product itself is less meaningful. Randy Moore's work from the mid-1990s offers more detailed explanations and evidence supporting this approach in science courses (18) as well as specific descriptions of how it can be incorporated into biology courses (8).

Writing in the disciplines (WID) programs, in contrast, emphasize the final product of writing and seek to help students gain mastery in the specific genres that characterize professional scholarly communication in a field. Instead of writing-to-learn, they emphasize writing-as-professionalization. These programs are attuned to the ways disciplinary values, habits of mind, research questions, and methods manifest themselves in the types of writing produced in that field. They take seriously the idea that in teaching students to write like biologists, we teach them to think like biologists, and that this is not incidental to their education.
Writing in the disciplines programs encourage content-area experts, rather than outside lecturers, to teach writing, seeing their fluency in disciplinary discourse as fundamental to this task. An exemplar of the WID philosophy can be found in Moskovitz and Kellogg's explanation of how to effectively assign writing in laboratory courses (19).

The primary goal of WAC/WID professionals is to help faculty successfully design and implement writing assignments in a wide variety of courses. The 2014 statement of WAC Principles and Practices, which elaborates on the history and goals of these initiatives, exemplifies the current tendency toward blending the two approaches. Directors of these programs are highly motivated to see that writing is not just assigned to students, but that it is also effectively taught. This can make them incredibly valuable interlocutors. They should be familiar with the most common challenges that an instructor assigning writing in a science course will be negotiating and with a host of ways to navigate them. They may have access to models of similar assignment materials or activities you can adapt or use as inspiration, and they are likely to know who on your campus is doing similar work in the classroom. At your invitation, these experts will typically be willing to review your course materials and anticipate the problems you may have before you encounter them.

Writing across the curriculum and writing in the disciplines faculty and staff may be in varied locations and have a variety of disciplinary backgrounds. Many formal WAC/WID programs will be housed in a writing or English department, but they could also be attached to a writing center or housed in an academic dean's or provost's office. The faculty and staff in charge of these initiatives may have PhDs in rhetoric and composition (also called writing studies), having trained to do research and teaching in exactly this area. Others will have come to this work after advanced training in some other field. Some are scientists or engineers who have become interested in pedagogy and communication. You are likely to find them eager to learn about your own experiences with and understanding of communication in your field, and your collaboration may well begin with such a discussion. The level of collaboration offered can vary as well. Many WAC/WID colleagues will be willing collaborators on developing assignments or other course materials, while others may prefer to restrict their support to offering feedback on materials you develop. What should be true regardless of their location and background is that they will be excited to understand and support your efforts to teach writing to biology students and able to connect you with varied resources to support those efforts.

WRITING PROGRAMS

One of the most fundamental questions we ask ourselves when designing any course will always be: what can I expect my students to already know on Day 1? Answering the question about prior knowledge is more complex when it comes to teaching writing and communication, but research suggests that biology instructors benefit from having explicit information about students' prior knowledge about communication and actively working to build on that knowledge (20).
Research shows that faculty often take for granted that the knowledge, skills, and habits of mind that students have developed as writers and speakers in prior coursework will easily and automatically transfer into their work in new contexts, including new disciplinary contexts. This common assumption of automatic transfer has been proven wrong by education researchers for over a century (21), and knowledge about communication is no exception. Recent studies of the transfer of knowledge about academic writing show that it is neither automatic nor simple and that transfer should be actively facilitated by teachers at both ends of the transfer (22-24).

One step teachers can take to establish reasonable expectations of their students' writing and communication skills is to explicitly discuss their prior experiences as writers with them or to ask them to respond to a survey on that topic prior to the start of the course. You might consider seeking the following information from your students:

• Does the student have prior instruction about or experience with a particular genre of academic writing that will be featured in this course?
• What are the student's self-assessed writing and communication strengths and limits?
• What goal(s) do students have for themselves as communicators in their field?
• What writing skills do students value and/or expect to use in their future careers?

TABLE 1. Examples of possible outcomes of collaboration with faculty and staff in each program.

Writing across the curriculum (WAC). Mission: promotes assigning writing as a part of active learning across all disciplines and courses. Possible outcomes of collaboration:
• Helping with developing new writing assignments to encourage active learning, such as 1) short, reflective writing assignments throughout a course to facilitate learning of complex topics in microbiology (5) or 2) a mini-review article addressed to a non-expert reader, to give students practice in identifying critical issues and putting complex biological concepts in clear, accurate terms (6)
• Assisting with the effective incorporation of more reading into a biology course, whether models of academic writing (7) and/or popular writing about biology (8, 9)

Writing in the disciplines (WID). Mission: encourages formal writing assignments that anticipate or mimic the real communication scholars and professionals do in the field. Possible outcomes of collaboration:
• Developing strategies for giving more effective feedback to biology writers (10)
• Helping create a new (or improve an existing) assignment to teach biology-specific discourse in lab reports (11) or oral scientific presentations of research projects (12)
• Introducing models for incorporating peer review in the professional style of a biological manuscript submission to improve student writing and increase understanding of research communication (13)

Writing programs. Mission: aim to introduce students to academic writing and discourse. Possible outcomes of collaboration:
• Gaining a better understanding of how/whether students are taught visual rhetoric in first-year writing, so that a discussion of designing effective biology figures builds on students' existing knowledge
• Understanding what training students have in narrative writing, so that a discussion of how and why scientists tell stories can help students differentiate scientific and humanistic approaches to narrative

Writing centers. Mission: support students as they work on varied writing projects across their college careers. Possible outcomes of collaboration:
• Developing a partnership with a writing fellows program that brings trained peer tutors into a course to assist students with a writing assignment (14) such as a lab report (11)
• Participating in tutor training and providing model biology papers to a writing center director, so that writing center tutors are well prepared to successfully assist your students (15)
• Having writing center staff develop and lead in-class or supplemental instructional workshops to give students an understanding of biology-specific genres, audiences, and styles

These questions and student responses may also be useful in guiding course design. For example, a survey may reveal that students are not familiar with the lab report genre or, more likely, that they do not appreciate the parallels between a lab report and a scientific journal article. In this case, the instructor might choose to build time into the syllabus for explicit discussion of the importance of a lab report as practice for future communication in the field's accepted discourse (11).

In addition to talking with and surveying your students about what they know regarding academic communication, we encourage biology instructors to reach out to directors of campus writing programs to learn more about those programs' goals and approaches to teaching communication. Such an overture can do a great deal to surface your own assumptions and prevent failed assignments. Here, we use "writing program" to refer to an organized curriculum designed to teach writing to undergraduates, most commonly administered in an independent writing department or an English department. The National Census of Writing reports that 96% of four-year colleges in the United States have a first-year writing requirement, the hallmark of most writing programs, and most campuses designate a writing program director who oversees these courses. These directors are most likely located within an English or rhetoric and composition department, though some schools have independent writing programs.

Currently, there are many competing approaches to teaching first-year writing. In some programs, the course may look much like an English course in which students study literary essays and attempt to write such essays themselves. Some programs take a rhetorical approach, teaching students rhetorical theories and asking them to apply those theories to a set of texts that varies widely by context. Other approaches attempt to teach disciplinary writing from the start, offering students a choice of very different first-year writing courses with varied disciplinary foci and corresponding writing assignments. Some newer approaches, like those based on the pedagogical movement called "writing about writing" and those emphasizing multimodal composition, will be significantly different still. A detailed discussion of these theories is outside the scope of this paper, but those seeking further details can consult Tate et al.'s A Guide to Composition Pedagogies (25).

It is really your local program that will matter to you, so we encourage you to reach out to the directors of this curriculum on your campus with specific questions. We suggest the following:

• What critical reading, writing, and library research skills can I expect my students to have gained in their previous courses?
• What are some important aspects of academic communication these courses are not able to address?
• Do students learn anything specific about academic writing in the sciences in these courses?
• What writing experiences and knowledge might I be expecting my students to already have that they are actually unlikely to have gained from prior coursework?
• What would you recommend I do to help students connect their prior learning about academic communication to the learning they will do in my class?

Going into this conversation, you should be aware that many writing program directors are used to fielding complaints from colleagues that "our students can't write" and being asked to account for students' failures. The fact that first-year writing courses tend to frustrate many stakeholders is well understood by those who direct these programs (26). The reality these directors confront is that no course can directly prepare a student to write the grant proposal the biologist assigns, the public policy white paper a political scientist assigns, or the artist's statement that a studio art professor assigns; the list goes on. Students often appear not to be able to write because they have not yet been introduced to a particular genre of writing, the intellectual situation that calls for that writing, and the audience that reads such texts. So, in entering this conversation with these colleagues, we advise that you begin in a positive way, making clear that you are interested in ensuring your teaching builds successfully on their work with students.

Understanding your students' existing knowledge of academic communication yields direct benefits for your teaching. It enables you to design a writing assignment that is challenging but not overwhelming. It gives you access to students' vocabulary for discussing academic communication, so you can build on key concepts they know and introduce new ideas in a way that will make sense. It also helps you anticipate what kinds of instruction or scaffolding students may need to successfully complete an assignment. Finally, it will aid in establishing a clear, fair rubric or grading standards for the work. Overall, knowing what your students know about academic writing can help prevent a failed assignment that frustrates both instructor and students.

WRITING CENTERS

Even when you have designed an assignment that is well tailored to your learning goals and your students' knowledge of academic writing, you still face the challenge of bringing students through the process of successfully completing the assignment. In this task, collaboration with a campus writing center can be a great asset. Writing centers are traditionally student-facing organizations that aim to help writers navigate varied writing tasks. They help students primarily through one-to-one tutoring, though in most cases this is not the sole resource they offer. And while writing centers happily help your struggling students, their mission is almost always to help all writers improve their work. One of the foundational texts in writing center studies, cited in nearly one-third of articles in the field's flagship journal (27), is Stephen North's "The Idea of a Writing Center" (28). North contends that many of his colleagues misunderstand the writing center's mission and attempts to clarify the key principles that animate most centers. The most common misconception about writing centers is that they are editing services focused on grammar and citation. This thinking leads faculty to send students to the writing center when their writing has significant sentence-level errors in order to have it "fixed." Few writing centers take an editing approach, however. As North put it, "our job is to produce better writers, not better writing" (28). The vast majority of writing centers are guided by this active-learning orientation.
In other words, writing center tutorial sessions typically seek to engage students in supportive, dynamic conversations so that students can improve their own writing. This does not mean a writing center tutor will not point out or explain how to correct a particular error in a paper. Rather, it means that the emphasis of the conversation is on teaching rather than editing. Even a successful tutorial session may leave many problems "unfixed," with the expectation that writers apply learning from the session to revise their own work and, when appropriate, that writers return for subsequent discussions. Most writing centers also emphasize the agency of the writer, asking writers to set the agenda for the conversation, which may mean that areas of a text that the tutor knows need work are not the focus of conversation. Many tutors will direct the writer's focus toward "higher-order concerns" like organization, understanding and addressing an intended audience, and the presentation of argument before attending to "lower-order concerns" like grammar and clarity. Each writing center will have a different philosophy and corresponding training about how "directive" or "non-directive" tutors should be.

Other misconceptions exist about writing center staff. Some faculty assume the tutors know everything about writing in all fields, while others assume they know little beyond basic grammar rules. The reality is that staffing of writing centers varies a great deal, and these differences affect the kinds of support they offer. Writing centers may exclusively employ peer undergraduate tutors, graduate student tutors, or professional tutors; many employ a mix of all three. In terms of disciplinary training, some centers are staffed entirely by tutors with majors or degrees in English or writing, while others employ tutors from a wide variety of disciplinary backgrounds, including social scientists and scientists. There is a long history in writing centers of viewing tutors as generalists who are trained broadly in academic discourse and prepared to respond meaningfully to any kind of academic writing. However, there is increasing acknowledgement that tutors can do more for writers when they understand the content, methods, and goals of the discipline of the writing (29). We advise that you inquire about the level, disciplinary background, and training of the staff at your center and set your own and your students' expectations accordingly. You might consider offering, if you are willing, to be part of helping to train the center's staff for working with writers in biology and with your students in particular.

Many writing centers do more than tutor. They may offer original resources on their websites and hold workshops and presentations for students in their centers. Some centers collaborate with faculty to develop materials or workshops tied to a specific class. Another common resource housed in writing centers is a "writing fellows" program, which embeds highly trained tutors into specific courses so that the tutor can help support the writers in that particular class. Writing center directors and staff are typically keen, as resources allow, to develop new programs and workshops in response to student or faculty need. Thus, we encourage you to talk with a writing center director not only about the center's approach to tutoring, but also about the other resources they have, or can develop, for your students.
CONCLUSION

While every university and college is unique, we have sought to characterize the most common campus resources that will be of use to biologists incorporating writing and communication into their teaching. Some campuses may lack one or more of these resources, while others may have resources beyond those described here. Though it takes some initial effort to make these inquiries and build new relationships outside your department, over time these collaborations are very likely to save you time and energy. More importantly, they will help us produce a generation of biologists who are keen critical readers, clear writers, and compelling speakers.
A system for inducing concurrent tactile and nociceptive sensations at the same site using electrocutaneous stimulation

Studies of the interaction between mechanoception and nociception would benefit from a method for stimulation of both modalities at the same location. For this purpose, we developed an electrical stimulation device. Using two different electrode geometries, discs and needles, the device is capable of inducing two distinct stimulus qualities, dull and sharp, at the same site on hairy skin. The perceived strength of the stimuli can be varied by applying stimulus pulse trains of different lengths. We assessed the perceived stimulus qualities and intensities of the two electrode geometries at two levels of physical stimulus intensity. In a first series of experiments, ten subjects participated in two experimental sessions. The subjects reported the perceived quality and intensity of four different stimulus classes on visual analogue scales (VASs). In a second series, we added a procedure in which subjects assigned descriptive labels to the stimuli. We assessed the reproducibility of the VAS scores by calculating intraclass correlation coefficients. The results showed that subjects perceived stimuli delivered through the disc electrodes as dull and those delivered through the needles as sharp. Increasing the pulse train length increased the perceived stimulus intensities without decreasing the difference in quality between the electrode types. The intraclass correlation coefficients for the VAS scores ranged from .75 to .95. The labels that were assigned for the two electrode geometries corresponded to the descriptors for nociception and touch reported by other researchers. We concluded that our device is capable of reliably inducing tactile and nociceptive sensations of controllable intensity at the same skin site.

Electronic supplementary material: The online version of this article (doi:10.3758/s13428-012-0216-y) contains supplementary material, which is available to authorized users.

The skin contains receptors for various sensory modalities (Hollins, 2010; McGlone & Reilly, 2010). However, the integration of information from these modalities is at present poorly understood. In particular, the interaction between mechanoceptive and nociceptive information is an interesting topic from neurophysiological, clinical, and psychophysical perspectives. Mechanoception and nociception can originate at the same skin site and can be the result of the same physical stimulus, but these types of information are processed along separate neural pathways before being integrated into a single percept. A stimulation method that could allow control of tactile and nociceptive modalities at the same site would provide insights into the relation between these modalities, for instance by allowing for the study of detection thresholds and localization accuracy.

Tactile and nociceptive stimulation is commonly performed by using mechanical and laser stimulation. Using these methods to study the interaction between mechanoception and nociception can be challenging, especially if the stimulation site is to be varied during the experiment. When performing computer-controlled experiments with either of these methods, highly specialized equipment is required, such as a pneumatically driven mechanical stimulation array (Pott et al., 2010; Trojan et al., 2010) or a mirror-scanner system for laser stimulation (Trojan et al., 2006).
Most importantly, mechanical stimulators would obstruct laser stimuli directed at the same stimulus site. We have developed a stimulation method that permits the stimulation of cutaneous tactile and fast nociceptive afferents at the same skin site (Steenbergen, Buitenweg, van der Heide, & Veltink, 2008). Our method employs electrocutaneous stimulation through a multichannel stimulator in combination with a compound electrode array. This approach allows for the application of complex spatiotemporal and multimodal stimulus patterns.

Because electrical stimulation directly activates afferent nerve fibers, rather than their sensory end structures (Bromm & Lorenz, 1998), such stimulation is often less selective in activating a specific afferent nerve fiber population than are other methods. The lack of selectivity in electrocutaneous stimulation can be overcome by choosing suitable electrode geometries. Electrical stimulation of Aβ afferents is easily achieved by using surface electrodes (see, e.g., Inui et al., 2003; Szeto & Saunders, 1982). Activation of Aδ afferents in hairy skin can be achieved by using short needle electrodes that slightly penetrate the epidermis (Inui, Tran, Hoshiyama, & Kakigi, 2002; Inui, Tran, Qiu, Wang, Hoshiyama, & Kakigi, 2002; Nilsson, Levinsson, & Schouenborg, 1997). This method is selective for Aδ afferents when using limited stimulus currents, with the stimuli mostly being labeled as pricking or tingling (Mouraux, Iannetti, & Plaghki, 2010).

Most commonly, the perceived stimulus strength of electrical stimulation is varied by changing the stimulus current. Unfortunately, increasing the current through surface electrodes leads to a higher probability of undesired activation of Aδ fibers, which is illustrated by the increase in the unpleasantness of stimuli with increasing amplitude (Janal, Clark, & Carroll, 1991). On the other hand, increasing the stimulus amplitude through needle electrodes leads to a higher probability of activating Aβ fibers (Mouraux et al., 2010). Instead, we chose to control the perceived stimulus strength by using pulse train modulation. Research by van der Heide, Buitenweg, Marani, and Rutten (2009) showed that it is possible to modulate perceived stimulus intensity by varying the number of applied stimulus pulses (NoP) in a pulse train. By repeatedly activating the same afferent nerve fibers, pulse train modulation mimics the way in which stimulus strength is coded in afferent nerve fibers following the regular activation of sensory end structures. By keeping the stimulus current constant, the same population of nerve fibers is activated at each stimulus level. Therefore, this method allows for varying the perceived stimulus strength while using a constant stimulus current that is at, or close to, the sensation threshold. This minimizes the probability of coactivation of cutaneous fiber populations other than the intended one.

For evaluating the stimulation method described above, a method was required that could detect small differences in perceived stimulus quality, for instance due to a gradual increase in undesired fiber population activity when increasing the stimulus strength. The scientific literature provides only a small number of methods for assessing the perceived quality of cutaneous stimuli, none of which were suitable for our study.
Janal et al. (1991) performed multidimensional scaling experiments on the dimensionality of painful and nonpainful electrocutaneous stimuli, and in a later study compared these stimuli to descriptive labels for painful and nonpainful stimuli (Janal, 1996). The results of this study cannot be used as the basis for an evaluation method of stimulus quality, since the authors influenced the painfulness of the stimuli by varying the stimulus voltages through the same electrode. Therefore, the dimensions of pain and intensity cannot be separated in their results. In Nahra and Plaghki (2003), subjects were asked to assign labels to painful laser stimuli. These stimuli, which included both Aδ and C components, were reported mostly as tingling or pricking. These descriptors were greatly reduced after applying a block on all myelinated fibers; this block also increased the response latencies. This suggests that these labels are associated with the perception of Aδ fiber activity. Mouraux et al. (2010) used a similar labeling procedure for assessing the perception of laser stimuli as well as surface and needle electrode stimuli. Pricking and tingling were often assigned to both the needle electrode and laser stimuli. The surface electrode stimuli were often labeled as touch or shock. The labeling procedure used in both Nahra's and Mouraux's studies does not allow for the detection of small shifts in quality that might be caused by varying the perceived stimulus strength. We therefore chose to use a visual analogue scale (VAS) for perceived stimulus quality.

In the present study, we evaluated the perceptions elicited by the stimulation method described above. Our first aim was to determine whether our stimulus method was capable of successfully inducing qualities that could be associated with touch and nociception independently of each other. Secondly, we were interested in whether pulse train modulation modifies the perceived stimulus strength without negatively affecting the quality of perception. Finally, we wanted to determine how reproducible these sensations are.

In a first series of experiments, we applied one- and five-pulse stimuli through both electrode types, leading to four different stimulus classes: needle electrodes with one and five pulses and disc electrodes with one and five pulses. Subjects reported the stimulus qualities and intensities on VASs. In a second series of experiments, a labeling procedure similar to the one used by Mouraux et al. (2010) was performed, in addition to the procedure employed in the first series. This labeling procedure allowed for comparing the results from the quality VAS with those from previous research. In each of the two series of experiments (first and second), subjects participated in two experimental sessions (A and B) on different days, which allowed for the assessment of reliability by calculating intraclass correlation coefficients (ICCs).

Subjects

For the first series of experiments, 13 subjects were recruited from the student population of the University of Twente. Three subjects were excluded because a bug in the experimental control software during one of the two sessions scrambled the order of the stimuli. The ten remaining subjects had a mean age of 26 years (standard deviation [SD] = 4, range 22-34 years), and two of the subjects were female. All subjects gave written informed consent prior to the first experimental session.
For six of the subjects, the time between Experimental Sessions A and B was 7 days; for the other four, the intervals were 3, 14 (two subjects), and 21 days.

Method

Apparatus

The four flat discs of the compound electrode were punched out of a stainless-steel sheet; the five needle electrodes were made from stainless-steel sewing needles. The disc and needle electrodes are spread evenly over a disk of 2.4-cm diameter (see Fig. 1, right panel). The needles protruded 0.5 mm from the electrode surface, and the discs were embedded in the surface. The base material for the compound electrode was Sylgard 184, a two-component silicone elastomer produced by the Dow Corning Corporation, Midland, Texas. The material is created by mixing the supplied base and curing agents, which results in a viscous fluid. This was cast in a mold in order to shape the compound electrodes. A photograph and a schematic depiction of the compound electrode are presented in Fig. 1. To facilitate the electrical contacts between the disc electrodes and the skin, the compound electrode was covered with a conducting pad that covered the disc electrodes but had holes at the sites of the needle electrodes. The electrodes of the same type (needles and discs) were wired in parallel. The stimulators used for generating the stimulus currents consisted of multiple channels that generated monophasic cathodic stimulus currents, the stimulus properties (amplitude, NoP, pulse width, and interpulse interval) of which could be configured for each channel independently. The compound stimulation electrode was fixed with tape on the dorsal side of the left lower arm, halfway between the wrist and elbow. A counter electrode (anode) was fixed to the left wrist.

Stimuli

During the experiment, the perceived stimulus strength was modulated using pulse train modulation; one- and five-pulse stimuli were used. This resulted in four different stimulus classes: needle electrodes with one and five pulses, and disc electrodes with one and five pulses. These stimuli will be referred to, respectively, as N1, N5, D1, and D5. All of the pulses were 0.21-ms cathodic square waves. The interpulse interval (IPI) of the five-pulse stimuli was 5 ms, making the duration of these stimuli 21 ms. Stimulation with the needle and disc electrodes was performed at 130 % of the sensation thresholds (see below), with the thresholds determined for each separate session. For one subject, the amplitude of the needle electrode stimuli was increased to 160 % of threshold during the second session because the stimuli were not perceived at the default level. A sham stimulus was included as a fifth "stimulus" class in order to determine whether subjects responded to cues other than the stimulus when the stimulus was not perceived. (A code sketch of these four stimulus classes is given at the end of the first-series procedure below.)

Determination of sensation thresholds

Separate sensation threshold currents of the disc and needle electrodes were determined at the start of each session using the method of limits (Gescheider, 1985). All of the stimuli were single, square-wave cathodic pulses with a pulse width of 0.21 ms. For the disc electrodes, the stimulus current was increased in steps of 0.3 mA, starting from zero. After subjects reported feeling a sensation, the current was lowered in steps of 0.1 mA until they stopped reporting a sensation. After this, the current was increased again in steps of 0.1 mA until the subjects reported feeling a sensation; this final detection current was recorded.
For the needle electrodes, a similar method was used: The current was increased in steps of 0.10 mA, then lowered in steps of 0.05 mA, then increased in steps of 0.01 mA. The sensation threshold for disc electrodes is generally higher than that for needles, so the difference in step size between the two electrode types ensured that the method neither took very long for the disc electrodes nor produced a large overshoot for the needles. The procedure was repeated three times (trials) for each electrode type. The sensation threshold of each electrode type was calculated by averaging the recorded final stimulus currents of the three trials.

VAS experimental procedure

During the experiments, the subjects were seated in front of a computer monitor. Following each stimulus, they were instructed to report the perceived stimulus intensity and quality by operating two VASs: one for the perceived quality of the stimuli, and one for the intensity (see Fig. 2). While the intensity scale is similar to VASs commonly used for the assessment of pain intensity, the quality scale had not been used before. The quality VAS was presented horizontally and ranged from dull to sharp (labeled in Dutch in this series but in English in the second series). We avoided labeling the extremes using terms that could be explicitly related to touch or nociception because we did not want to bias subjects toward reporting that the stimuli were tactile or nociceptive. Before use, the quality scale was preset in the middle because presetting the scale at one of the extremes might bias the reports toward either dull or sharp. The intensity VAS ranged from no sensation to strongest sensation imaginable and was preset at the bottom (no sensation). After each trial, the reports on the VAS scales were converted to numbers ranging from 0 to 10, corresponding to no sensation and strongest sensation imaginable for the intensity scale and dull and sharp for the quality scale. The subjects were not aware of the numeric values of their scores, since they were only presented with the scales and the anchors.

During the first series of experiments, the time between stimuli was fixed at 11 s. Subjects were told that they might not feel some of the stimuli and were instructed to do nothing following those. They were informed that leaving the scales at their preset values would be interpreted as the stimulus not having been perceived. In this case, the preset values were stored; during analysis, this combination of scores was used as an indicator of an undetected trial. The five stimulus classes (N1, N5, D1, D5, and sham) were each applied 30 times. The stimulus sequence consisted of 30 different blocks in which the order of the five stimuli was randomized. The same sequence was used for each subject in both experimental sessions.
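The following is a minimal sketch, in Python, of how the four stimulus classes described above could be parameterized. It is an illustration rather than the authors' control software: the class, function, and parameter names are hypothetical, and only the timing and amplitude rules stated in the text are encoded.

from dataclasses import dataclass

PULSE_WIDTH_MS = 0.21  # cathodic square-wave pulse width
IPI_MS = 5.0           # interpulse interval (gap between pulses)

@dataclass
class StimulusClass:
    name: str        # "N1", "N5", "D1", or "D5"
    electrode: str   # "needle" or "disc"
    n_pulses: int    # NoP: 1 or 5

    def amplitude_ma(self, threshold_ma: float, factor: float = 1.3) -> float:
        # Constant current at a fixed multiple of the per-session sensation
        # threshold (130% in Series 1), so the same fiber population is
        # recruited at every intensity level.
        return factor * threshold_ma

    def pulse_onsets_ms(self) -> list[float]:
        # Onsets spaced by pulse width plus gap; a five-pulse train then
        # spans about 21 ms, as stated in the text.
        return [i * (PULSE_WIDTH_MS + IPI_MS) for i in range(self.n_pulses)]

CLASSES = [
    StimulusClass("N1", "needle", 1),
    StimulusClass("N5", "needle", 5),
    StimulusClass("D1", "disc", 1),
    StimulusClass("D5", "disc", 5),
]

For example, StimulusClass("N5", "needle", 5).pulse_onsets_ms() returns five onsets ending at about 20.8 ms, so the last pulse finishes roughly 21 ms after train onset.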
Procedure and materials: second series of experiments

The second series of experiments was performed in 2011 at the Central Institute of Mental Health in Mannheim, Germany, and was approved by the Medical Ethics Committee II of the Medical Faculty Mannheim of Heidelberg University. The procedure for the second series of experiments was mostly the same as that for the first series; the aspects that differed between the two series are described below.

Subjects

For the second series of experiments, 21 subjects were recruited from the staff and students of the Central Institute of Mental Health, two of whom were excluded because they did not feel the disc electrode stimuli (for one subject, this was already the case during the first threshold determination, and the other subject stopped feeling the disc stimuli shortly after the start of the VAS procedure). The remaining 19 subjects were on average 31 years old (SD = 6, range 21-52 years), and six were male. All subjects gave written informed consent prior to the first experimental session. The average time between Sessions A and B was 2 days (range 1-6 days).

Stimuli

Stimulation with the needle and disc electrodes was performed at 120 % of the sensation thresholds. The sham condition was omitted in the second series of experiments, since no subjects reported on the VAS following the sham stimuli in Series 1.

Fig. 2. The VASs that were used for reporting the perceived quality and intensity following each stimulus. The black triangles represent the sliders that subjects manipulated to give their responses. For the first series of experiments, the texts were presented in Dutch; in the second series, they were presented in English. After reporting, the reports on the scales were converted to numbers ranging from 0 to 10. At the start of each trial, the sliders of the scales were preset at the bottom for intensity (no sensation, corresponding to an intensity score of 0) and in the middle for quality (neither dull nor sharp, corresponding to a quality score of 5).

Determination of sensation thresholds

For Series 2, the sensation threshold determination was automated. The method for Series 1 had required the researchers to ask the subjects questions, which took a lot of time and created the possibility of biasing subjects because of the way in which questions were asked. The sensation thresholds for the two electrode types were determined using a psychophysical threshold determination method consisting of multiple series of ascending stimuli. Subjects were instructed to press and hold a button; this initiated a trial consisting of a series of stimuli (one cathodic pulse with a pulse width of 0.21 ms) of ascending amplitude. The time between the stimuli was 1 s. The subjects were asked to release the button when they felt a sensation, which terminated the stimulus series. Following this, a logistic regression model was fitted to the series of detections (button releases) and misses (button not released). The sensation threshold was defined as the amplitude with a 50 % probability of detection (a sketch of this model-fitting step follows this subsection). The threshold of each electrode was determined over ten trials. For the first trial at each of the two thresholds, the starting value was 0 mA; the increments were 0.1 mA for the needle electrodes and 0.5 mA for the disc electrodes. For the remaining trials, the starting value was half of the estimated threshold, and the increment was one eighth of this threshold. During each experimental session, the disc electrode threshold was determined first.
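The logistic-regression step of the automated procedure can be sketched as follows. This is our illustration, not the authors' implementation; the function name is hypothetical, and the example responses are fabricated. Detections and misses pooled over the ascending trials are modeled as a function of amplitude, and the threshold is read off at the 50% point.

import numpy as np
import statsmodels.api as sm

def sensation_threshold_ma(amplitudes_ma, detected):
    # amplitudes_ma: stimulus amplitudes presented across the ascending
    # series (mA); detected: 1 where the subject released the button.
    X = sm.add_constant(np.asarray(amplitudes_ma, dtype=float))
    fit = sm.Logit(np.asarray(detected), X).fit(disp=False)
    b0, b1 = fit.params
    # P(detect) = 0.5 where b0 + b1 * amplitude = 0.
    return -b0 / b1

# Fabricated example: misses at low amplitudes, detections near the top
# of each ascending series.
amps = [0.1, 0.2, 0.3, 0.4, 0.5, 0.1, 0.2, 0.3, 0.4]
felt = [0, 0, 0, 1, 1, 0, 0, 1, 1]
print(sensation_threshold_ma(amps, felt))  # estimated threshold in mA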
VAS experimental procedure

For the second series of experiments, the sham stimulus class was omitted from the VAS procedure. The procedure thus consisted of four stimulus classes (N1, N5, D1, and D5), each of which was applied in 30 randomized blocks. The randomization for this series was performed for each subject (the sequence was the same for both Sessions A and B in the same subject). Each stimulus was preceded by a uniformly random waiting time of between 4 and 5 s. The subjects reported the sensations on the VASs after detecting each stimulus. After this, they clicked a "ready" button, which started the next stimulus cycle. When a subject failed to respond, the experimenter asked the subject to press the "ready" button without performing any reports. The new procedure allowed each subject as much time as needed to respond, without introducing an unnecessary waiting time.

Quality assessment procedure using labels

As a final part of the second series of experiments, each of the four stimuli that had been used during the VAS procedure was presented again. Each of the stimuli was repeated five times, after which the subject was asked to fill in a questionnaire based on the labels used by Mouraux et al. (2010) and Nahra and Plaghki (2003). In contrast to the VASs, which were presented with English labels, the questions were presented in German. For each stimulus class, subjects were asked whether they had detected any of the five stimulus presentations. If they had, they were asked to report the quality by assigning one or more of the following labels: Leichte Berührung (light touch), Berührung (touch), elektrischer Schock (shock), prickelnd (tingling), stechend (pricking), warm (warm), and brennend (burning).

Data analysis

The data of the first and second experimental series were analyzed together, resulting in a data set containing 29 subjects. To correct for skewness, the sensation thresholds and stimulus currents were log transformed (using the natural logarithm). The effects of electrode type, (experimental) session, and series (of experiments) were analyzed by fitting linear mixed models (LMMs) using the Mixed procedure in SPSS 16.0. LMMs have a number of advantages compared to a repeated measures ANOVA; in particular, they account for intersubject differences by including random effects (see West, Welch, & Galecki, 2007, for an introduction to this method). The factors Electrode Type and Session were modeled as repeated measures with a scaled identity covariance structure, and Series as a between-subjects factor. A random intercept for subjects was included in the model.

The VAS scores of undetected stimuli (5 for quality and 0 for intensity) were discarded. This included all sham stimuli in the first series of experiments, since none of those had been detected, and some of the other stimuli. The remaining scores were averaged by subject, electrode type, number of pulses (NoP), and session, resulting in four quality and four intensity scores for each session. The intensity scores were log transformed to correct for skewness. We assessed the effects of electrode type, NoP, session, and series by fitting an LMM using the Mixed procedure in SPSS 16.0. Electrode Type, NoP, and Session were modeled as repeated factors with a scaled identity covariance structure, and Series was modeled as a between-subjects factor. The model included a random intercept for subjects. Besides the four main effects, interaction effects were modeled for Series × Electrode Type, Series × NoP, Electrode Type × NoP, and Series × Electrode Type × NoP. Interaction effects were followed up by splitting the data over one of the interacting factors and fitting separate LMMs (a sketch of a comparable model specification in open-source software follows this section).
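The models themselves were fitted with the Mixed procedure in SPSS 16.0. As a rough open-source analogue, a random-intercept model with the same fixed effects can be specified with statsmodels, as sketched below; the column names are hypothetical, and the scaled-identity repeated-measures structure used in SPSS is only approximated by this simpler random-intercept specification.

import pandas as pd
import statsmodels.formula.api as smf

# df: one row per subject x electrode type x NoP x session, with columns
# "log_intensity", "electrode", "nop", "series", "session", and "subject"
# (hypothetical names; categorical columns may need C() coding).
def fit_intensity_lmm(df: pd.DataFrame):
    # electrode * nop * series expands to the three main effects plus the
    # Series x Electrode Type, Series x NoP, Electrode Type x NoP, and
    # three-way interactions named in the text; session enters as a main
    # effect, and the subject grouping gives the random intercept.
    model = smf.mixedlm(
        "log_intensity ~ electrode * nop * series + session",
        data=df,
        groups=df["subject"],
    )
    return model.fit()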
Reproducibility

Reproducibility of the sensation thresholds and of the quality and intensity VAS scores was assessed using intraclass correlation coefficients (ICCs) for each stimulus type. The appropriate ICC for the present study was ICC(1, k) (Shrout & Fleiss, 1979), which is calculated as follows:

ICC(1, k) = (BMS - WMS) / BMS

Here, BMS is the between-subjects mean squares, and WMS is the within-subjects mean squares. (A computational sketch of this coefficient is given in the Discussion, just before the reproducibility findings are recapped.) An ICC of 1 (all variance is accounted for by differences between subjects) is interpreted as perfect reproducibility. If there are equal amounts of between- and within-subjects variability, the ICC(1, k) will be 0, which is interpreted as poor reproducibility. There is no objective limit above which an ICC represents good reproducibility; we will use .75 as a rule of thumb (Portney & Watkins, 2009). ICCs of the session-averaged quality VAS scores as well as the session-averaged intensity VAS scores and thresholds were calculated, resulting in ten ICCs (two thresholds, four intensity scores, and four quality scores).

Because the sensation threshold determination method for the first series of experiments was not based on a documented method, we assessed the reproducibility of this method by calculating an ICC over the three trials that were used. Each experimental session was considered as independent. This resulted in 20 sets of three repeated threshold determination trials for the disc and needle electrodes. Since each trial consisted of a staircase procedure in which subjects gave multiple responses, we can again use the ICC(1, k). All calculations were performed in SPSS 16.0.

Sensation thresholds

The sensation thresholds varied significantly between electrode types [F(1, 85), p < 0.001], with the needle electrode sensation thresholds being 0.66 ± 0.37 (M ± SD) mA and the disc electrode thresholds being 2.82 ± 1.12 mA. The ICC of the threshold determination procedure of the first series of experiments, which was calculated over the three trials that were used for each threshold, was .91 (confidence interval .80-.96) for the needle electrodes and 1.00 (confidence interval .99-1.00) for the disc electrodes.

Quality and intensity scores

The quality and intensity scores for each subject, electrode type, NoP, and session are presented in Fig. 3, along with the mean scores of the whole subject population. Table 1 shows the results of the LMM analysis on these scores. Means and confidence intervals of the VAS scores of individual subjects are provided as supplementary materials.

For the quality scores, there was a significant effect of electrode type, with the needle electrodes scoring more toward the sharp end of the quality scale than the disc electrodes. In addition, we found a significant NoP effect, as well as a significant Electrode Type × NoP interaction. We followed up on these effects by assessing the effect of NoP separately for each of the two electrode types using LMMs. All of the effects except electrode type were modeled, but we only tested the effect of NoP for each of the two electrode types. The effect of NoP on reported quality was significant for the needle electrodes [F(1, 81) = 114, p < .001] but not for the disc electrodes [F(1, 81) = 1.08, p = .30]. For the needle electrodes, the reported quality was higher (sharper) for the N5 stimuli than for the N1 stimuli.

All subjects except one (for the disc electrodes of Session A) on average rated the five-pulse stimuli as being more intense than the one-pulse stimuli of the same electrode type. There was a significant effect of NoP on the reported intensity, with NoP = 5 stimuli being rated higher than the NoP = 1 stimuli.
In addition, there was a significant effect of electrode type and an Electrode Type × NoP interaction effect. We followed up on this finding by analyzing the effect of electrode type for NoP = 1 and NoP = 5 separately using LMMs. The effect of electrode type on reported intensity was significant for the NoP = 5 stimuli [F(1, 81) = 8.73, p = .004], but not for NoP = 1 [F(1, 81) < 1.0, p = .87]. The N5 stimuli were rated with a higher intensity than were the D5 stimuli, but there was no difference in this respect between the N1 and D1 stimuli. Finally, we followed up the Series × NoP interaction effect on intensity scores by analyzing the effect of NoP for each series with separate LMMs. In both series of experiments, reported intensity was significantly influenced by NoP [F(1, 63) = 92.0, p < .001, for Series 1, and F(1, 126) = 340, p < .001, for Series 2], but the increase in intensity score between NoP = 1 and NoP = 5 was higher for Series 2 than for Series 1 (an increase of 1.01 for Series 1 and 1.89 for Series 2).

Labels

The results of the labeling procedure of the second series of experiments are presented in Figs. 4A-4D. Nineteen subjects participated twice (Sessions A and B) in this part of the experiment, resulting in a total of 38 sessions in which the labeling procedure was performed. In all of these sessions, the N5 and D5 stimuli were detected. The N1 and D1 stimuli were missed in some cases, because the subjects sometimes had stopped feeling these stimuli in the course of the preceding VAS experiment. In order to determine in how many sessions labels that represented tactile or nociceptive sensations were assigned, we aggregated the label pairs light touch/touch, tingling/pricking, and warm/burning. We counted the number of times that either of the two labels of one category was reported. These aggregated scores showed that the majority of subjects reported the needle electrode stimuli as tingling/pricking and the disc electrode stimuli as light touch/touch. The number of assignments of these scores increased with increasing NoP. The warm/burning category was rarely reported. The shock label was reported for all stimulus classes in a small number of sessions.

Reproducibility

The ICCs for the thresholds and the quality and intensity scores are listed in Table 2. All ICCs except that for the disc electrode sensation threshold were .75 or higher. The VAS scores for each electrode type had higher ICCs than the respective sensation thresholds. Although most of the ICCs had lower confidence boundaries beneath .75, the consistently high ICC estimates suggest good reproducibility overall. Separate ICCs for both series of experiments are provided in the supplementary materials.

Discussion

We collected quality and intensity VAS scores for stimuli applied with our compound electrode array. Needle and disc electrode stimuli with two intensity levels were delivered. Subjects participated in two experimental sessions, which enabled analysis of the reproducibility of the outcome measures. The reports on the quality VAS showed that stimuli applied through the needle and disc electrodes elicited clearly distinguishable dull and sharp sensations. A larger number of pulses in the stimuli was demonstrated to increase the reported intensity of the stimuli without any detrimental effects on the quality scores.
The ICCs of the quality and intensity scores indicated good reproducibility of the perceived stimulus qualities and intensities induced by our stimulation method.

Our subjects reported the perceived stimulus quality on a continuous scale ranging from dull to sharp. They generally reported disc electrode stimuli to be on the dull half of the scale and needle electrode stimuli to be on the sharp half. These scores by themselves do not provide evidence that the disc electrode stimuli led to tactile sensations and the needle electrode stimuli to nociceptive sensations. However, the assignments of the qualitative labels (predominantly tingling and pricking to the needle electrode stimuli and light touch and touch to the disc electrode stimuli) strongly suggest that subjects associated sensations at the dull anchor point of our quality VAS with a tactile quality and the sharp anchor with nociception.

The present study is the first in which the effect of pulse train modulation (PT) is explored for preferential electrical stimulation of nociceptive and tactile afferents. Although van der Heide et al. (2009) studied PT in detail, the stimulation electrode that they used recruited a mixed population of afferents. Our results suggest that PT is capable of modulating the perceived intensity of both nociceptive and tactile stimuli. We did not study the effect of NoP over the range that van der Heide et al. had used, and therefore we do not know whether the saturation in intensity for NoP > 7 that they found exists for both nerve fiber populations. In the present study, PT influenced the intensity scores of the needle electrode stimuli more strongly than those of the disc electrode stimuli. This may be attributed to differences in the way action potential frequencies code for stimulus strength in tactile and fast nociceptive afferents.

Since the dull and sharp qualities were presented on the same VAS, subjects were not given the opportunity to report a quality containing both a dull and a sharp component. The small number of nociceptive labels assigned to the disc electrode stimuli (and, vice versa, of tactile labels assigned to the needles) suggests that this situation was rare in our study. Recording the dullness and sharpness of stimuli using a VAS without the attributes being mutually exclusive would be possible if separate scales were used for dullness and sharpness. This procedure could be extended to include any number of qualitative attributes. This would combine the advantage of using a continuous quality scale, which records small shifts in perceived quality, with the advantage of a labeling procedure, which gives the possibility of recording multiple qualitative aspects of the same stimulus. Before designing a method like this, it would be useful to gather more knowledge on the parameter space of the quality of cutaneous perception, for instance by using a multidimensional scaling procedure.
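Before recapping the reproducibility findings, here is a minimal computational sketch of the ICC(1, k) formula from the Data analysis section, applied to an n-subjects by k-sessions matrix of scores; the function name and example values are ours, not the authors'.

import numpy as np

def icc_1k(scores: np.ndarray) -> float:
    # scores: n_subjects x k_sessions array; ICC(1, k) = (BMS - WMS) / BMS.
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    # Between-subjects mean squares.
    bms = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    # Within-subjects mean squares.
    wms = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (bms - wms) / bms

# Fabricated example: session-averaged scores for five subjects in
# Sessions A and B; consistent between-subject differences give an ICC near 1.
scores = np.array([[7.1, 6.8], [2.0, 2.4], [5.5, 5.9], [8.2, 8.0], [3.1, 3.3]])
print(round(icc_1k(scores), 2))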
Because we wanted to determine the reproducibility of the reported qualities and sensations of the stimuli, each subject participated in two experimental sessions, and ICCs were calculated. The VAS-score ICCs demonstrate that the stimuli delivered through the electrode array led to highly reproducible sensations. The ICCs of the VAS scores were all higher than the ICCs of the sensation thresholds. This suggests that the quality and intensity of the sensations are quite robust to small changes in the stimulus currents. Although the stimulus currents used in the two series of experiments were significantly different, nine out of the ten ICCs for the pooled data lie between the ICCs calculated separately for the two series of experiments (see the supplementary materials). This indicates that pooling the data did not lead to inflated ICCs through increased intersubject variability caused by differences in the experimental procedures.

Although only simple monophasic stimuli were generated in this study, the multichannel stimulators that were used would allow for the generation of more complicated stimulus patterns. Stimulators with any number of channels can be built and used with multiple compound electrode arrays. This system can be used for a range of experimental paradigms. First of all, the tactile and nociceptive content in a stimulus can be varied in a controlled manner by applying pulse trains containing a mixture of needle and disc electrode pulses, the proportion of which can be varied. Secondly, when multiple electrode arrays are used, a comparison of the spatial perception of touch and nociception can be made in a single experiment. Mancini, Longo, Iannetti, and Haggard (2010) performed a within-subjects comparison of the reported locations of touch and fast and slow nociception. Because of the stimulus methods employed (mechanical and laser), each modality had to be applied in a separate experiment. The use of compound electrode arrays in combination with multichannel stimulators allows for comparisons within a single experiment in which the stimuli of the two modalities can be randomized. A third application would be the study of spatiotemporal, multimodal stimulus patterns. Any real-life stimulus involves multiple modalities over a length of time, but it is poorly understood how these aspects are integrated into a single percept. Studying spatiotemporal sensory phenomena may provide important insights on this topic, for instance through studying the saltation effect (Trojan et al., 2006).

Although our results show that the stimuli delivered through our compound electrode array correspond well to tactile and nociceptive sensations, we do not have proof that the two electrode geometries activate tactile and nociceptive afferents selectively. Our needle electrodes were similar to the one used by Mouraux et al. (2010), which was demonstrated to be selective for stimulus currents comparable to ours in magnitude. For the disc electrodes, we do not have this kind of information, and we therefore have to take into account the possibility that they may activate some nociceptive afferents besides the intended tactile afferents.

Our compound electrode array offers the possibility of studying touch and nociception arising from the same site. However, this is only a small fraction of the cutaneous sensory modalities in existence. Some of these modalities are not stimulated by our method at all; this includes all modalities whose information is transmitted through C-fiber afferents, which are activated by stimulus currents higher than those required for the myelinated cutaneous afferents (Malmivuo & Plonsey, 1995). Furthermore, our activation of tactile fibers does not discriminate between afferents connected to different types of receptors.

We conclude that the use of disc surface electrodes and needle electrodes in combination with our multichannel stimulators is capable of eliciting two distinguishable sensations at the same skin site.
These sensations correspond to tactile and nociceptive modalities, and their perceived quality is reproducible. The perceived strength of the stimuli can be varied without detrimental effects on the perceived qualities. Ours is therefore a promising method for studying the interaction between touch and nociception arising from the same skin site, for instance by studying the spatial perception of cutaneous stimuli and sensation thresholds. This may give rise to new insights about the ways in which the various cutaneous sensory modalities interact.
Calcified pulmonary consolidations in pulmonary alveolar microlithiasis: Uncommon computed tomographic appearance of a rare disease

Sir,

Pulmonary alveolar microlithiasis (PAM) is a rare disease of unknown etiology that affects young adults and is characterized by the intra-alveolar accumulation of microliths consisting of calcium phosphate. [1,2] Although the etiology remains unclear, PAM is considered an autosomal recessive disease caused by mutations of the SLC34A2 gene, which encodes a sodium-dependent phosphate cotransporter. [3] Familial occurrence has been described. [4] The disease is characterized by a clinical-radiological dissociation. It is usually slowly progressive and may worsen over time with the development of respiratory failure and cor pulmonale.

The classical radiographic picture is one of numerous symmetrical micronodular shadows obliterating the pulmonary vasculature, heart and diaphragm, described as a "sand-storm" appearance, along with a thin area of linear hyperlucency along the ribs, the "black pleura" sign. This radiological picture is considered virtually diagnostic, precluding the need for a biopsy. [5-9] Calcification of the pleura has been described in addition to the characteristic parenchymal involvement. [10] Dense confluent calcifications causing consolidation of the lungs in the lower lobes are, however, unusual. We recently observed this uncommon radiological manifestation of PAM on a high-resolution computed tomographic (HRCT) scan in a 27-year-old nonsmoker female patient.

The patient presented with complaints of exertional shortness of breath and dry cough of 1-year duration. There was no history of fever, chest pain, hemoptysis or weight loss. Examination revealed clubbing and bilateral basal end-inspiratory crackles. Blood counts, serum biochemistry and urine analysis were within normal limits. Chest radiograph (posteroanterior view [Figure 1]) revealed bilateral symmetrical reticulonodular shadows with relative sparing of the upper zones and without any obvious loss of lung volume. The nodular shadows were confluent in places, with a density suggesting calcification, and obliterated the cardiac borders and diaphragm ("sand-storm" appearance). Contrast-enhanced HRCT of the thorax showed bilateral upper lobe ground glass attenuation with diffuse calcification along the interlobular and intralobular septae and in the mediastinal pleura. The pulmonary artery was enlarged [Figure 2]. The lower lobes showed symmetrical, dense confluent calcified opacities, more so in the posterior regions, causing complete consolidation. A thin area of linear hyperlucency along the ribs, the "black pleura" sign, was visible [Figure 3]. Serology for HIV was nonreactive. Mantoux test was negative. Pulmonary function tests revealed severe restrictive lung disease with a forced vital capacity (FVC) of 1.4 L (43% predicted) and a forced expiratory volume in 1 s (FEV1) of 1.3 L (46% predicted). The FEV1/FVC ratio was 93%. The total lung capacity was 2.25 L (51% predicted). The diffusing capacity of the lung for carbon monoxide was 29%. The patient desaturated on a 6-minute walk test, with pulse oximetry showing a fall in arterial oxygen saturation from 94% at rest to 86% on walking; the 6-minute walk distance was 60 m. Serum angiotensin-converting enzyme was elevated (99 U/ml). Fiberoptic bronchoscopic examination revealed a normal-appearing tracheobronchial tree. Bronchoalveolar lavage (BAL) fluid was negative for acid-fast bacilli and other pathogens, with unremarkable cytology. A transbronchial lung biopsy from the right lower lobe was noncontributory due to inadequate tissue. Two-dimensional and color Doppler echocardiography showed normal wall motion, chamber sizes and cardiac valves, a left ventricular ejection fraction of 61% and an estimated high mean pulmonary artery pressure of 39 mm Hg.

The radiological appearance narrowed the differential diagnosis down to PAM and metastatic pulmonary calcification. Calcium metabolism indices were investigated and showed normal serum calcium and phosphorus levels. Serum vitamin D 25-hydroxy levels were reduced (17 ng/ml) and the parathormone level was elevated (121.70 pg/ml). The absence of any obvious etiology, the absence of hypercalcemia, and a radiological picture on plain chest radiograph and HRCT scan that was pathognomonic for PAM ruled out metastatic calcification in the present case. The hyperparathyroidism observed in our case was likely secondary to hypovitaminosis D. A diagnosis of PAM was thus established on clinical and radiological grounds.

PAM is a rare autosomal recessive pulmonary disease characterized by the intra-alveolar accumulation of microliths consisting of calcium phosphate. [1] It is found worldwide, though it predominates in Italy, Turkey and the USA. [7] The hallmark of this disease is a clinical-radiological dissociation, with few symptoms at least in the initial stages. The disease may be discovered at any age from childhood to middle age and is most often diagnosed incidentally during radiography of the chest for other reasons. [8] In symptomatic patients, the usual presentation is with exertional dyspnea, nonproductive cough and chest pain. The disease is usually slowly progressive. The chest radiographic findings in PAM are a bilateral diffuse, sand-like micronodular infiltration, particularly in the middle and lower lung zones, that may be confluent, leaving a thin area of linear hyperlucency along the ribs or mediastinum caused by small, thin-walled subpleural cysts, the "black pleura" sign. [1] Described as a "sand-storm" appearance obliterating the heart and the diaphragms, this picture is highly characteristic and, together with the clinical picture, is sufficient for a diagnosis. Common HRCT findings are ground-glass opacities, subpleural linear calcifications, subpleural cysts, parenchymal nodules and calcification along the interlobular septa. [8] Dense and confluent calcification forming consolidations, as seen in the present case, is uncommon. [9] As calcium metabolism is normal in PAM, serum calcium, phosphate and parathyroid hormone remain within the normal range. [11]

The differential diagnosis of PAM includes metastatic pulmonary calcification related to chronically elevated calcium levels, which may occur in chronic renal failure, primary hyperparathyroidism, hypervitaminosis D and milk-alkali syndrome. [12] Metastatic pulmonary calcification usually occurs in normal pulmonary parenchyma and is secondary to abnormal calcium metabolism without any prior soft tissue damage. Calcium deposits can be found in the interstitium of the alveolar septae, in bronchiolar walls, in the large airways and in the walls of the pulmonary vessels. [13] The most common imaging features are poorly marginated nodular opacities in the upper lobes of the lungs. Ground-glass opacities, subpleural linear calcifications and calcification along the interlobular septa, which are the most frequent HRCT findings in PAM, [8] are not seen in metastatic calcification. The absence of a known cause, normal calcium levels and the radiological appearance ruled out metastatic calcification in the present case.

The radiographic appearance of a sand-storm with a "black pleura" sign is pathognomonic of PAM. Fibreoptic bronchoscopy with BAL showing microliths, or histological examination of a lung biopsy, is only occasionally required to confirm the diagnosis. [7] In the present case, fibreoptic bronchoscopy was performed but was inconclusive; the diagnosis of PAM was nevertheless confirmed by the characteristic radiological findings. The dense and confluent calcifications forming consolidations seen in the present case are distinctly uncommon and add to the known radiological manifestations of PAM. Currently, the only effective therapy is lung transplantation. Systemic glucocorticoids, whole lung BAL and disodium etidronate have been tried but are not effective in preventing the progression of PAM. [1] Long-term oxygen therapy is necessary for patients with hypoxemia and chronic respiratory failure.

Figure 1: Chest radiograph (posteroanterior view) showing bilateral symmetrical reticulonodular shadows with relative sparing of the upper zones and without any obvious loss of lung volume. The nodular shadows were confluent in places, with a density suggesting calcification, obliterating the cardiac borders and diaphragm ("sand-storm" appearance). A thin area of linear hyperlucency along the ribs, the "black pleura" sign, is seen on the left side (arrow).

Figure 2: Contrast-enhanced HRCT of the thorax showing the presence of bilateral upper lobe ground glass attenuation with diffuse calcification along the interlobular and intralobular septae, and in the mediastinal pleura (arrow). The pulmonary artery is enlarged. The "black pleura" sign is seen on the right side (arrow).

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
Viscoelasticity in Foot-Ground Interaction

Mechanical properties of the plantar soft tissue, which acts as the interface between the skeleton and the ground, play an important role in distributing the force underneath the foot and in influencing the load transfer to the entire body during weight-bearing activities. Hence, understanding the mechanical behaviour of the plantar soft tissue and the mathematical equations that govern such behaviour can have important applications in investigating the effect of disease and injuries on soft tissue function. The plantar soft tissue of the foot shows a viscoelastic behaviour, where the reaction force depends not only on the amount of deformation but also on the deformation rate. This chapter provides an insight into the mechanical behaviour of plantar soft tissue during loading, with specific emphasis on the heel pad, which is the first point of contact during normal gait. Furthermore, the methods of assessing the mechanical behaviour, including in vitro/in situ and in vivo approaches, are discussed, and examples of creep, stress relaxation, rate dependency and hysteresis behaviour of the heel pad are shown. In addition, the viscoelastic models that represent the mechanical behaviour of the plantar soft tissue under load, along with the equations that govern this behaviour, are elaborated and discussed.

Introduction

The human heel pad is usually the first part of the foot that contacts the ground during normal gait. The soft tissue structure, which is located underneath the calcaneus (heel bone), consists of a fat pad and skin. While this fat pad, also known as corpus adiposum, works as a shock absorber and dampens the ground reaction forces during weight-bearing activities like standing and walking, the skin has another important role: to prevent tearing and to work as an impermeable barrier protecting the underlying soft tissue [1]. Reported heel pad thickness varies from 12.5 to 24.5 mm in different studies [2-8] using different imaging modalities, including ultrasound [2-4], magnetic resonance imaging (MRI) [5] and radiography [6,7]. The plantar soft tissue structure is designed to bear large loads. A similar type of adipose tissue is found in other parts of the body that normally need to bear compressive and shear stress, such as the fingertips. The structure of the plantar adipocytes consists of a dense network of septa, which prevents free movement of the fat cells while allowing lateral movement [9]. The unique structure of the plantar soft tissue enables it to bear large strain in reaction to the ground force. In each heel strike, a vertical load roughly equal to 110% of the body weight is applied to the heel, whereas 25% of the body weight is applied in the anteroposterior and 10% in the mediolateral direction [10]. Normalising the loading over the contact area results in an average pressure of around 100-400 kPa for healthy individuals, depending on the site of the foot [11]. These plantar pressure values can increase as a result of an increase in the stiffness of the plantar soft tissue, e.g. due to diabetes, which results in a decrease in the contact area [11]. Furthermore, understanding the effect of different pathologies such as diabetes on the mechanical properties of human plantar soft tissue is paramount. While these pathological conditions may not affect the structure (geometry) of the plantar soft tissue, they would affect its mechanical properties [12].
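As a quick worked example of the loading figures quoted above, the following sketch estimates the heel-strike load and the resulting average pressure; the body mass and heel contact area are assumed placeholder values, while the percentages are those cited in the text:

```python
# Worked example of the heel-strike loading figures (assumed subject values).
g = 9.81                 # gravitational acceleration, m/s^2
body_mass = 70.0         # kg, assumed subject mass
contact_area = 23e-4     # m^2 (23 cm^2), assumed heel contact area

vertical_load = 1.10 * body_mass * g       # ~110% of body weight at heel strike
anteroposterior = 0.25 * body_mass * g     # ~25% of body weight
mediolateral = 0.10 * body_mass * g        # ~10% of body weight

pressure = vertical_load / contact_area    # Pa
print(f"Vertical heel load: {vertical_load:.0f} N")
print(f"Average heel pressure: {pressure / 1000:.0f} kPa")  # within the cited 100-400 kPa range
```

With these assumed values the vertical load is about 755 N and the average pressure about 330 kPa, consistent with the range cited above.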
The knowledge of these mechanical properties, which determine the behaviour of the tissue under load, can be utilised for the diagnosis of foot pathologies as well as for treatment interventions such as foot orthoses and footwear. In order to understand the mechanical behaviour of the plantar soft tissue, it is necessary to have an overview of the basic mechanical definitions.

Principles of mechanics of materials

Stress is a quantity which expresses the force that neighbouring particles apply on each other. Stress can also be defined as the amount of force per unit of cross-section area acting to compress or extend the material in the normal direction:

σ = F / A (1)

where σ represents stress, F is the load and A is the cross-section area. The unit of stress is N m⁻² (Pa).

Strain is the change in the length of the object in the axial direction, which is normal to the surface of the applied load. Strain can be defined as the change in the length of the object over the original length as a result of the applied load:

ε = dL / L (2)

where ε is strain, dL is the change in length and L is the original length (Figure 1); hence, strain is dimensionless.

Shear stress is the response of a material to a force applied parallel to a specific surface. This force makes the geometry of the structure deform but not stretch/compress:

τ = F / A (3)

where τ is the shear stress, F is the applied force parallel to the surface and A is the cross-section area (Figure 2).

The gradient of the force-deformation graph describes the stiffness of the material; it quantifies the rigidity of the material and is expressed in N m⁻¹:

Stiffness = dF / dL (4)

where dF and dL are changes in load and displacement, respectively.

Figure 2: Normal deformation as a result of tensile and compressive forces applied to cylindrical specimens, along with the shear deformation as a result of a shear force applied to a cubic specimen.

Principles of elastic solid materials

Elasticity is the ability of a material to resist force and return to its original shape when the force is removed. Elastic solid materials are divided into two main groups: Hookean and non-Hookean [13]. In a Hookean material, where stress increases linearly with strain, the slope of the stress-strain graph is defined as Young's modulus or modulus of elasticity (E), expressed in Pa:

E = σ / ε (5)

where σ is stress and ε is strain. In a non-Hookean material, however, stress is not linearly proportional to strain; the relationship between stress and strain changes during different stages of loading.

Principles of viscous fluid materials

A fluid is defined as a material that undergoes continuous shear deformation; fluids comprise liquids and gases [14]. Liquids consist of atoms with interatomic connections and molecules with weak intermolecular connections. A shear force can break the weak intermolecular bonds and allow the material to flow [15]. In continuum mechanics, a Newtonian fluid is a fluid in which the arising viscous stresses are linearly proportional to the local strain rate [16]. In other words, the shear stress is proportional to the rate of change of the fluid's velocity vector:

τ = μ (dv/dy) (6)

where τ represents the shear stress, μ represents viscosity and dv/dy represents the velocity gradient in the direction perpendicular to the velocity. The simplest mathematical models that take viscosity into account can be applied to Newtonian fluids.
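A minimal sketch of the basic quantities defined in this section (Eqs. 1, 2, 4 and 6), using arbitrary illustrative values rather than any of the chapter's data:

```python
def stress(force, area):            # Eq. (1): sigma = F / A, in Pa
    return force / area

def strain(dL, L0):                 # Eq. (2): epsilon = dL / L (dimensionless)
    return dL / L0

def stiffness(dF, dL):              # Eq. (4): dF / dL, in N/m
    return dF / dL

def newtonian_shear(mu, dv_dy):     # Eq. (6): tau = mu * dv/dy, in Pa
    return mu * dv_dy

# Arbitrary example values
print(stress(100.0, 1e-4))          # 100 N over 1 cm^2 -> 1e6 Pa (1 MPa)
print(strain(0.002, 0.020))         # 2 mm compression of a 20 mm sample -> 0.1
print(stiffness(10.0, 0.001))       # 10 N per mm -> 1e4 N/m
print(newtonian_shear(1e-3, 50.0))  # water-like viscosity, shear stress in Pa
```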
Although no real fluid fits this definition exactly, many common liquids and gases, such as water and air, can be assumed to be Newtonian under ordinary conditions. A non-Newtonian fluid, on the other hand, is one whose properties differ from those of a Newtonian fluid: its viscosity, the measure of a fluid's ability to resist gradual deformation by shear or tensile stresses, depends on the shear rate or on the shear rate history [16].

Viscoelastic behaviour

A viscoelastic material combines properties of an elastic solid with those of a viscous fluid. The elastic solid can be Hookean or non-Hookean and the viscous fluid can be Newtonian or non-Newtonian [17]. There is a variety of behaviour within different viscoelastic materials, ranging from completely elastic solid behaviour to completely viscous fluid behaviour [17]. A viscoelastic material shows both viscous and elastic characteristics when exposed to loading. Soft tissue exhibits viscoelastic behaviour under compression, which means that its force-deformation behaviour depends on both the amount of deformation and the deformation rate [18]. Viscoelastic materials behave in different ways under various types of loading, exhibiting deformation/force rate dependency, creep, stress relaxation and hysteresis [18]. For example, a viscoelastic material under cyclic loading behaves differently during loading and unloading: the stiffness of the material decreases during unloading compared to the stiffness the material shows during loading. The area between the loading and unloading force-deformation curves is called hysteresis and represents dissipated energy [18] (Figure 3).

Figure 3: The load-deformation graph of a heel pad tested using the ultrasound indentation technique [19]. The different colours represent data gathered at different deformation rates, as presented in the legend. The area enclosed between the loading and unloading curves represents hysteresis.

The force-deformation behaviour of the plantar soft tissue, like that of other biological materials, is influenced by the loading velocity [18] (Figure 3). Stress relaxation means that when a viscoelastic material is deformed suddenly and the deformation is kept constant for a specific time, the force decreases with time [18] (Figure 4). Creep means that when a specific load is suddenly applied to a viscoelastic material and kept constant for a specific time, the deformation increases over time [18] (Figure 5).

Mechanics of the heel pad

Because of the liquid content of the heel pad tissue, along with the arrangement of solid components which regulates the fluid movement, the heel pad is mostly assumed to behave as a nearly incompressible material [20].

The structure

The fat pad consists of two layers: microchambers and macrochambers. Microchambers form the plantar layer of the heel pad, which protects the fat pad from excessive bulging during loading [12]. The deeper layer is composed of a sparser, fibro-adipose structure called macrochambers [21]. The microchamber layer is the thinner of the two and contains mainly elastic fibres, whereas the thicker macrochamber layer contains roughly equal amounts of elastic fibres and collagen [22]. Therefore, a different behaviour between the two layers is expected [12], which is discussed under the mechanical behaviour section (Figure 6).
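The hysteresis introduced above can be quantified directly from a measured loading-unloading curve. A minimal sketch, using synthetic force-deformation data in place of real indentation measurements:

```python
import numpy as np

def hysteresis(load_force, load_def, unload_force, unload_def):
    """Energy dissipated in one loading-unloading cycle (area of the
    hysteresis loop), computed by trapezoidal integration of the two
    force-deformation branches. Forces in N, deformations in m -> energy in J."""
    e_in = np.trapz(load_force, load_def)        # energy input during loading
    e_out = np.trapz(unload_force, unload_def)   # energy returned during unloading
    return e_in - e_out, 1.0 - e_out / e_in      # dissipated energy, loss ratio

# Synthetic curves with a stiffer loading branch (illustrative values only)
d = np.linspace(0.0, 5e-3, 50)                   # 0-5 mm deformation
f_load = 2e4 * d + 4e6 * d**2                    # nonlinear loading branch
f_unload = 1.5e4 * d + 3.5e6 * d**2              # softer unloading branch
e_diss, loss = hysteresis(f_load, d, f_unload, d)
print(f"dissipated energy: {e_diss * 1000:.1f} mJ, loss ratio: {loss:.2f}")
```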
Heel pad atrophy is a clinical condition usually linked with diabetes, collagen disorders and peripheral neuropathy [22]. Histological studies have revealed that the adipocytes in the subcutaneous layers closer to the foot surface of the fat pad are 25% smaller in mean cell area and 10% smaller in mean maximum diameter in an atrophic heel compared to a normal heel [22]. Additionally, the adipocytes in the deep subcutaneous layers are 45% smaller in mean cell area and 25% smaller in mean maximum diameter in an atrophic heel compared to a normal heel [22]. Septa in both the deep and superficial subcutaneous layers of the atrophic heel are 25% thicker than in a normal heel and include a higher percentage of elastic tissue, which appears uneven in some cases [23]. Furthermore, in atrophic feet the collagen septa are found to be thicker and the adipose cells smaller than in healthy feet [22,24]. The amount of internal stress and strain depends on the structure (geometry) of the heel, on the material properties of the tissue, and on the magnitude and direction of the force. Hence, the above-mentioned changes in the structure of the heel pad can increase the internal stress and strain that are claimed to be the main causes of tissue injuries [25-27]. In addition, the interface between the soft tissue structure and the underlying bony prominence can be an area of high stress concentration, and it is claimed that tissue damage starts in deep tissue close to the bony prominences and then develops up to the skin surface [25,26,28,29].

The function

The foot helps stabilise the whole body during standing and is the interface between the body and the ground during walking. The heel pad, as the first part of the foot that contacts the ground during locomotion, acts as a shock absorber and shock reducer to protect the foot from local stress [2]. The strain and pressure applied to the heel during gait can be withstood by the honeycomb structure of the heel pad. The heel pad as a structure shows viscoelastic behaviour, provides cushioning during heel strike and absorbs shocks by dissipating energy [2]. This mechanical energy dissipates as heat just after heel strike and decreases the possibility of mechanical trauma to the foot [2]. The heel pad can absorb the impact shock during heel strike by deforming under load and by distributing the force over a wider area of the skin to prevent stress concentration [30]. The chambered structure helps to spread the compressive force over the whole plantar surface of the bone and prevents injuries to the calcaneus during heel strike [2]. Under compression, the heel pad initially expands easily as a result of its low stiffness, but subsequently the tension on the collagen fibres of the fat pad and skin limits the movement of the tissue and gradually increases the stiffness of the heel pad in the loading direction [31]. This explains the nonlinear force-deformation behaviour of the heel pad and causes the strain stiffening inherent in the heel pad's mechanical behaviour. Additionally, it has been reported that the mechanical behaviour of microchambers and macrochambers differs under compression [12]: the microchamber layer experiences less strain than the macrochamber layer [12]. Hsu et al. [12] indicated that the stiffness of the microchambers is 10 times greater than that of the macrochambers, and concluded that the observed difference plays an important role in the heel pad's mechanical behaviour.
It appears that the macrochamber layer is responsible for the large deformation and cushioning behaviour of the heel pad during gait, whereas the microchamber layer prevents the heel pad from bulging excessively [12]. The main role of the heel pad is to decrease the impact shock during heel strike and to distribute pressure during foot-ground contact by undergoing deformation. The deformability of the heel pad may be reduced by tearing of the fibrous septa or by atrophy of the heel pad due to trauma or ageing [32,33]. After severe injuries such as tearing or breaking of the honeycomb structure, the heel pad does not have the ability to remodel itself [32,33]. As the fat pad is a semi-liquid structure with the hydrostatic properties of fluids [33], a decrease in the water content of the heel, together with a decrease in elastic fibrous tissue and a loss of collagen, is the main reason for the gradual weakening of the tissue with ageing [32]. Overall, it can be concluded that the loss of soft tissue substance due to ageing, atrophy or previous injury prevents the tissue from responding to load in an optimal way [33].

Mechanical behaviour of the heel pad

Plantar soft tissue, like other biological tissues, has a nonlinear elastic behaviour. Initially, under small loads, the tissue deforms easily (low stiffness); as the load increases, the stiffness increases gradually [34]. The heel pad expands easily as a result of low stiffness, but afterwards the tension on the collagen fibres of the fat pad and skin limits the movement of the tissue and increases the stiffness of the heel pad in the loading direction [31]. Pathological changes in the foot may not be detectable from the structure of the foot but normally correlate with alterations in the mechanical behaviour of the tissue [12]. Therefore, quantifying the mechanical properties of the plantar soft tissue is important to assess the risk of mechanical trauma to the foot.

Methods of assessing the mechanical behaviour of the heel pad

Several methods have been used to extract the mechanical behaviour of the heel pad; they can be divided into three main groups: in vitro, in situ and in vivo.

In vitro tests

In some studies, the heel pad behaviour was characterised using an in vitro or in situ method to quantify the material properties of the plantar soft tissue [32,35-40]. Miller-Young et al. [35] performed a series of tests on cylindrical samples of plantar soft tissue extracted from cadaveric feet. Three series of tests were performed on the samples: quasi-static tests to obtain the hyperelastic mechanical properties of the soft tissue; stress-relaxation tests to calculate the viscoelastic time constants; and dynamic compression tests to extract the viscoelastic relaxation material coefficients. The reported results clearly showed the time dependency and viscoelastic behaviour of the heel pad [35]. Other studies [39,41] used 2 × 2 cm samples from different sites of the plantar soft tissue of cadaveric feet to calculate the elastic and viscoelastic coefficients of a mathematical model of the plantar fat pad, in order to compare the properties of the plantar soft tissue between healthy and diabetic feet. The results showed that stiffness and energy dissipation increase with loading frequency [39,41]. This is in complete contrast with Bennett and Ker's results, which indicated no changes in energy dissipation and stiffness at different testing frequencies [36].
The results also showed that frequency dependency is higher in younger subjects, which was attributed to the difference in the heel pad's water content in younger versus older tissue specimens [32,37]. The water content of the soft tissue can be considered the main reason for its viscoelastic behaviour. Therefore, a decrease in the water content of the soft tissue in older subjects can lead to a decrease in viscoelastic characteristics such as loading frequency dependency. While there was no significant difference in the values of the viscoelastic coefficients between diabetic and non-diabetic feet, it was claimed that changes in the plantar soft tissue occur mainly at the structural level and are not reflected effectively at the material level [38,39].

In situ tests

Aerts et al. [40] compared the results of in vitro tests with in situ tests in which the heel bone was fixed to a wall and a pendulum was used to impact the heel region. In this study, differences between the test results in the two conditions were observed and were attributed to differences in soft tissue behaviour at the structural and material levels [40]. In another study, Bennett and Ker [36] compared the results of in vitro versus in situ tests. The researchers performed two series of tests: one group of specimens was tested while attached to the calcaneus and the surrounding tissues, while the other group was tested after complete removal from the calcaneus [36]. The specimens were tested using a dynamic loading machine and load-displacement data were recorded during the test. The energy dissipation ratios and stiffness were higher in isolated heel pads compared to heel pads attached to the calcaneus [36]. It was therefore concluded that the results of the in vitro tests reflect the properties of the isolated heel pad material, while the in situ test results represent the behaviour of the heel pad structure [40]. The fact that the mechanical properties of the heel pad extracted from in vitro and in situ tests differ can be explained by the indication that structural factors, such as the heel pad skin and the geometry of the calcaneus, influence heel pad behaviour. Although in vitro testing can provide more repeatable data by eliminating the geometrical complexities of the plantar soft tissue, it cannot provide a realistic assessment of the mechanical behaviour of the plantar soft tissue during weight-bearing activities [38,39]. Furthermore, in situ tests cannot be used to assess the mechanical behaviour of the heel pad in different individuals.

In vivo assessments

In a number of studies, the human heel pad was characterised using force-deformation data extracted from in vivo experiments. Investigating the mechanical behaviour of the heel pad during walking can enhance our knowledge of the heel pad in realistic loading conditions. In one study, radiographic fluoroscopy was utilised to measure the thickness of the heel pad during walking [42], while the plantar pressure was measured using an optical display method. The results showed that the maximum strain of the heel pad during walking was 40% and the absorbed energy ratio was estimated as 17.8% (SD 0.8) for different velocities (0.5-0.9 m/s) [43], which is considerably lower than the 35% ratio reported in in vitro studies of the heel pad [40,41].
While the radiographic method allows measuring the deformation of the heel pad during actual gait, the limited control over the direction of loading affects the results and may lead to variation in the observations. A widely used alternative for in vivo testing is the ultrasound indentation device. The device commonly consists of a linear array ultrasound probe, a load cell, a motor and a mechanical body, which comprises a foot plate perpendicular to the axis of loading (the same axis as the probe head). The ultrasound probe measures the deformation of the soft tissue. The force applied to compress/indent the foot is measured by a load cell, which can be mounted at the back of the probe to record the axial force. The mechanical part should be mounted in such a way that the probe can be adjusted to different foot sizes, and the motor generates a uniform movement of the probe. A number of studies which conducted experiments with an ultrasound indentation device [52,53] used a custom loading device consisting of a linear array ultrasound probe in series with a dynamometer (load cell), mounted on a rigid frame (Figure 7).

Figure 7: The ultrasound indentation device and a schematic representation of the procedure followed to create the tissue's force/deformation curve [52].

In addition to its use in experimental analyses of the plantar soft tissue, the indentation device has commonly been used to parameterise the mathematical and finite element (FE) models that govern the behaviour of soft tissue during loading; this is discussed in the next section under mechanical behaviour models. Ultrasound strain elastography has been used for the assessment of plantar soft tissue stiffness in patients with diabetic neuropathy and was recognised to have potential for diagnosing tissue mechanical malfunction in a clinical setting [54].

Plantar soft tissue stiffness and measurement methods

The mechanical properties of the plantar soft tissue show a high level of dependency on the measurement method [40,55]. For example, it was reported that the stiffness of the human heel pad measured using the in vitro method is almost six times higher than the stiffness measured during in vivo tests, while the absorbed energy ratio is about three times lower using the in vitro method [40]. This can be explained by the indications that structural factors such as heel pad thickness and the geometry of the calcaneus have a significant influence on the heel pad's mechanical behaviour [3,23,56]. Aerts et al. [40] compared the energy dissipation and stiffness of the soft tissue in an amputated leg and in an isolated heel pad. They showed that the whole lower leg, which is involved during in vivo testing, affects the test results in different ways and influences the stiffness and energy dissipation in the in situ test; the presence of the lower leg makes a difference in terms of limiting expansion in some directions and dissipating energy [40]. Furthermore, while indentation seems to provide a more realistic and reliable assessment of the mechanical behaviour of the plantar soft tissue under in vivo conditions, the effects of the indenter's shape (i.e. the probe's head geometry), together with the effects of the calcaneus bone geometry, need to be taken into account when analysing the results of these types of studies.

Changes to mechanical behaviour

The mechanical behaviour of the plantar soft tissue can be changed by ageing, heel pain and other pathologies, some of which are described below.

Effect of ageing on the mechanical behaviour
Hsu et al. [57] compared the average unloaded heel pad thickness in a young group and an elderly group of participants. The average unloaded heel pad thickness was 20.1 (±2.4) mm in the elderly group and 17.6 (±2.0) mm in the young group. These results are in line with the findings of Kwan et al. [58], who showed that the thickness of the soft tissue was higher in the elderly group and that stiffness increases with age [58]. The stiffening of the soft tissue with ageing may reduce the adaptability of the tissue in responding to stress, which may lead to foot disease in elderly people [59]. The shock absorption of the heel pad was determined in two different age groups at two different impact velocities by Kinoshita et al. [60], who reported that the absorbed energy in younger adults is significantly higher than in the elderly [60]. The absorbed energy density depends on the viscoelastic properties of the plantar soft tissue and can be calculated by subtracting the energy return density from the energy input density [53]. The energy dissipation of soft tissue is directly linked to its viscoelastic characteristics; it can therefore be concluded that the viscoelastic properties of the soft tissue alter with age.

Effect of heel pain on mechanical behaviour

Physical activity and repetitive high-impact loading during sports can cause micro-damage in the heel pad and consequently lead to heel pain. Sports that involve running or jumping apply repetitive impact loading to the heel pad, which may cause collapse of the heel pad and ultimately a greater impact force on the calcaneus. Inflammatory oedema may be a sign of changes in the structure of the heel pad that can decrease its shock absorption capability during heel strike [61]. To compare the mechanical behaviour of the heel pad, a parameter known as the heel pad compressibility index (HPCI) was defined as the ratio of the loaded tissue thickness to the unloaded tissue thickness, in percent [46]. This parameter was utilised by Prichasuk et al. [62] to compare normal and painful heels, revealing that the compressibility index increases in patients with heel pain. Tong et al. [50] compared the plantar tissue thickness and HPCI in normal participants, participants with plantar heel pain and participants with diabetes. The results showed that compressibility in patients with diabetes and heel pain was lower than the corresponding value in healthy volunteers. While the findings of Tong et al. [50] contradict those of Prichasuk et al. [62], in another study Ozdemir et al. [63] reported an increase in heel pad thickness and HPCI with ageing and with increasing body weight. This was attributed to the gradual loss of collagen, water content and elastic fibrous tissue of the heel pad as a result of ageing [60], all of which can lead to a change in the viscoelastic behaviour of the heel pad.

Effect of other pathologies on mechanical behaviour

The structural changes in the soft tissue of diabetic patients cause changes in the macroscopic and microscopic behaviour of the plantar soft tissue and make it more vulnerable to mechanical trauma, which can lead to ulceration [54]. Less elastic tissue and an impaired ability to distribute pressure are other changes in diabetic tissue which can lead to a weakened cushioning effect [54]. An increase in energy dissipation during weight bearing is a further factor which can increase the risk of ulceration [54].
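The heel pad compressibility index used in the studies above is a simple ratio of loaded to unloaded tissue thickness; a minimal sketch, with assumed thickness values chosen to lie within the ranges reported earlier in this chapter:

```python
def hpci(loaded_thickness_mm, unloaded_thickness_mm):
    """Heel pad compressibility index: loaded-to-unloaded thickness ratio, in percent."""
    return 100.0 * loaded_thickness_mm / unloaded_thickness_mm

# Assumed illustrative values: an 18 mm unloaded heel pad compressed to 11 mm
print(f"HPCI: {hpci(11.0, 18.0):.0f}%")
```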
A tissue with increased viscosity and decreased elasticity may provide a similar amount of stiffness during loading, but during the swing phase of gait, as the tissue unloads, the increased viscosity may not allow the tissue to return completely to its original thickness. This may cause the next loading cycle to start with a tissue that is partially deformed, which may increase the likelihood of the tissue bottoming out. In addition, the stiffening and thinning of the fat pad generally make the tissue more fragile and more prone to damage compared to healthy tissue [64]. This also applies to the behaviour of the skin in a diabetic foot, which can become less flexible and more brittle as it becomes drier [50]. It was also found that the deformability of the heel pad is lower in participants with diabetes compared to their healthy counterparts [50]. However, Hsu et al. [46] measured the elastic modulus, compressibility index and unloaded tissue thickness in people with diabetes and found no difference from the respective tissue characteristics in healthy participants. Hsu et al. [48] compared the strain and elastic modulus in macrochambers and microchambers of the heel pad in people with diabetes and healthy subjects. Strain in the microchambers of people with diabetes was significantly greater than in healthy subjects [48]. In healthy subjects, the macrochamber strain was significantly greater than in people with diabetes, which can be a result of the uneven distribution of collagen fibrils in a diabetic heel [65]. Furthermore, Hsu and co-workers concluded that heel pad tissue properties are altered heterogeneously in people with diabetes, indicated by increased stiffness in the macrochambers but decreased stiffness in the microchambers, which was attributed to a diminished cushioning capacity in diabetic heels [48]. The energy dissipation or hysteresis (the area between loading and unloading in the force-deformation graph) was also shown to be significantly higher in people with diabetes [38,65]. Furthermore, in people with diabetes the ability of the heel pad to recover its shape after unloading is reduced, which can be linked to the increase in the amount of energy dissipation [62,65]. Chatzistergos et al. [66] investigated the correlation between the mechanical behaviour of the heel pad in type-2 diabetes and blood biochemical parameters such as triglycerides and fasting blood sugar (FBS). A medium-strength positive correlation was found between the stiffness of the heel pad and FBS, and a strong positive correlation was found between triglycerides and stiffness. In addition, a strong negative correlation was found between triglycerides and energy absorption during loading [66]. Chatzistergos and co-workers [66] concluded that people with type-2 diabetes and high levels of triglycerides and FBS are more likely to have stiffer heel pads, which may hinder the tissue's ability to distribute loads evenly and thus make the tissue more vulnerable to trauma and ulceration.

Quantifying mechanical behaviour

As mentioned before, the heel pad provides a cushioning interface between the calcaneus bone and the ground and has a natural shock absorption function. The mechanical properties of the heel pad govern its force-deformation behaviour during heel strike, and these properties therefore affect the loading on the musculoskeletal system [67].
In order to investigate the force-deformation behaviour of the heel pad under in vitro, in situ and in vivo conditions, several mathematical models have been developed and utilised [36,40,42,53,65,68-74]. Additionally, a number of FE analyses have been used to quantify the mechanical behaviour of the heel pad [45,52,75-80].

Mathematical models

A number of studies developed mathematical models to quantify the mechanical behaviour of the heel pad in vitro [35,41], while others utilised mathematical models to describe the in vivo mechanical behaviour of the heel pad [42,44,49,51,53,65,70,81,82]. Most of these studies measured the elastic behaviour of the heel pad, and only a few represented the viscoelastic behaviour of the heel during dynamic loading. Ledoux and Blevins [41] performed in vitro tests on fresh-frozen cadaveric plantar soft tissue and used the following equation to characterise its mechanical behaviour:

σ = A (e^(Bε) − 1) (7)

where A and B are material constants. The elastic modulus and absorbed energy of different plantar sites were calculated, with the subcalcaneal heel pad showing a higher elastic modulus and absorbed energy compared to other sites of the plantar surface. This can be partly related to the fact that the subcalcaneal fat pad is designed to absorb energy during heel strike, while the fat pad underneath the other plantar sites is mainly adapted to provide functions such as even pressure distribution. In another study, Pai and Ledoux [39] used a quasilinear viscoelastic (QLV) model in two approaches, traditional frequency-sensitive and indirect frequency-sensitive:

σ(t) = ∫₀ᵗ G(t − τ) (∂σₑ(ε)/∂ε) (∂ε/∂τ) dτ (8)

where G is the time-dependent relaxation function, τ is the relaxation time, σₑ is the elastic stress, ε is the strain, ∂σₑ(ε)/∂ε is the derivative of the elastic stress with respect to strain and ∂ε/∂τ is the derivative of strain with respect to time. QLV theory normally assumes that the elastic and time-dependent properties can be separated, so that a linear combination of the elastic behaviour and the viscous behaviour can describe the mechanical behaviour of the system. The coefficients of the stress-relaxation response for diabetic and non-diabetic subjects were compared, and significant differences were found in the value of B between diabetic and healthy subjects. However, no significant differences were found in the viscous coefficients between diabetic and non-diabetic specimens. The lack of differences between diabetic and non-diabetic tissue was attributed to changes occurring at the structural level that are not reflected effectively at the material level [38,39]. A number of studies characterised the mechanical behaviour of the heel pad during in situ tests [40,83,84]. Ker [83] employed a nonlinear equation to characterise the force-deformation behaviour of the heel pad; however, the stiffness values determined from this equation depended on the stage of the loading cycle. Although a number of studies utilised in vitro and in situ data for mathematical modelling of the plantar soft tissue, in vivo assessment of the biomechanical behaviour of the plantar soft tissue has been a more common source of data for mathematical models. In order to quantify the in vivo mechanical properties of the heel pad, an exponential function with constants a and b, calculated by fitting the function to the in vivo force-deformation data, was utilised [49]. The parameters were extracted from two groups of subjects, with and without heel pain.
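Coefficients such as A and B in Eq. (7), or a and b in the in vivo fitting function just described, are typically identified by nonlinear least squares. A minimal sketch using synthetic data; the coefficient values and noise level are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_stress(strain, A, B):
    # Exponential stress-strain form as in Eq. (7): sigma = A * (exp(B * eps) - 1)
    return A * (np.exp(B * strain) - 1.0)

# Synthetic "measured" data generated with known coefficients plus noise
rng = np.random.default_rng(0)
eps = np.linspace(0.0, 0.4, 30)
sigma = exp_stress(eps, 15e3, 8.0) + rng.normal(0.0, 2e3, eps.size)

(A_fit, B_fit), _ = curve_fit(exp_stress, eps, sigma, p0=(1e4, 5.0))
print(f"A = {A_fit:.3g} Pa, B = {B_fit:.3g}")
```

The recovered coefficients can then be compared across subject groups, as done in the studies cited above for diabetic versus healthy tissue or heels with and without pain.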
Although the value of b was significantly lower in the group with heel pain compared to the group without heel pain, there was no significant difference in the value of a between the two groups [49]. While the value of a, which represents the slope of the force-deformation graph, is more related to the stiffness, the value of b relates to the rate of change of the stiffness with loading and represents the curvature of the force-deformation graph. Challis et al. [70] used the same formula as proposed by Ledoux and Blevins [41] for modelling the in vivo force-deformation relationship of the heel pad during indentation. They compared the thickness, strain, energy loss and stiffness of the heel pad in cyclists versus runners. Although the heel pad stiffness was found to be significantly lower in runners compared to cyclists, the percentage of absorbed energy was not significantly different. A general formula with separate terms for geometry and material parameters was utilised in a number of other studies in order to introduce coefficients which reflect the material characteristics of the heel pad:

E = F (1 − ν²) / (2 a w K(a/h, ν)) (10)

where E is the elastic modulus of the heel pad, F is the indentation force, w is the indentation depth, a is the radius of the indenter, ν is the Poisson ratio, h is the thickness of the soft tissue and K is a function of the ratio of the radius of the indenter to the thickness of the tissue. Zheng et al. [51] calculated the Young's modulus for different regions of the plantar soft tissue, which ranged from 40 to 50 kPa in healthy subjects, while the values were on average 160% higher for the same sites of the plantar soft tissue in diabetic feet [51]. While there was no indication of how the K value differed between the two groups, limiting the maximum deformation to 10% of the initial thickness of the soft tissue may have influenced the calculated coefficients; as the heel pad shows nonlinear elastic behaviour, the coefficients may be different at higher deformations. Chao et al. [44] used the same formulation (Eq. 10) to compare the elastic modulus between two different age groups. An air-jet indentation system was used along with non-contact optical coherence tomography in four loading-unloading cycles at a 0.4 mm/s deformation rate. It was found that the modulus of elasticity under the second metatarsal head is significantly higher in the older group compared to their younger counterparts. Most studies concentrated on representing the force-deformation behaviour during loading, whereas one of the characteristics of viscoelastic behaviour is a different force-deformation response during loading and unloading. Hsu et al. [65] used a normalised exponential stress-strain function:

σ/σₘₐₓ = (e^(α ε/εₘₐₓ) − 1) / (e^α − 1) (11)

where σ is stress, ε represents the strain, σₘₐₓ is the maximum stress, εₘₐₓ represents the maximum strain and α is the curvature constant, which differs between the loading and unloading graphs. Hsu et al. [65] compared in vivo data from diabetic and healthy subjects, in which the curvature constant was significantly higher in diabetics compared to healthy participants during unloading; however, there was no significant difference in the α value during loading. In addition to the importance of elasticity in the mechanical role of the heel pad, viscosity also plays an important role in dissipating energy and in hysteresis. In order to identify the dissipated energy ratio, a number of studies added a viscous term to the force-deformation formulas.
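Returning to Eq. (10) above, the sketch below shows how an elastic modulus can be extracted from a single indentation measurement; the measurement values and the K value are assumed placeholders (in practice K is obtained from tabulated solutions as a function of a/h and ν):

```python
def indentation_modulus(force, depth, radius, nu, K):
    """Elastic modulus from rigid-indenter data, as in Eq. (10):
    E = F * (1 - nu^2) / (2 * a * w * K), with K = K(a/h, nu)."""
    return force * (1.0 - nu**2) / (2.0 * radius * depth * K)

# Assumed placeholder measurement: 0.8 N at 1 mm depth with a 4.5 mm radius indenter
E = indentation_modulus(force=0.8, depth=1e-3, radius=4.5e-3, nu=0.45, K=1.5)
print(f"E = {E / 1000:.0f} kPa")   # ~47 kPa, of the order reported for healthy plantar sites
```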
One of the in vivo models representing both the elasticity and the viscosity of the heel pad was proposed by Gefen et al. [42]: a Kelvin-Voigt model in which the elastic behaviour was characterised by a linear spring and the viscous behaviour was represented by a nonlinear damper:

σ = Eε + ηεε̇ (12)

where σ represents stress, ε and ε̇ represent the strain and strain rate respectively, E is the elastic modulus and η is the viscosity parameter of the soft tissue. Digital fluoroscopy along with a pressure plate was utilised to measure the pressure and deformation during walking. The proposed model improved the predicted force-deformation behaviour of the soft tissue significantly by adding viscosity; however, the assumption of a linear spring is an oversimplification of the behaviour of the soft tissue, given that its quasi-static behaviour still shows a nonlinear force-deformation response [20,34]. In a more comprehensive approach, Natali et al. [20] developed a constitutive mathematical model for the mechanical behaviour of the heel pad based on in vitro and in vivo tests. They considered the hyperelastic and viscoelastic behaviour of the plantar soft tissue and developed a stress-strain formulation based on the second Piola-Kirchhoff stress tensor, using the in vitro data of Miller-Young et al. [35] and Ledoux and Blevins [41] and the in vivo data of Zheng et al. [85] and Erdemir et al. [45] in order to adapt the formulation to the plantar soft tissue:

S = S∞(C) + Σᵢ qᵢ (13)

where S is the second Piola-Kirchhoff stress tensor, S∞ is the elastic stress when the viscous response is completely relaxed, qᵢ is the viscous stress and C is the right Cauchy-Green strain tensor; k_v and C₁ relate to the initial bulk modulus and shear stiffness, respectively, whereas r and α are hyperelastic coefficients of the soft tissue, γ∞ and γᵢ relate to the stress-strain history and τᵢ represents the relaxation time. On the other hand, Sciume et al. [86] used a mathematical model based on the thermodynamically constrained averaging theory (TCAT) [87], in which the soft tissue was modelled as a porous medium filled by an interstitial fluid. Ultrasound indentation has also been frequently used to inverse-engineer the material coefficients of the heel pad and to measure heel pad stiffness and energy dissipation [20,45,88]. However, only a few studies have used mathematical modelling to characterise both the elastic and the viscous behaviour of the plantar soft tissue. Naemi et al. [53] developed a mathematical model which considered both the elastic and viscous behaviour of the heel pad, modelling the nonlinear behaviour of the heel pad using a nonlinear spring and a nonlinear damper. They argued that during quasi-static tests, in which only the elastic component of the heel pad plays a role in the resistance to compression, strain stiffening can be observed; they therefore used the power function proposed by Scott and Winter [89] to represent the elastic component. This study proposed a method to quantify the force-deformation behaviour of the heel pad under compression in cyclic loading and took into account the nonlinear viscoelastic behaviour of the heel pad. The energy input and energy return densities were derived by integrating the stress over strain during the loading and unloading phases, respectively. By fitting the elastic and viscous energy densities to the data, Naemi et al. [53] showed that the elastic energy density was much higher than the absorbed energy density and that the elastic stress was significantly greater than the viscous stress. The elastic scaling factor and exponent were also 1.9 times and 14 times higher, respectively, than their viscous counterparts.
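To make the role of the viscous term in Eq. (12) concrete, the sketch below simulates one loading-unloading cycle with the linear-spring/nonlinear-damper form above. The values of E and η are invented, so this illustrates the model structure rather than any calibrated heel pad model:

```python
import numpy as np

E, eta = 150e3, 8e3                    # assumed elastic modulus (Pa) and viscosity (Pa*s)
t = np.linspace(0.0, 1.0, 1000)
eps = 0.2 * np.sin(np.pi * t)          # strain: loads to 0.2, then unloads back to 0
deps = np.gradient(eps, t)             # strain rate

sigma = E * eps + eta * eps * deps     # Eq. (12): linear spring + nonlinear damper

# The loading branch carries a higher stress than the unloading branch at the
# same strain, producing a hysteresis loop; the loop area is the dissipated
# energy density, and the purely elastic contribution cancels over the cycle.
w_diss = np.trapz(sigma, eps)          # path integral over the closed cycle, J/m^3
print(f"dissipated energy density: {w_diss:.1f} J/m^3")
```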
Despite these findings, Naemi et al. also reported that the deformation rates at which the tests were performed were much lower than the realistic deformation rates of the heel pad during walking, and they recommended testing the heel pad at more realistic strain rates to achieve realistic coefficients. Although a significant number of studies have utilised mathematical modelling to describe the elastic and viscous behaviour of the soft tissue at the heel, there has been a paucity of studies in which mathematical models were utilised to quantify the viscoelastic characteristics of the plantar soft tissue at the forefoot. Moreover, although Hsu et al. [12,48,65] found differences in the macrochamber and microchamber behaviour of the soft tissue at the heel between healthy and diabetic subjects, no study has investigated the model parameters of the different layers of this soft tissue.

FE models

As shown earlier, in pathological conditions such as diabetes there is an increased interest in investigating the mechanical characteristics of the human plantar soft tissue. In vivo observations indicating that plantar soft tissue properties change as a result of tissue damage [49] or diabetes [12,46,65,90,91] have highlighted the clinical relevance of plantar soft tissue biomechanics. In this context, inverse FE analysis can be employed for the deterministic assessment of plantar soft tissue mechanical properties. FE analysis is a powerful numerical method that enables solving problems with complicated geometry, material properties or loading that cannot be approached using analytical solutions. In its direct application, FE analysis enables the calculation of the mechanical response to loading (e.g. internal stresses/strains) of deformable bodies with known geometry and material properties. In inverse FE analysis, by contrast, the values of the mechanical properties that minimise the difference between the in vivo (experimentally measured) response to loading and the simulated (FE) response are calculated. Two main in vivo material testing techniques have been used to inform inverse FE analyses: indentation [45,52,75-78] and compression [79,80]. In both cases, the plantar soft tissue is compressed between a rigid loading surface and a bony prominence; in the case of indentation, the loading surface is significantly smaller than the plantar surface of the foot. In both cases, the applied force is measured using a load sensor, and the tissue deformation is either inferred from the displacement of the loading surface [75,92] or directly measured using medical imaging techniques such as ultrasound [45,52] or MRI [80]. These measurements enable the calculation of an indentation/compression force-deformation graph that describes the macroscopic mechanical response of the tissue to loading. As a next step, an FE model simulating the same loading scenario is designed and used to numerically calculate the same force-deformation graph. At first, the material coefficients of the tissue are assigned random values (within a predefined range) and the difference between the experimental in vivo graph and the numerical one is calculated. An optimisation algorithm is then used to update the material coefficients of the plantar soft tissue in search of the values that minimise the difference between experiment and FE simulation.
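The inverse FE procedure described above is, at its core, an optimisation loop. The sketch below illustrates the pattern with a closed-form stand-in for the FE solver; a real study would replace `simulate_indentation` with a call to an FE package, and the "measured" data here are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_indentation(coeffs, depths):
    """Stand-in for an FE simulation: returns the force at each indentation
    depth for trial material coefficients (A, B). A real inverse-FE study
    would run a finite element model of the heel here."""
    A, B = coeffs
    return A * (np.exp(B * depths) - 1.0)

# "Measured" in vivo force-deformation data (synthetic, with known coefficients)
depths = np.linspace(0.0, 5e-3, 20)
measured = simulate_indentation((2.0, 400.0), depths)

def misfit(coeffs):
    # Sum of squared differences between experiment and simulation
    return np.sum((simulate_indentation(coeffs, depths) - measured) ** 2)

result = minimize(misfit, x0=(1.0, 200.0), method="Nelder-Mead")
print("identified coefficients:", result.x)
```

A derivative-free method such as Nelder-Mead is used here because, with a real FE solver in the loop, gradients of the misfit are generally not available analytically.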
In this context, Erdemir et al. [45] combined ultrasound indentation with FE modelling to calculate the material properties of the plantar soft tissue on a subject-specific basis using an axisymmetric model of the heel. This technique was used to find the hyperelastic (first-order Ogden) coefficients of the heel pads of two age-matched groups of diabetic and non-diabetic volunteers, without, however, revealing any statistically significant difference between them. In order to improve the level of subject specificity of the model, Chatzistergos et al. [52] developed a 2D plane stress model of the heel from frontal ultrasound images that took into account the subject-specific geometry of the heel pad. This modelling technique was later enhanced with a method for the automated generation of 3D models of the heel pad from ultrasound images [19]. The aforementioned studies [19,52] assumed a bulk plantar soft tissue; in a more comprehensive approach, however, Petre et al. [80] differentiated between three layers of soft tissue: skin, fat pad and muscle. For this purpose, a 3D model of the forefoot was developed based on load-bearing MRI, and an optimisation-based method was used to inverse-engineer the material coefficients for all three layers. The case of a single layer of soft tissue (i.e. bulk plantar soft tissue) was also considered, indicating that simulating different layers affects the value of the calculated peak pressures, although the location where they appear is not affected. At this point, it has to be emphasised that there is a limit to the number of coefficients that can be calculated from inverse FE analysis based on indentation or compression alone. To overcome this limitation, Fontanella et al. [79] combined in vitro data with information gathered from in vivo tests: they utilised the in vitro tests to estimate the values of all twelve coefficients of their visco-hyperelastic material model and then used in vivo compression tests to modify six of the coefficients on a subject-specific basis. Even though FE analyses have shed new light on plantar soft tissue biomechanics [35,45,79,93-97], their actual contribution to the improvement of the diagnosis and treatment of the diabetic foot and other foot-related pathologies is limited [98]. This is mainly attributed to the difficulty of using FE analysis outside the research domain and particularly within the context of clinical practice [98]. Developing reliable subject-specific FE modelling techniques that are easy to use and not computationally demanding remains the key barrier to clinically applicable FE modelling.

Concluding remarks

The mechanical behaviour of the plantar soft tissue shows viscoelasticity, characterised by a reaction force that is affected by both the amount of deformation and the deformation rate. The behaviour of the plantar soft tissue is highly nonlinear and shows a strain-stiffening effect, with the stress-strain graph differing between loading and unloading. In a sense, viscosity causes the reaction force at the same deformation to be higher during loading than during unloading, and the resulting difference (hysteresis) increases with an increase in deformation rate. As a complex structure, the plantar soft tissue's mechanical behaviour shows a high degree of dependency on the method of testing, evidenced by the fact that the mechanical properties of the heel pad extracted from in vivo and in situ tests were observed to be different.
This can be explained, for example, by the indication that structural factors such as heel pad thickness and the geometry of the calcaneus influence heel pad behaviour. Models that represent the viscoelastic behaviour of the heel pad are scarce, and few can fully capture the features of the mechanical behaviour of the plantar soft tissue across different testing scenarios. Despite these limitations, model parameters that describe the viscoelastic behaviour of the tissue under load have the potential to be used to assess the mechanical behaviour of soft tissue, with implications for identifying its malfunction as a result of disease or injury.
Monitoring and Modelling of Soil–plant Interactions: the Joint Use of ERT, Sap Flow and Eddy Covariance Data to Characterize the Volume of an Orange Tree Root Zone

Mass and energy exchanges between soil, plants and atmosphere control a number of key environmental processes involving hydrology, biota and climate. The understanding of these exchanges also plays a critical role for practical purposes, e.g. in precision agriculture. In this paper we present a methodology based on coupling innovative data collection and models in order to obtain quantitative estimates of the key parameters of such a complex flow system. In particular, we propose the use of hydro-geophysical monitoring via "time-lapse" electrical resistivity tomography (ERT) in conjunction with measurements of plant transpiration via sap flow and of evapotranspiration (ET) from eddy covariance (EC). This abundance of data is fed to spatially distributed soil models in order to characterize the distribution of active roots. We conducted experiments in an orange orchard in eastern Sicily (Italy), characterized by the typical Mediterranean semi-arid climate. The subsoil dynamics, particularly influenced by irrigation and root uptake, were characterized mainly by the ERT setup, consisting of 48 buried electrodes on 4 instrumented micro-boreholes (about 1.2 m deep) placed at the corners of a square (with about 1.3 m long sides) surrounding the orange tree, plus 24 mini-electrodes on the surface spaced 0.1 m on a square grid. During the monitoring, we collected repeated ERT and time domain reflectometry (TDR) soil moisture measurements, soil water samples, sap flow measurements from the orange tree and EC data. We conducted a laboratory calibration of the soil electrical properties as a function of moisture content and porewater electrical conductivity. Irrigation, precipitation, sap flow and ET data are available, allowing for knowledge of the long-term forcing conditions on the system. This information was used to calibrate a 1-D Richards' equation model representing the dynamics of the volume monitored via 3-D ERT. Information on the soil hydraulic properties was collected from laboratory and field experiments. The successful results of the calibrated modelling exercise allow for the quantification of the soil volume affected by root water uptake (RWU). This volume is much smaller (with a surface area less than 2 m2, and about 40 cm thick) than expected and assumed in the design of classical drip irrigation schemes, which thereby lose at least half of the irrigated water to zones where it is not taken up by the plants.

Introduction

The system made of soil, vegetation and the adjacent atmosphere is characterized by complex patterns, structures and processes that act on a wide range of timescales and space scales. While the exchange of energy and water between compartments is continuous, the pertinent fluxes are strongly heterogeneous and variable in space and time, and this makes their quantification particularly challenging. Plants are known to impact the terrestrial water cycle and underground water dynamics through evapotranspiration (ET) and root water uptake (RWU). The mechanisms of water flow in the root zone are controlled by soil physics, plant physiology and meteorological factors (S. R. Green et al., 2003).
The translation of plant water use strategies into physically based models of RWU is a crucial issue in eco-hydrology and has fundamental consequences for the understanding and modelling of atmospheric as well as soil processes. Still, no consensus exists on the modelling of this process (Feddes et al., 2001; Raats, 2007). From a conceptual point of view, two main approaches exist today, which differ in the way of predicting the volumetric rate of RWU. A first approach expresses water transport in plants as a chain process based on a resistance law. Coupled with a three-dimensional soil water flow model, this approach leads to fairly accurate RWU models at the plant scale (Doussan et al., 2006; Schneider et al., 2010), also under water stress conditions. The limitations of these models are the cost of characterizing parameters, such as root system architecture and conductance to water flow, and their computational demand. A second approach, mostly used in soil-vegetation-atmosphere transfer models, relies on "macroscopic parameters" and predicts RWU as the product of the potential transpiration rate, a spatially distributed root parameter (e.g. relative root length density), and a stress function depending on soil water potential and a compensatory RWU function (Jarvis, 1989); a numerical sketch of this macroscopic sink term is given below. The major drawback of this approach is the necessity to calibrate the macroscopic parameters, which introduces substantial uncertainties (Musters and Bouten, 2000). Note that the two approaches have some formal links with each other (Couvreur et al., 2012; Javaux et al., 2008).

The complexity of RWU modelling is highly related to the uneven root distribution in the vertical and radial directions (Gong et al., 2006). This variability is partly induced by heterogeneities in the soil and by localized soil compaction caused by both cultivation and irrigation patterns (Jones and Tardieu, 1998), which in turn cause heterogeneous water and nutrient distribution. Consequently, there is a clear need for the development of novel RWU modelling approaches (Couvreur et al., 2012; Feddes et al., 2001; Raats, 2007; Jarvis, 2011), as well as for accurate measurement techniques for soil water content and RWU dynamics.

In particular, soil moisture measurements are of paramount importance to calibrate RWU models. Traditionally, and especially beneath irrigated crops, soil moisture has been determined using methods such as neutron probes, time domain reflectometry (TDR) or capacitance systems. As these traditional techniques are point measurements, they do not provide sufficient information for reliable mass balance assessments; therefore, our understanding of RWU as a spatially distributed system remains fundamentally limited. In this respect the understanding of soil as a spatially heterogeneous system shares fundamental limitations with most of the earth sciences. Therefore, much can be learnt by looking at similar research fields.
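As a sketch of the macroscopic approach described above, the Python fragment below computes a layered RWU sink as the product of potential transpiration, a normalised root-density profile and a Feddes-type stress function. The threshold pressure heads and example numbers are placeholders, not values from any cited study, and compensatory uptake is omitted for brevity.

```python
import numpy as np

def stress_function(h, h1=-0.1, h2=-0.25, h3=-4.0, h4=-80.0):
    """Feddes-type water-stress reduction factor alpha(h) in [0, 1].
    Threshold pressure heads (m) are illustrative placeholders."""
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    rising = (h <= h1) & (h > h2)      # wet end: uptake ramps up
    optimal = (h <= h2) & (h >= h3)    # no water stress
    falling = (h < h3) & (h > h4)      # dry end: uptake ramps down
    alpha[rising] = (h1 - h[rising]) / (h1 - h2)
    alpha[optimal] = 1.0
    alpha[falling] = (h[falling] - h4) / (h3 - h4)
    return alpha

def rwu_sink(t_pot, root_density, h):
    """Uptake per layer: S = Tpot * beta(z) * alpha(h), beta normalised."""
    beta = np.asarray(root_density, dtype=float)
    beta /= beta.sum()
    return t_pot * beta * stress_function(h)

# Example: 4 soil layers, potential transpiration of 3 mm/day.
print(rwu_sink(3.0, [4, 3, 2, 1], [-0.5, -1.0, -5.0, -90.0]))
```

The driest layer in the example contributes nothing to uptake, which is exactly the behaviour that makes the macroscopic parameters hard to calibrate from bulk data alone.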
Geophysical methods have long been established for the imaging of the soil subsurface at a variety of scales, from large-scale mining exploration (e.g. Parasnis, 1973) to the very small scale of soil mapping (e.g. Allred et al., 2008). The past 20 years, in particular, have seen the fast development of techniques that are useful in identifying the structure and dynamics of the near surface, with particular reference to hydrological applications. This realm of research goes under the general name of hydro-geophysics (Binley et al., 2011; Rubin and Hubbard, 2005; Vereecken et al., 2006) and covers a wide range of applications, from flow and transport in aquifers (e.g. Kemna et al., 2002; Perri et al., 2012) to the vadose zone (e.g. Daily et al., 1992), and from catchment (e.g. Weill et al., 2013) and hillslope characterization (Cassiani et al., 2009a) to agriculture and eco-hydrological processes (Boaga et al., 2014; Ursino et al., 2014).

Possibly the most interesting results have been obtained when hydro-geophysical data have been coupled with distributed hydrological model predictions. The degree of integration of data and model ranges from trial-and-error calibration (e.g. Binley et al., 2002) to full data assimilation (e.g. Hinnell et al., 2010), but in all cases the availability of spatially extensive (and time-intensive) data greatly improves the models' capability to identify the relevant governing parameters within narrow ranges, which in turn is of practical interest for hydrological predictions.

Relatively few hydro-geophysical applications, though, have focussed on plant root system characterization (e.g. al Hagrey, 2007; al Hagrey and Petersen, 2011; Javaux et al., 2008; Jayawickreme et al., 2008; Werban et al., 2008), often limiting the analysis to a tentative identification of the main root location and extent. Electrical soil properties are a clear indication of soil moisture content distribution, and electrical and electromagnetic methods have been used to identify the effect of root activity (e.g. Cassiani et al., 2012; Shanahan et al., 2015). In particular, ERT has been used to characterize RWU and root systems (Garré et al., 2011; Michot et al., 2001, 2003; Srayeddin and Doussan, 2009). Amato et al. (2009, 2010) tested the ability of 3-D ERT to quantify root biomass on herbaceous plants. Beff et al. (2013) used 3-D ERT for monitoring soil water content in a maize field during late growing seasons. Boaga et al. (2013) and Cassiani et al. (2015) demonstrated the reliability of the method in apple orchards.

In this paper we aim at applying hydro-geophysical techniques, with a combination of measurements and modelling, to a tree root system. This approach has, to the best of our knowledge, not been presented and analysed before. In particular, we present the application of time-lapse non-invasive 3-D electrical resistivity tomography (ERT) to monitor soil-plant interactions in the root zone of an orange tree located in the Mediterranean semi-arid Sicilian (southern Italy) context. The subsoil dynamics, particularly influenced by irrigation and RWU, have been characterized by the 3-D ERT measurements coupled with plant transpiration through sap flow measurements. The information contained in the ERT measurements in terms of vadose zone water dynamics was exploited by comparing the field results against a 1-D vadose zone model. The specific goals of this paper are:

1. to study the feasibility of small-scale monitoring of root zone processes using time-lapse 3-D ERT;
2. to assess the value of the data above for a quantitative description of hydrological processes at the tens-of-centimetres scale;

3. to interpret these data with the aid of a physical hydrological model, in order to also derive information on the root zone physical structure and its dynamics.

Site description

The experiment was conducted in a 20 hectare orange orchard, planted with about 20-year-old trees (Citrus sinensis; cv Tarocco Ippolito) (Fig. 1). The field is located in Lentini (eastern Sicily; lat. 37°16′ N, long. 14°53′ E) in a Mediterranean semi-arid environment, characterized by an annual average precipitation of around 550 mm, very dry summers and average air temperatures of 7 °C in winter and 28 °C in summer. The site presents conditions of crop homogeneity, flat slope, dominant wind speed direction for footprint analysis and quite a large fetch, which are ideal for micro-meteorological measurements. The planting layout is 4.0 m × 5.5 m and the trees are drip irrigated with 4 in-line drippers per plant, spaced about 1 m apart, with 16 L h−1 of total discharge (4 L h−1 per dripper); the crop is well watered by irrigation supplied every day from May to October, with an irrigation timing of 5 h d−1. The study area has a mean leaf area index (LAI) of about 4 m2 m−2, measured by a LAI-2000 digital analyser (LI-COR, Lincoln, Nebraska, USA). The LAI values are spatially averaged and refer to the ERT measurement period (October 2013); in the specific case of a mature orange orchard, LAI values are fairly constant in time in the region of interest. The mean PAR (photosynthetically active radiation) light interception was 80 % within rows and 50 % between rows; the canopy height (hc) is 3.7 m.

The soil characterization was performed via textural and hydraulic laboratory analyses, according to the USDA standards. The area, covered by mature orange orchards, was divided into regular grids, each having an 18 × 32 m2 area, where undisturbed soil cores (0.05 m in height and 0.05 m in diameter) were collected at the 0-0.05 and 0.05-0.10 m depths, for a total of 32 sampling points and 64 soil samples. The undisturbed soil cores were used to determine the soil bulk density, ρb (Mg m−3), and the initial water content, θi (m3 m−3), i.e. the θ value at the time of the field campaign. A total of 32 disturbed soil samples were also collected at the 0-0.05 m depth to determine the soil textural characteristics, using conventional methods following H2O2 pre-treatment to eliminate organic matter and clay deflocculation using sodium metaphosphate and mechanical agitation (Gee and Bauder, 1986). Three textural fractions according to the USDA standards, i.e. clay (0-2 µm), silt (2-50 µm) and sand (50-2000 µm), were used in the study to characterize the soil (Gee and Bauder, 1986). Most soil textures (i.e. 27 out of 32) were loamy sand and the remaining textures were sandy loam.

An undisturbed soil sample was collected from the surface soil layer (0-0.05 m depth) at each sampling location (sample size, N = 32), using stainless steel cylinders with an inner volume of 10−4 m3, to determine the soil water retention curve. For each sample, the volumetric soil water content at 11 pressure heads, h, was determined by a sandbox (h = 0.01, 0.025, 0.1, 0.32, 0.63, 1.0 m) and a pressure plate apparatus (h = 3, 10, 30, 60, 150 m). For each sample, the parameters of the van Genuchten (1980, vG) model for the water retention curve with the Burdine (1953) condition were determined (Aiello et al., 2014).
Three soil water content profiles have been measured in the field using water content reflectometers (TDR) since 2009. Calibrated Campbell Scientific CS616 water content reflectometers (±2.5 % accuracy) were installed to monitor the changes of volumetric soil water content (θ) every hour. The TDR probe installation was designed to measure soil water content variations with time in the soil volume afferent to each plant. The location of the TDR probes is considered well suited to the specific characteristics of the micro-irrigation systems used in the area and to the main textural features of the soil. For each location the TDR equipment consists of two sensors inserted vertically at 0.20 and 0.45 m depth and two sensors inserted horizontally at 0.35 m depth, with 0.20 m spacing in between. The water content reflectometer consists of two stainless steel rods connected to a printed circuit board. When the probe rods were inserted vertically into the soil surface they gave an indication of the water content in the upper 20-25 cm of soil. The probes installed horizontally to the surface were used to detect the passing of wetting fronts. The data discussed here (see the results section) correspond to the TDR probes located at about 1.5 m from the orange tree we monitored with ERT.

Hourly meteorological data (incoming short-wave solar radiation, air temperature, air humidity, wind speed and rainfall) are acquired by an automatic weather station located about 7 km from the orchard and managed by SIAS (Agrometeorological Service of the Sicilian Region). For the dominant wind directions, the fetch is larger than 550 m; for the other sectors the minimum fetch is 400 m (SE).

Micrometeorological measurements

The experimental site is equipped with eddy covariance (EC) systems mounted on a micrometeorological flux tower (Fig. 1). Continuous energy balance measurements have been taken since 2009. In particular, net radiation (Rn, W m−2) is measured with two CNR 1 Kipp & Zonen (Campbell Scientific Ltd) net radiometers at a height of 8 m. Soil heat flux density (G, W m−2) is measured with three soil heat flux plates (HFP01, Campbell Scientific Ltd) placed horizontally 0.05 m below the soil surface. Three different measurement positions for G were selected: in the trunk row (shaded area), at one-third of the distance to the adjacent row, and at two-thirds of the distance to the adjacent row. The soil heat flux is taken as the mean output of the three soil heat flux plates. Data from the soil heat flux plates are corrected for heat storage in the soil above the plates.

The air temperature and the three wind speed components are measured at two heights, 4 and 8 m, using fine-wire thermocouples (76 µm diameter) and sonic anemometers (Windmaster Pro, Gill Instruments Ltd, at 4 m, and a CSAT, Campbell Sci., at 8 m). A gas analyzer (LI-7500, LI-COR) operating at 10 Hz was installed at 8 m. The raw data are recorded at a frequency of 10 Hz using two synchronized data loggers (CR3000, Campbell Sci.).

The EC measurement system and the data processing followed the guidelines of the standard EUROFLUX rules (Aubinet et al., 2000). A data quality check was applied during post-processing, together with routines to remove common errors: running means for de-trending, three-angle coordinate rotations and de-spiking. Stationarity and surface energy closure were also checked (Kaimal and Finnigan, 1994).
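The core of the EC computation is the covariance of vertical wind and a scalar over each half-hourly block. The sketch below shows only that step, on synthetic 10 Hz series, with linear detrending standing in for the running-mean detrending mentioned above; coordinate rotation, despiking and the full TK2 processing chain are deliberately left out, and the air-density and heat-capacity constants are nominal values, not site measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, minutes = 10, 30                          # 10 Hz data, half-hourly block
n = fs * 60 * minutes
w = rng.normal(0.0, 0.3, n)                   # vertical wind (m/s), synthetic
T = 25.0 + 0.8 * w + rng.normal(0.0, 0.5, n)  # sonic temperature (C), synthetic

# Linear detrending stands in for the running-mean detrending in the text.
t = np.arange(n)
w_p = w - np.polyval(np.polyfit(t, w, 1), t)
T_p = T - np.polyval(np.polyfit(t, T, 1), t)

rho_air, cp = 1.2, 1004.0                     # nominal kg/m3 and J/(kg K)
H = rho_air * cp * np.mean(w_p * T_p)         # sensible heat flux, W/m2
print(f"half-hourly sensible heat flux: {H:.1f} W m-2")
```

Latent heat flux follows the same pattern with the gas analyzer's water vapour density in place of temperature.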
Low-frequency measurements are taken for air temperature and humidity (HMP45C, Vaisala), wind speed and direction (05103, RM Young), and atmospheric pressure (CS106, Campbell Scientific Ltd) at 4, 8 and 10 m. The freely distributed TK2 package (Mauder and Foken, 2004) is used to determine the first- and second-order statistical moments and fluxes on a half-hourly basis, following the protocol used as a comparison reference described in Mauder et al. (2007).

Sap flow measurements

Heat-pulse techniques can be used to measure sap flow in plant stems with minimal disruption to the sap stream (Cohen et al., 1981; Green and Clothier, 1988; Swanson and Whitfield, 1981). The measurements are reliable, use inexpensive technology, provide a good time resolution of sap flow, and are well suited to automatic data collection and storage. Sequential or simultaneous measurements on numerous trees are possible, permitting the estimation of transpiration from whole stands of trees.

Measurements of water consumption at tree level (TSF) have been taken using the HPV (heat pulse velocity) technique, which is based on the measurement of temperature variations (ΔT), produced by a heat pulse of short duration (1-2 s), in two temperature probes installed asymmetrically on either side of a linear heater inserted into the trunk. For the HPV measurements, two 4 cm sap flow probes with four embedded thermocouples each (Tranzflo NZ Ltd., Palmerston North, New Zealand) were inserted in the trunks of trees belonging to the footprint area of the micrometeorological EC tower. The probes were positioned at the north and south sides of the trunk, at 50 cm from the ground, and wired to a data logger (CR1000, Campbell Sci., USA) for heat-pulse control and measurement; the sampling interval was 30 min. The temperature measurements are obtained by means of ultrathin thermocouples that, once the probes are in place, are located at 5, 15, 25 and 45 mm within the trunk.

Data have been processed according to S. Green et al. (2003) to integrate sap flow velocity over the sapwood area and calculate transpiration. In particular, the volume of sap flow (Qstem) in the tree stem is estimated by multiplying the sap flow velocity by the cross-sectional area of the conducting tissue. To this purpose, the fractions of wood (FM = 0.48) and water (FL = 0.33) in the sapwood were determined on the trees where the sap flow probes were installed. Wound-effect correction (Consoli and Papa, 2013; S. Green et al., 2003; Motisi et al., 2012) was done on a per-tree basis. Crop transpiration data have been available at the study site since 2009.
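A rough numerical illustration of the Qstem calculation described above (sap velocity integrated over the conducting cross-section) is given below. The stem radius, velocities and ring geometry are invented for the example; in the actual processing the velocities would already carry the wound-effect and wood/water-fraction corrections of S. Green et al. (2003).

```python
import numpy as np

# Thermocouple depths below the cambium (mm), as in the text.
depths = np.array([5.0, 15.0, 25.0, 45.0])
v_sap = np.array([12.0, 10.0, 7.0, 3.0])   # corrected velocities, cm/h (synthetic)

r_stem = 60.0                              # stem radius, mm (placeholder)
radii = r_stem - depths                    # radius of each measurement ring
# Ring boundaries: stem surface, midpoints between probes, inner cap.
edges = np.concatenate(([r_stem], (radii[:-1] + radii[1:]) / 2.0,
                        [radii[-1] - 10.0]))
areas_cm2 = np.pi * (edges[:-1] ** 2 - edges[1:] ** 2) / 100.0  # mm2 -> cm2

q_stem_l_h = np.sum(v_sap * areas_cm2) / 1000.0   # cm3/h -> litres/h
print(f"Q_stem ~ {q_stem_l_h:.2f} L/h")
```

Because the velocity profile typically decreases towards the pith, most of the flow comes from the outer rings, which is why multiple radial measurement depths are needed.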
Electrical resistivity tomography

The key technique used to monitor the soil moisture content distribution in the volume surrounding the orange tree is ERT (e.g. Binley and Kemna, 2005). In particular, we installed a three-dimensional ERT system, consisting of 48 buried electrodes placed on 4 instrumented micro-boreholes, with 12 electrodes each (see Fig. 2). The electrodes are made of stainless steel wound around a 1 in. PVC pipe and are spaced 10 cm along the pipe (see inset in Fig. 2); thus, the shallowest and the deepest are respectively at 0.1 and 1.2 m below the surface. Each electrode is made of a plate 3 cm wide. The boreholes are placed at the vertices of a square (with 1.3 m wide sides) that has the orange tree at its centre, and were inserted by percussion, with the help of pre-drilling at a smaller diameter, in order to avoid disturbing the electrical flow. The electrical contact is excellent for all 48 buried electrodes, as checked before each measurement. The four boreholes are water-tight and in tight contact with the soil; therefore, they cannot act as pathways for preferential water infiltration. We focused our attention on an area slightly smaller than the square defined by the boreholes, in order to avoid the inevitable disturbance caused by borehole installation (which slightly compacts the surrounding soil). The system is completed by 24 electrodes at the ground surface, placed along a square grid of about 0.21 m per side, covering the 1.3 m × 1.3 m square at the surface (Fig. 3): this setup allows for a homogeneous coverage of the surface of the control volume. The chosen acquisition scheme was a skip-zero dipole-dipole configuration, i.e. a configuration where the current dipoles and potential dipoles are both of minimal size, consisting of neighbouring electrodes, e.g. along the boreholes. This set-up ensures maximal spatial resolution (as good as the electrode spacing, at least close to the electrodes themselves) provided that the signal-to-noise ratio is sufficiently high. The data quality is assessed using a full acquisition of reciprocals to estimate the data error level (see e.g. Binley et al., 1995; Monego et al., 2010). Consistently, we used for the 3-D data inversion an Occam approach as implemented in the R3 software package (Binley, 2014), accounting for the error level estimated from the data themselves. The relevant three-dimensional computational mesh is shown in Fig. 3. At each time step, about 90-95 % of the dipoles survived the 10 % reciprocal error threshold. In order to build a time-consistent data set, only the dipoles surviving this error analysis at all time steps were subsequently used, reducing the number to slightly over 90 % of the total. The absolute inversions were run using the same 10 % error level. Time-lapse inversions were run at a lower error level, equal to 2 % (consistently with the literature, e.g. Cassiani et al., 2006). We conducted repeated ERT measurements using the above apparatus for about 2 days, starting on 2 October 2013 at 11:00 LT and ending the next day at about 16:00 LT. The schedule of the acquisitions and the irrigation times is reported in Table 1. Note that the background ERT survey was acquired on 2 October at 11:00 LT, before the first irrigation period was started, so that all changes caused by irrigation and subsequent ET can be referred to that instant. Note that prior to 2 October 2013, irrigation had been suspended for at least 15 days. Note also that only one dripper, with a flow of about 4 L h−1, is located at the surface of the control volume defined by the ERT set-up (Fig. 3).
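The reciprocal-error screening described above can be illustrated with a few lines of Python: a quadripole is kept only if its normal and reciprocal transfer resistances agree within the 10 % threshold at every time step. The resistance values below are synthetic; the filtering logic is the only point of the sketch.

```python
import numpy as np

def reciprocal_filter(r_normal, r_reciprocal, threshold=0.10):
    """Keep quadripoles whose normal and reciprocal transfer
    resistances agree within `threshold` (10 % in the text)."""
    r_n = np.asarray(r_normal, dtype=float)
    r_r = np.asarray(r_reciprocal, dtype=float)
    err = np.abs(r_n - r_r) / (0.5 * np.abs(r_n + r_r))
    return err < threshold, err

# Time-consistent data set: keep only dipoles that pass at every time step.
keep_t1, _ = reciprocal_filter([10.0, 5.2, 8.1], [10.4, 4.0, 8.0])
keep_t2, _ = reciprocal_filter([10.1, 5.0, 8.3], [9.9, 5.1, 8.2])
print(keep_t1 & keep_t2)   # dipole 2 fails at t1, so it is dropped throughout
```

Intersecting the pass sets across all time steps is what yields a single, stable measurement scheme for the time-lapse inversion.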
Results and discussion

The paper presents results derived from both short-term (2 days) and long-term monitoring. The micrometeorological data set (including the measurements of the energy balance components) and the sap flow data have been available since 2009. ERT measurements were carried out only during a 2-day period, but the state of the system at the time of the ERT measurements clearly depends on the past forcing acting on the system. In order to fully exploit the information content of this data set, we aimed at comparing data against simulations, as much as possible in a quantitative manner.

The ERT monitoring as described in Table 1 produced two clear results:

1. The initial conditions (11:00 LT of 2 October, before irrigation started) around the tree show a very clear difference in electrical resistivity in the top 40 cm of soil with respect to the rest of the volume (Fig. 4). Specifically, the resistivity of the top layer ranges around 40-50 Ohm m, while the lower part of the profile is about 1 order of magnitude more conductive (about 5 Ohm m). As no apparent lithological difference is present at 40 cm depth (see also the laboratory results below), we attributed this difference to a marked difference in soil moisture content. This was confirmed by all the following evidence (see below).

2. The resistivity changes as a function of time, during the two irrigation periods, during the night interval, and afterwards, all show essentially the same pattern, with relatively small (but still clearly measurable) changes (Fig. 5). Two zones are identifiable: (a) a shallow zone (top 10-20 cm) where resistivity decreases with respect to the initial condition, and (b) a deeper zone (20-40 cm) where resistivity increases.

Qualitatively, both pieces of evidence can be easily explained in terms of water dynamics governed by precipitation, irrigation and RWU. Specifically, the shallower high-resistivity zone in Fig. 4 can be correlated to a dry region where RWU manages to keep soil moisture content at minimal values, as an effect of the strong transpiration drive over the entire summer. The dynamics in Fig. 5, albeit small compared to the initial root uptake signal in Fig. 4, still confirm that the top 40 cm is home to strong root activity, to the point that irrigation cannot raise the electrical conductivity of the shallow zone (10-20 cm) by more than some 20 %, and the roots manage to make the soil even drier (with a resistivity increase of some 10 %) in the 20-40 cm depth layer (Fig. 5).
Note that, in general, resistivity changes of the type observed here cannot be uniquely associated with soil moisture content changes, as porewater conductivity may play a key role (e.g. Boaga et al., 2013; Ursino et al., 2014). However, in the particular case at hand, care was taken to analyse the electrical conductivity of both the water used for irrigation and the porewater, purposely extracted at about 50 cm depth. Both have an electrical conductivity of about 1300 µS cm−1 (thus fairly high, a fact that explains the overall small soil resistivity observed at the site). Therefore, in this particular case we can exclude porewater conductivity effects in the observed dynamics of the system. Once again, it must be stressed that this is the exception rather than the rule. A laboratory-based method was adopted for obtaining "unaltered" soil porewater through a column displacement technique (Knight et al., 1998). In particular, Rhizon soil moisture samplers (Cabrera, 1998) were used; they represent one of the latest developments in tension samplers, where a suction has to be applied to withdraw porewater with a vacuum tube (Tye et al., 2003).

The qualitative evidence above is, however, not very surprising and not particularly informative: the root activity dries the soil, and this is not a discovery. Things become more interesting if we can translate the ERT data into quantitative estimates of soil moisture content, and if we can use these data to calibrate hydrological models of the root zone.

To this end, we tested Bulgherano soil samples in the laboratory to obtain a suitable constitutive relationship linking moisture content and resistivity, given the known porewater conductivity, which was reproduced in the water used in the laboratory. All measurements were conducted using cylindrical Plexiglas cells equipped with a four-electrode configuration designed to allow for sample saturation and desaturation with no sample disturbance, using an air injection apparatus at one end and a ceramic plate at the opposite end. The air entry pressure of the ceramic is 1 bar; thus, during all the experiments the plate remained fully water saturated, while allowing water outflow during de-saturation. At each de-saturation step, the electrical conductivity of the sample was measured under temperature-controlled conditions using a ZEL-SIP04 impedance meter (Zimmermann et al., 2008). A complete description of the set-up is given by Cassiani et al. (2009b).

Figure 6 shows two example experimental results on samples from two different depths. Note how, over a wide range of soil moisture content (roughly from 5 % to saturation), the two curves in Fig. 6 lie practically on top of each other. The same applies to all tested samples. Note also that, even though some samples show the effect of the conductivity of the solid phase (through its clay fraction) at small saturation (see the sample from 0.4 m in Fig. 6), the effect is small, as it appears only at soil moisture contents smaller than 3-4 %.
Therefore, we deemed it unnecessary to resort to constitutive laws that represent this solid-phase effect, such as Waxman and Smits (1968), which has been used for similar purposes elsewhere (e.g. Cassiani et al., 2012), and we adopted the simpler formulation of Archie (1942). Consequently, we translated resistivity into moisture content using a relationship of Archie form, calibrated on the laboratory data using water having the above-mentioned electrical conductivity (Eq. 1); in Eq. (1), θ is the volumetric soil moisture content (dimensionless) and ρ is the electrical resistivity (in Ohm m). Equation (1) allows a direct translation of the 3-D resistivity distribution into a corresponding distribution of volumetric soil moisture content. However, it has long been established that inverted geophysical data may be accompanied by enough distortion of the true physical parameter field (Day-Lewis et al., 2005) to induce violations of elementary physical principles, such as mass balance during tracer test monitoring experiments (e.g. Singha and Gorelick, 2005). This may cause substantial problems, particularly when the use of data is expected to shift from qualitative interpretation to quantitative use in terms of data assimilation into hydrological models. For this reason, coupled versus uncoupled approaches have been proposed and discussed (Hinnell et al., 2010), even though their relative merits seem to depend on the specific problem, as the information content of the data even in a traditional, inverted approach may be sufficient (Camporese et al., 2011, 2014). Indeed, the geometry we are considering here is very effective for reconstructing the mass balance of irrigated water, as this comes as a quasi-one-dimensional infiltration front from the top, where, in addition, electrodes are located. The geometry is similar to the one used by, e.g., Koestel et al. (2008), where mass balance was verified by comparison against very detailed TDR data collected in a lysimeter. In spite of these considerations, we decided to limit ourselves to analysing the data variation principally as a function of depth, lumping the data horizontally by averaging the estimated moisture content along two-dimensional horizontal planes. Note that the data set may lend itself to more complex analyses, such as the one proposed by Manoli et al. (2014), especially if used in the context of formal data assimilation, but we felt that such an endeavour would exceed the scope of the current paper and deserves an ad hoc treatment. Note also that the ERT field evidence, both in terms of the background (Fig. 4) and the time-lapse evolution (Fig. 5) of moisture content, confirms the hypothesis that, within the control volume, the distribution of water in the soil is largely one-dimensional as a function of depth.
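Since the fitted coefficients of Eq. (1) are not reproduced here, the following sketch shows only the generic Archie (1942) form of such a resistivity-to-moisture conversion. The constants a, m and n are placeholders, not the paper's calibrated values; the porosity and porewater conductivity are taken from the figures quoted in the text.

```python
import numpy as np

def theta_from_rho(rho, a=1.0, m=2.0, n=2.0, phi=0.54, sigma_w=0.13):
    """Archie-type conversion from bulk resistivity (Ohm m) to volumetric
    moisture content. a, m, n are PLACEHOLDER calibration constants, not
    the paper's fitted values; phi = 0.54 (porosity) and sigma_w = 0.13 S/m
    (1300 uS/cm porewater) follow the values quoted in the text."""
    rho_w = 1.0 / sigma_w                        # porewater resistivity, Ohm m
    # Archie: rho = a * rho_w * phi**(-m) * S**(-n)  ->  solve for saturation S.
    s = (a * rho_w * phi ** (-m) / np.asarray(rho, dtype=float)) ** (1.0 / n)
    return phi * np.clip(s, 0.0, 1.0)            # theta = phi * S

# Conductive deep zone (~5 Ohm m) vs resistive dry top layer (~45 Ohm m).
print(theta_from_rho([5.0, 45.0]))
```

With site-calibrated constants, the same one-line inversion is applied voxel by voxel to turn each 3-D resistivity image into a moisture-content image.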
The data, once condensed in this manner, lend themselves more easily to a comparison with the results of infiltration modelling. We implemented a one-dimensional finite element model based on a Richards' equation solver ("Femwater"; details of this classical model are given by Lin et al., 1997), simulating the central square metre of the ERT-monitored control volume, down to a total depth of 2 m (well below the depth of the ERT boreholes), where we assumed that the water table is located (Dirichlet boundary condition). We applied at the top of the soil column a Neumann boundary condition consistent with the flux coming from the irrigation that pertains to the control volume (basically, the water coming from a single dripper). As Femwater is a 3-D simulator, the soil column is also bounded laterally by no-flow conditions, with the exception of the top 40 cm, where we applied laterally a Neumann condition simulating the RWU (see below for details).

We considered only the central part of the ERT-controlled volume (1 m × 1 m), thus excluding the regions too close to the boreholes that, even though benefitting from the best ERT sensitivity, might have been altered from a hydraulic viewpoint by the drilling and installation operations. Correspondingly, we horizontally averaged the ERT data only in this central region.

A very fine vertical discretization (0.01 m) and time stepping (0.01 h) ensure solution stability. The porous medium is homogeneous along the column and parameterized according to the van Genuchten (1980) model. The relevant parameters have been derived independently from laboratory and field measurements, the latter being particularly relevant for the definition of a reliable in situ saturated hydraulic conductivity estimate. The parameters used for the simulations are residual moisture content θr = 0.0; porosity θs = 0.54; α = 0.12 m−1; n = 1.6; and saturated hydraulic conductivity Ks = 0.002 m h−1. We acknowledge that a more complete sensitivity analysis concerning the impact of the individual parameters would be beneficial, but this should be performed in a full Monte Carlo manner in order to exclude identification trade-offs between the van Genuchten parameters, the depth of the water table (known with some uncertainty) and the fluxes from irrigation, precipitation and ET. However, we feel that this endeavour should be conducted also with regard to the effective 3-D spatial distribution of active roots, and it is currently the subject of ongoing research.
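With the parameters quoted above, the retention and conductivity curves used by such a model can be sketched as follows. The Mualem form of the unsaturated conductivity is an assumption here (the text does not state which conductivity model Femwater was run with), so the K values are indicative only.

```python
import numpy as np

# Parameters reported in the text for the 1-D Richards' simulations.
theta_r, theta_s = 0.0, 0.54
alpha, n = 0.12, 1.6            # 1/m, dimensionless
Ks = 0.002                      # m/h
m = 1.0 - 1.0 / n               # Mualem condition (assumed here)

def vg_theta(h):
    """Water content from pressure head h (m; negative = unsaturated)."""
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m) if h < 0.0 else 1.0
    return theta_r + (theta_s - theta_r) * se

def vg_k(h):
    """Unsaturated conductivity, Mualem-van Genuchten form (assumed)."""
    se = (vg_theta(h) - theta_r) / (theta_s - theta_r)
    return Ks * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

for head in (-0.1, -1.0, -10.0):
    print(f"h={head:6.1f} m  theta={vg_theta(head):.3f}  K={float(vg_k(head)):.2e} m/h")
```

The steep drop of K with decreasing head is what makes the dry root zone act almost as a capillary barrier, consistent with the lateral water movement discussed later in the paper.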
The remaining elements of the predictive modelling exercise are the initial and boundary conditions. As we focused our attention primarily on reproducing the state of the system at background conditions, we set the start of the simulation at the beginning of the year (1 January 2013), and we assumed for that time a condition drained to equilibrium. Given the van Genuchten parameters we used and the depth of the water table, this corresponds to a fairly wet initial condition. We verified a posteriori that moving the initial time back by 1 year or more did not alter the predicted results at the date of interest (3 October 2013). The dynamics during the year are sufficient to bring the system to the real, much drier condition in October. The forcing conditions on the system are all known: (a) irrigation is recorded, and only one dripper pertains to the considered square metre; (b) precipitation is measured; (c) sap flow is measured. Direct evaporation from the square metre of soil around the stem is neglected, considering the dense canopy cover and the consequent limited radiation received. Only 1 degree of freedom is left to be calibrated, i.e. the volume from which the roots take up water. The thickness of the active root zone was estimated from the time-lapse observations (Fig. 5) and fixed to the top 0.4 m, after checking that limiting the root uptake to the 0.2-0.4 m zone would produce results inconsistent with the observations in the top 0.2 m. Therefore, only the surface area of the root uptake zone remains to be estimated. We used the predictive model as a tool to identify the extent of this zone, which is of critical interest also for irrigation purposes.

Figure 7 shows the results of the calibration exercise. It is apparent that the total areal extent of the root uptake zone has a dramatic impact on the predicted moisture content profiles, as it scales the amount of water subtracted from the monitored square metre considered in the calibration. Even relatively small changes (±15 %) of the root uptake area produce very different soil moisture profiles. The value that allows a good match of the observed profile is 1.75 m2, while for areas equal to 1.5 m2 and 2 m2 the match is already unsatisfactory, leading respectively to underestimation and overestimation of the moisture content along the profile.

Another important fact apparent from Fig. 7 is that the estimated soil moisture in the shallow zone (roughly down to 0.4 m) is very small, as an effect of RWU. However, this dry zone must have a limited areal extent (1.75 m2, corresponding to a radius of about 0.75 m from the stem of the tree). Indeed, this is indirectly confirmed by the soil moisture evolution measured by TDR.
Figure 8 shows the TDR data from three probes located about 1.5 m from the monitored tree (thus outside our estimated root uptake zone). The signal coming from the irrigation experiment of 2 October 2013 is very apparent, with an increase in moisture content at all three probes, located at different depths. Note that before this experiment the system had been left without irrigation for about a fortnight. The corresponding effect on the TDR data is apparent: all three probes show a decline of moisture content during the day, with pauses overnight. The decline is more pronounced in the 0.35 m TDR probe, which lies at a depth we estimated to be nearly at the bottom of the RWU zone, and less pronounced above (0.2 m) and below (0.45 m). Note also that the TDR probes are close to another dripper, lying outside of the ERT-controlled volume (the drippers are spaced 1 m along the orange tree line, with the trees about 4 m from each other); thus, they directly reflect the infiltration from that dripper. However, at all three depths the moisture content is much higher than measured in the ERT-controlled block closer to the tree. This can be explained by the fact that in that region the root uptake is minimal or totally absent, while the decline of moisture content in time may well be an effect of water being drawn towards the root zone by lateral movement, induced by the very strong capillary forces exerted by the dry fine-grained soil in the active root zone closer to the tree. In order to clarify the impact of these results on our understanding of the system, we show the location of the trees, of the TDR probes and of the drippers in Fig. 9, where we also sketch the best estimate for the areal extent of the RWU zone. This figure clearly highlights how critical the information provided by ERT actually is. The scale at which RWU takes place is smaller (metre scale) than expected and often assumed when it comes to designing and implementing a field monitoring system. This has dramatic consequences in terms of how reliable conclusions can be drawn if such small-scale processes are neglected. Consider, e.g., what type of conclusions could be drawn on the basis of the TDR data alone (Fig. 8) in light of the field situation as depicted in Fig. 9. The single most important message to be conveyed by this paper is a warning to be particularly attentive to small-scale processes in soil-plant-atmosphere interactions, even in regular agricultural landscapes.

Conclusions

Near-surface geophysics is strongly affected by both static and dynamic soil/subsoil characteristics. This fact, if properly recognized, is potentially full of information on the soil/subsoil structure and behaviour. The information is maximized if geophysical data are collected in time-lapse mode.
In the case of interactions with vegetation, the role of the vegetation should be properly modelled, and such models can be constrained (also) by means of geophysical data. This case study demonstrates that 3-D ERT is capable of characterizing the pathways of water distribution and provides spatial information on root zone suction regions. The integration of modelling and data has proven, once again, to be a key component of this type of hydro-geophysical study, allowing us to draw quantitative results of practical interest. In this case we had available a wealth of quantitative information about transpiration and soil moisture content that allowed the definition of the volume of soil affected by RWU activity. This has obvious consequences for the possible improvement of irrigation strategies, as it is apparent that the monitored orange tree essentially draws water from one to two drippers out of the four in total that should pertain to its area in the plantation. This means that it is very likely that half of the irrigated water is lost to deeper layers and brings no contribution to the plants. More advanced uses of this type of data are now being considered, especially linking soil moisture distribution with plant physiological response and active root distribution in the soil. In the long run, studies of this type may give a fundamental contribution to our understanding of soil-plant-atmosphere interactions, also in view of the challenges posed by climatic change.

Figure 1. Bulgherano experimental site: the eddy covariance (EC) tower and a heat-pulse (HP) sap flow installation on an orange tree.

Figure 2. 3-D ERT apparatus installed around one orange tree. The system is composed of four micro-boreholes carrying 12 electrodes each (see inset) and 24 surface electrodes; see text and Fig. 3 for geometry details.

Figure 3. Electrode geometry around the orange tree and 3-D mesh used for ERT inversion.

Figure 4. Cross sections of the ERT cube corresponding to the background acquisition of 2 October 2013, 11:00 LT. Note the very strong difference in electrical resistivity between the top 40 cm (above 50 Ohm m) and the rest of the domain. The resistivity distribution is essentially one-dimensional with depth, with very limited horizontal variations.
Figure 5. (a) Time series of sap flow (black line) and EC-derived total evapotranspiration (blue lines), both normalized in millimetres assuming an area of 20 m2 pertaining to the orange tree monitored with ERT. Time is given in hours from midnight of 2 October. The two irrigation periods are shown by the blue bars. (b) 3-D ERT images of resistivity change with respect to background at two selected time instants shown by the arrows in (a); the volumes corresponding to increases and decreases of resistivity above and below certain thresholds (80 and 110 %) are shown in separate panels for clarity.

Figure 6. Experimental relationships between resistivity and moisture content determined in the lab on samples taken at two different depths at the Bulgherano site, using water having the same electrical conductivity measured in the porewater in situ.

Figure 7. Results of 1-D Richards' equation simulations of the entire year 2013 up to 3 October, 11:00 LT, i.e. in correspondence with the background ERT acquisition (the thick black line represents the estimated moisture content profile obtained from horizontally averaging the central square metre of the ERT control volume). The different simulated curves correspond to different assumed areas of root water uptake (RWU), and show how 1.75 m2 is the area that allows one to match the observed profile with good accuracy. Note also the high sensitivity of the results to the estimated root uptake area.

Figure 8. Moisture content time series from three TDR probes located about 1.5 m from the ERT-monitored tree. The signal coming from the irrigation experiment of 2 October 2013 is very clear. Before this experiment the system had been left without irrigation for about a fortnight.

Figure 9. Scheme of the experimental field with the location of the main sensors. The radius of the root water uptake (RWU) zone, assumed to be circular, is equal to about 0.75 m.

Table 1. Times of acquisitions and irrigation schedule. Columns: acquisition no., starting time (LT), ending time (LT), irrigation schedule, date.
Engineering Properties and Compressibility Behavior of Tropical Peat Soil

This paper presents the engineering properties and compressibility behavior of various types of tropical peat soil collected from several locations in Malaysia. These soils represented fibric, hemic and sapric types of tropical peat with organic content ranging from 70% to 90%. Close correlations have been found between the various basic engineering properties of the tropical peat soils, and new equations have been established. Loss on ignition (organic content) appears to be a very useful parameter for peat: it correlates well with the natural water content, liquid limit, density and specific gravity. The compressibility behavior of the various types of peat soil was measured using the Rowe Cell consolidation test for accuracy and the conventional oedometer test for comparison purposes. The compression index Cc and the coefficient of secondary compression Cα were identified as the two crucial parameters for estimating settlements in peat soil. A parametric study was carried out at the end of the study to foresee the effect of surcharge on fibric, hemic and sapric peat ground. Based on the results obtained, fibric peat recorded the highest settlement, followed by hemic and sapric peat, with increasing consolidation pressure.

INTRODUCTION

Peat is an organic soil which consists of more than 70% organic matter. Peat deposits are found where conditions are favorable for their formation. In Malaysia, some 3 million hectares of land are covered with peat. Peat poses serious problems in construction due to its long-term consolidation settlement, even when subjected to a moderate load [1]. Hence, peat is considered unsuitable for supporting foundations in its natural state. Various construction techniques have been carried out to support embankments over peat deposits without risking bearing failures, but the settlement of these embankments remains excessively large and continues for many years. Besides settlement, stability problems during construction, such as localized bearing failures and slip failures, need to be considered.

Experimental Design and Laboratory Work: The main objective of this study is to examine the peculiar engineering behavior of tropical peat with respect to its compressibility characteristics due to variations in fiber content and organic content. Meanwhile, index properties such as the natural water content, organic content, liquid limit, specific gravity and density of the various types of tropical peat were obtained to establish suitable correlations. Understanding the engineering properties and compressibility characteristics of peat will assist engineers in determining a suitable ground improvement method. Thus, a proper construction and foundation design guide for the various types of peat could be outlined for future developments on peat soils.

Sample preparation: Undisturbed samples of peat soil were taken from three different locations on the West coast of Peninsular Malaysia using a sampling tube. First, the undisturbed soil samples were transferred directly from the sampling tube into the ring. A suitable auger was designed and fabricated to collect undisturbed peat samples, as shown in Fig. 1. The auger enables the extraction of a peat core sample of 150 mm diameter by 230 mm length. The top and bottom of the specimen were trimmed. Fibrous soil such as peat is easily disturbed; therefore the trimming process was carried out carefully.
Furthermore, the trimming process was carried out quickly to minimize any change in the water content of the soil sample. The samples were then tested using the Rowe Cell, which overcomes most of the disadvantages of the conventional oedometer apparatus when performing consolidation tests on low-permeability soils, including non-uniform deposits. Its most important features are the ability to control drainage and to measure pore water pressure during the course of the consolidation tests. Fig. 2 shows the experimental set-up of the Rowe Cell. The size of the cell used in this research was 150 mm in diameter and 50 mm in height. The hydraulic loading system allows the vertical load to be applied to the sample surface either via a flexible diaphragm, to give a uniformly distributed pressure (free strain), or via a rigid plate, to give uniform settlement (equal strain). For comparison purposes, the conventional oedometer test was carried out as well. The size of the ring used was 50 mm in diameter and 20 mm in height. Load increments were applied at 20, 40, 80, 160 and 320 kN/m2. Additional load was placed on the soil specimen to determine the soil behavior at higher pressure. Each load increment was maintained for 24 hours. A back pressure of 200 kN/m2 was maintained throughout the test.

Testing programs: Index properties of peat soil used in the classification of peat, namely the water content, organic content, specific gravity, fiber content, degree of humification and Atterberg limits, were determined based on test procedures according to British Standard BS1377: 1990, 'Methods of test for soils for civil engineering purposes'. Apart from the classification tests, the compressibility behavior of the peat soil was determined by the conventional oedometer test and the Rowe Cell consolidation test.

RESULTS AND DISCUSSION

One of the main objectives of this study is to find the relationship between the basic geotechnical properties of peat soil and index parameters such as natural water content, organic content and liquid limit. It must also be appreciated that, compared with soils of mineral origin, peat soils, in particular those of tropical genesis, have only recently been given attention. As such, even the determination methods for some of the basic properties are still being researched. In some cases no consensus has been reached, either with respect to the methods or to the details of any given method. However, for ease of comparison, the apparently most commonly used methods of determination of soil basic properties are used in this study.

Soil description: The peat samples were obtained from marine and continental deposits on the West coast of Peninsular Malaysia. They consist primarily of low-plasticity fines with some fine to medium sands, and are dark brown in color. Based on the results of the characterization tests performed, the samples span three different types of peat according to the USDA classification system and the von Post scale. Soil characterization tests were performed on each soil sample in accordance with accepted BS 1377: 1990 and ASTM procedures. The results of the characterization tests are given in Table 1 (index properties of the peat samples). The Atterberg limits were determined on the soil particles passing the 475 µm sieve. As seen in Table 1, there is a fairly significant increase in liquid limit with the increase in natural water content. Huat [2] stated that the natural water content of peat in West Malaysia ranges from 200% to 700%, with organic content in the range of 50% to 95%.
The recorded values in the above table fulfill this statement. Furthermore, the liquid limit was in the range of 200% to 500%, as reported by Huat [2]. Engineering properties such as specific gravity, dry density and bulk density of the samples were within the ranges reported by Huat [2]. The bulk densities of the peat are in the range of 0.8-1. In addition to the basic characterization tests, a comparison of the engineering properties obtained from the laboratory work with theoretical values was made.

Correlations of index properties: As mentioned before, one of the main objectives of the paper is to study the relationship of the basic geotechnical properties of tropical peat soils with some of the easily determined parameters such as natural water content, organic content or liquid limit. Comparison is made with the published correlations for the more established organic soils of temperate genesis. These are described below. Fig. 3 shows the empirical relationship between organic content (OC) and liquid limit (LL) proposed by Skempton and Petley [3] for temperate peat, given in Eq. (1) (LL and OC in percent):

LL = 0.5 + 5.0 OC (1)

However, Eq. (1) does not seem to fit well in the case of tropical peat. For the tropical peat studied in Malaysia, the best-fit line of the samples is given in Eq. (2). The liquid limit of the tropical peat soils was found to range from 150-400%. In general, the liquid limit of peat increases with increasing organic content [4]. Data collected from the tropical peat samples were plotted on the same figure, and both data sets fit close to each other. However, for the identification of tropical peat, a separate equation was formed as Eq. (4).

Specific gravity in peat soils is affected by the organic constituents, and cannot therefore simply be set to somewhere near 2.65-2.75 as for mineral soils. Den Haan [4], for example, quoted cellulose and lignin to have specific gravities of approximately 1.58 and 1.40, respectively. These low values would, as expected, reduce the compounded specific gravity of organic soils. Fig. 5 shows the variation of specific gravity with organic content using the correlations proposed by Kaniraj and Joseph [5] (Eq. (5)), Huat [2] (Eq. (6)), Skempton and Petley [3] (Eq. (7)) and Den Haan [4] (Eq. (8)). The experimental results plotted for the specific gravity of tropical peat fit closely with Eq. (8) proposed by Den Haan [4]; thus an equation was formed for the identification of tropical peat soil.

Coefficient of secondary compression: The Cα values from the Rowe Cell consolidation test for fibric peat were within the range of 0.0608 to 0.0985 for consolidation pressures of 40 kPa to 320 kPa. Values of Cα for hemic peat were recorded as 0.0585 to 0.0959, and for sapric peat as 0.0544 to 0.0939. These Rowe Cell values were higher than those from the conventional oedometer test. The results from the conventional oedometer test are considered far less reliable because no back pressure was induced and pore water pressure was not measured during the course of the test. The coefficient of secondary compression (Cα) from the conventional oedometer test was 0.0374 to 0.0901 for fibric, 0.0225 to 0.0881 for hemic and 0.014 to 0.0851 for sapric peat. According to Mesri [6], soil with Cα values of more than 0.064 is classified as soil with extremely high secondary compressibility.
The coefficient of secondary compression Cα (= Δe/Δlog t) was determined from the slope of the e-log t curves during the period of 4 to 24 hours after load increment, assuming that secondary compression starts 4 hours after loading. Cα increases as the consolidation pressure is increased; a similar trend was observed with the samples tested using the oedometer. Thus, the peat samples used in this research will undergo high secondary settlements with increasing load over time.

Compression ratio: The parameter Cc/(1+e0) is called the compression ratio. According to O'Loughlin and Lehane [7], peat with a compression ratio in the range of 0 to 0.05 is classified as very slightly compressible, followed by slightly compressible for anything between 0.05 and 0.10. Moderately compressible peat lies in the range of 0.10 to 0.20, and very compressible peat has a ratio between 0.20 and 0.35. Based on the compression ratios given in Table 2, all three types of tropical peat, regardless of the type of test used, are classified as very compressible (> 0.20).

Law of compressibility: Mesri and Castro [8] reported that the value of the Cα/Cc law for peat and muskeg lies in the range of 0.05 to 0.07. Based on Table 2, the Cα/Cc value for tropical peat determined from the conventional oedometer test was about 0.027 for fibric, hemic and sapric peat, whereas for the samples tested using the Rowe Cell it was in the range of 0.02 to 0.04. These values are generally not in agreement with the values reported in the literature. Since the Cα/Cc value for tropical peat is lower than the values reported in the literature, less creep settlement develops when tropical peat is loaded. However, this needs to be verified with further research involving long-term consolidation tests.

A parametric study was conducted assuming a normally consolidated peat ground, 5 m deep, with an embankment. Using one-dimensional consolidation theory and the Anglo-Saxon method, ultimate consolidation settlements were predicted as in Table 2 (a simple numerical sketch of this calculation is given after the conclusions). Based on the results obtained, fibric peat recorded the highest settlements, followed by hemic and sapric.

CONCLUSION

The following are the main observations drawn from the index tests and consolidation tests described in this paper:

1. Based on the experimental data obtained in the laboratory, the dry density and specific gravity values of tropical peat correlate well with the equation proposed by Den Haan [4].

2. Coefficient of secondary compression (Cα) values of fibric, hemic and sapric tropical peat are in the range of 0.08 to 0.09. These values are indicative of soil with extremely high secondary compressibility.

3. Compression ratios (Cc/(1+e0)) of fibric, hemic and sapric peat are in the range of 0.2 to 0.4, and the soils are classified as very compressible.

4. The Cα/Cc values of fibric, hemic and sapric tropical peat are in the range of 0.02-0.04, smaller than the 0.05 to 0.07 suggested by Mesri and Castro [8].

5. Tropical fibric peat will exhibit the highest settlements, followed by hemic and sapric, when subjected to load over time.
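As a numerical companion to the parametric study mentioned above (and only as such), the sketch below combines primary consolidation, via the compression ratio Cc/(1+e0), with secondary compression, via Cα = Δe/Δlog t. The void ratio, stresses and time horizon are placeholders rather than values from the paper; the Cc/(1+e0) and Cα inputs follow the ranges quoted in the text.

```python
import numpy as np

def peat_settlement(H, comp_ratio, c_alpha, e0, sigma0, d_sigma,
                    t_years=25.0, t_p_years=1.0):
    """1-D settlement of a normally consolidated peat layer: primary
    consolidation (compression ratio Cc/(1+e0)) plus secondary creep
    (C_alpha = de/dlog t, as defined in the text)."""
    s_primary = comp_ratio * H * np.log10((sigma0 + d_sigma) / sigma0)
    s_secondary = (c_alpha / (1.0 + e0)) * H * np.log10(t_years / t_p_years)
    return s_primary + s_secondary

# Illustrative inputs only: 5 m peat layer under a 40 kPa surcharge.
# comp_ratio and C_alpha follow the ranges quoted above; e0, stresses
# and times are placeholders, not values from the paper.
for name, cr, ca in [("fibric", 0.40, 0.090), ("hemic", 0.30, 0.085),
                     ("sapric", 0.25, 0.080)]:
    s = peat_settlement(H=5.0, comp_ratio=cr, c_alpha=ca, e0=6.0,
                        sigma0=20.0, d_sigma=40.0)
    print(f"{name}: ultimate settlement ~ {s:.2f} m")
```

With these illustrative inputs the fibric case settles the most, matching the ordering reported in the conclusions.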
Hepatic Plasmacytoma With DEL13q14 Positive on Fluorescent In Situ Hybridization (FISH) on Tissue Biopsy

A 40-year-old male presented with abdominal distension and dyspnea. On evaluation, he was found to have a hepatic plasmacytoma without marrow clonal plasma cells. Fluorescent in situ hybridization (FISH) on tissue biopsy revealed myeloma-defining cytogenetics. After treatment with novel agents, the patient had a complete response to therapy.

Introduction

Extramedullary plasmacytoma (EMP) is a rare presenting feature of multiple myeloma. EMP occurs in 7-18% of newly diagnosed myeloma and 20% of relapsed myeloma [1]. It commonly presents in the head and neck region. The cytogenetic abnormalities in multiple myeloma affect the disease evolution from malignancy to clinical presentation, the response to therapy, and the prognosis [2]. Here, we report a rare presentation of multiple myeloma as a hepatic mass.

Case Presentation

A 40-year-old male presented with 1.5 years of fatigue and dyspnea, and six months of abdominal distension associated with significant weight loss. Eight months earlier, the patient had sustained a blunt injury to the abdomen, was diagnosed with hepatic contusion, and underwent surgery for the same. There were no other systemic complaints. On examination, the patient was pale. Abdominal examination revealed a 5x5 cm mass in the right hypochondrium and epigastrium without localizing signs (Table 1). Ultrasonography showed a large heterogeneous hypoechoic lesion in the epigastric region abutting the liver, with a few periportal lymph nodes and a septated cystic lesion at the splenic hilum. Computed tomography showed a large lobulated altered-signal-intensity solid-cystic lesion measuring 122x76x148 mm in the epigastric region, with non-separate visualization of the pancreas and left liver lobe, and the lesion abutting the urinary bladder (Figure 1, panel A). The picture was more in favor of hepatocellular carcinoma. Serum alpha-fetoprotein (AFP) was 2 ng/mL (range: 0-8 ng/mL). The patient underwent a biopsy, which showed sheets of mature plasma cells with abundant dense cytoplasm, perinuclear hof, round eccentric nuclei, clock-face chromatin, and indiscernible nucleoli (Figure 2, panel A) that were immunoreactive for cluster of differentiation (CD)38 (Figure 2, panel B), CD138 (Figure 2, panel C) and MUM1, and negative for pan-cytokeratin, carcinoembryonic antigen (CEA), CD34, CD20, desmin, and synaptophysin. With this clinical picture, plasma cell dyscrasia was considered. The workup for multiple myeloma was performed, which included bone marrow examination, fluorodeoxyglucose positron emission tomography (FDG PET-CT), and a cytogenetic study.

[Figure 1 caption: CT abdomen and MRI brain of the patient. A large lobulated altered-signal-intensity lesion measuring 122x76x148 mm in the epigastric region with non-separate visualization of the pancreas and left liver lobe (A); an active lesion in the right frontal region of the brain with lytic changes in the frontal bone (B). CD: cluster of differentiation.]

The bone marrow aspiration and biopsy showed normocellular marrow with no abnormal plasma cells. The myeloma biochemical report revealed an M spike of 9.6 g/dL, IgG of 1234 mg/dL, a kappa/lambda ratio of 42 (468/11), and beta-2 microglobulin of 11.2 mg/L, and immunofixation showed an IgG kappa band. With this, the diagnosis of hepatic plasmacytoma was confirmed.
The FDG PET-CT showed multiple lesions in the abdomen involving the left lobe of the liver, other lesions in the periportal region abutting the duodenum and at the left hemidiaphragm abutting the spleen, and an active lesion in the right frontal region of the brain with lytic changes in the frontal bone. The latter finding was confirmed with a contrast MRI of the brain, measuring 4.6x2.5x3.7 cm (Figure 1, panel B). The prognosis of myeloma depends on cytogenetics, and since the bone marrow did not show abnormal plasma cells, interphase fluorescent in situ hybridization (FISH) was performed on the liver biopsy and revealed DEL13q14 in 84% of the cells (Figure 3) [2].

He was started on triplet therapy with a proteasome inhibitor (PI) and an immunomodulator (IMiD) (RVd: lenalidomide, bortezomib, and dexamethasone), and after 12 cycles of therapy he attained a stringent complete response (no M spike, normalization of the serum free light chain ratio, negative monoclonal immunofixation, and disappearance of the plasmacytomas).

Discussion

Multiple myeloma is defined by the International Myeloma Working Group (IMWG) as clonal bone marrow plasma cells ≥10% or biopsy-proven plasmacytoma, plus one or more of the following myeloma-defining events: clonal plasma cells ≥60%, involved/uninvolved serum free light chain (SFLC) ratio ≥100, more than one focal lesion >5 mm on MRI studies, or CRAB events (hypercalcemia, renal failure, anemia, and lytic bone lesions) [3]. Isolated EMP is a rare presenting feature of multiple myeloma, and very few cases of hepatic plasmacytoma have been reported [4-7]. The cytogenetic abnormalities in multiple myeloma affect the disease evolution from malignancy to clinical presentation, the response to therapy, and the prognosis. Since myeloma cells have low proliferative capacity, interphase FISH is commonly used to detect cytogenetic abnormalities in myeloma rather than conventional metaphase karyotyping [8].

The challenge that remains in the diagnosis of isolated EMPs is the identification of primary and secondary cytogenetic abnormalities in the tissue, as the number of plasma cells can be lower in a tissue biopsy than among bone marrow clonal plasma cells [9]. A minimum of 50 plasma cells (ideally 100) is required to detect the primary and secondary genetic abnormalities. Interphase FISH is usually performed after sorting purified plasma cells (by CD138 magnetic beads) or labeling cytoplasmic immunoglobulin light chains to increase the cell number. As per the European Myeloma Network (EMN), the cut-off positivity for FISH is 10% for fusion or break-apart probes and 20% for numerical abnormalities [9].

Extramedullary involvement is a poor prognostic factor in multiple myeloma, with high mortality and high relapse risk due to the presence of high-risk cytogenetics. EMP in newly diagnosed myeloma, EMP at relapse, CNS involvement, the absence of high-dose melphalan therapy followed by autologous stem cell transplant (ASCT), and International Staging System (ISS) stages II and III are associated with a poor prognosis [10]. Our patient had a hepatic plasmacytoma with CNS involvement and a DEL13q14 abnormality, which is non-high-risk myeloma cytogenetics, and had a complete response to IMiD- and PI-based therapy.

Conclusions

Isolated multiple EMP without bone marrow (BM) involvement is a rare presenting feature of multiple myeloma.
The prognosis in these cases depends not only on the tumor biology but also on the presence of high-risk cytogenetics and the response to novel agents.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
A recessive homozygous p.Asp92Gly SDHD mutation causes prenatal cardiomyopathy and a severe mitochondrial complex II deficiency

Succinate dehydrogenase (SDH) is a crucial metabolic enzyme complex involved in ATP production, playing roles in both the tricarboxylic acid cycle and the mitochondrial respiratory chain (complex II). Isolated complex II deficiency is one of the rarest oxidative phosphorylation disorders, with mutations described in three structural subunits and one of the assembly factors; just one case is attributed to recessively inherited SDHD mutations. We report the pathological, biochemical, histochemical and molecular genetic investigations of a male neonate who had left ventricular hypertrophy detected on antenatal scan and died on day one of life. Subsequent postmortem examination confirmed hypertrophic cardiomyopathy with left ventricular non-compaction. Biochemical analysis of his skeletal muscle biopsy revealed evidence of a severe isolated complex II deficiency, and candidate gene sequencing revealed a novel homozygous c.275A>G, p.(Asp92Gly) SDHD mutation which was shown to be recessively inherited through segregation studies. The affected amino acid has been reported as a Dutch founder mutation, p.(Asp92Tyr), in families with hereditary head and neck paraganglioma. By introducing both mutations into Saccharomyces cerevisiae, we were able to confirm that the p.(Asp92Gly) mutation causes a more severe oxidative growth phenotype than the p.(Asp92Tyr) mutant, providing functional evidence to support the pathogenicity of the patient's SDHD mutation. This is only the second case of mitochondrial complex II deficiency due to inherited SDHD mutations and highlights the importance of sequencing all SDH genes in patients with biochemical and histochemical evidence of isolated mitochondrial complex II deficiency.

Introduction

Mitochondrial respiratory chain disease arises from defective oxidative phosphorylation (OXPHOS) and represents a common cause of metabolic disease with an estimated prevalence of 1:4300 (Gorman et al. 2015; Skladal et al. 2003). Under aerobic conditions, metabolised glucose, fatty acids and ketones are the OXPHOS substrates, shuttling electrons along the respiratory chain whilst concomitantly creating a proton gradient by actively transporting protons across the mitochondrial membrane. The resultant proton gradient is exploited by ATP synthase to drive ATP production. Under anaerobic conditions, for example where atmospheric oxygen is scarce or during periods of exertion, ATP is produced primarily during glycolysis (Horscroft and Murray 2014). The mitoproteome consists of an estimated 1400 proteins (Pagliarini et al. 2008), including the 13 polypeptides and 24 non-coding tRNA and rRNA genes encoded by the
mitochondria's own genetic material (mtDNA), which is exclusively maternally transmitted. The remaining genes of the mitoproteome are located on either the autosomes or sex chromosomes and as such are transmitted from parent to child in a Mendelian fashion. Defects in a number of mtDNA and nuclear-encoded genes have been linked to human disease, often associated with vast genetic and clinical heterogeneity, further compounded by the few genotype-phenotype correlations which help guide molecular genetic investigations.

Succinate dehydrogenase is a crucial metabolic enzyme complex that is involved in both the Krebs cycle and the mitochondrial respiratory chain. It is composed of two catalytic subunits (the flavoprotein SDHA, and the Fe-S-containing SDHB) anchored to the inner mitochondrial membrane by the SDHC and SDHD subunits. All four subunits and the two known assembly factors are encoded by autosomal genes (SDHA, SDHB, SDHC, SDHD, SDHAF1 and SDHAF2, hereafter referred to as SDHx). Congenital recessive defects involving SDHx genes are associated with diverse clinical presentations, including leukodystrophy and cardiomyopathy (Alston et al. 2012). A recent review describes SDHA mutations as the most common cause of isolated complex II deficiency, with 16 unique mutations reported in 30 patients (Ma et al. 2014; Renkema et al. 2014); the next most common cause is mutation of SDHAF1, with 4 mutations reported in 13 patients (Ghezzi et al. 2009; Ohlenbusch et al. 2012). Just one mitochondrial disease patient is reported to harbour either SDHB (Alston et al. 2012) or SDHD (Jackson et al. 2014) mutations, and metabolic presentations have yet to be reported in association with SDHC or SDHAF2.

In addition to their role in primary respiratory chain disease, SDHx mutations can act as drivers of neoplastic transformation following loss of heterozygosity (LOH). One of the most common causes of head and neck paraganglioma (HNPGL) is LOH at the SDHD locus. These mutations are inherited in a dominant manner with a parent-of-origin effect; typically only paternally inherited SDHD mutations are associated with HNPGL development (Hensen et al. 2011).

Here, we report a neonate who presented prenatally with cardiomyopathy due to a novel homozygous SDHD mutation. This is the second report of recessive SDHD mutations resulting in a primary mitochondrial disease presentation and serves to characterise the biochemical, histochemical and functional consequences of our patient's molecular genetic defect. Moreover, the affected amino acid, p.Asp92, has been reported as a Dutch founder mutation in families with hereditary PGL, albeit the substituted residue differs. We have used the yeast Saccharomyces cerevisiae, which has proven to be a useful model system to study the effects of SDHx gene mutations (Panizza et al. 2013), to provide functional evidence supporting the pathogenicity of the SDHD mutation identified in our patient and, to a lesser extent, that of the PGL-associated p.Asp92Tyr mutation.

Patient and methods

The patient is the third child born to unrelated Irish parents.
Foetal heart abnormalities were identified on an anomaly scan at 31 weeks' gestation, which prompted foetal echocardiography. A normally situated heart with normal systemic and pulmonary venous drainage was reported. Right-to-left shunting was noted at the patent foramen ovale and ductus arteriosus, consistent with gestational age. The left ventricle and left atrium were severely dilated with moderate-severe mitral regurgitation. There was severe left ventricular systolic dysfunction, but no evidence of pericardial effusion or ascites. Rhythm was normal sinus with a foetal heart rate between 100 and 120 beats per minute, and subsequent weekly foetal echocardiograms showed no further progression of cardiac dysfunction or development of hydrops. Cardiac MRI at 32 weeks' gestation showed marked left ventricular hypertrophy and dilation. A clinical diagnosis of dilated cardiomyopathy was considered and the parents were counselled that the prognosis for postpartum survival was poor.

The proband was born by elective caesarean section at 37+6 weeks' gestation with a birth weight of 2620 g (9th-25th centile) and occipital circumference of 34.5 cm (50th-75th centile). He had no dysmorphic features. He was transferred to neonatal intensive care on 100% oxygen to maintain his saturations in the low 90s. An additional heart sound and a loud murmur were noted, along with hepatomegaly (4 cm below the costal margin) but without splenomegaly. By 12 h of age, his condition had deteriorated significantly; echocardiogram showed dilation of the inferior vena cava, hepatic veins and right atrium, and interatrial septal bowing. He had moderate tricuspid regurgitation and very poor biventricular function with non-compaction hypertrophy. The proband died the following evening following withdrawal of life support with parental consent.

At postmortem examination, the heart weighed 43 g (normal = 13.9 ± 5.8 g). The right atrium was particularly enlarged. The endocardium of the right atrium, but particularly also the right ventricle, showed fibroelastosis. The right ventricle was remarkably diminutive and underdeveloped. The right atrium had a 7-mm-diameter patent foramen ovale. There was obvious non-compaction of the hypertrophic left ventricular myocardium (Fig. 1). Cardiac muscle, skeletal muscle and a skin biopsy were referred for laboratory investigations. Informed consent was obtained from the parents for the clinical and laboratory investigations and publication of the results.

Histochemical and biochemical assessment of metabolic function

Histological and histochemical assay of 10 μm serial sections of the patient muscle biopsy was performed according to standard protocols. The measurement of respiratory chain enzyme activities was determined spectrophotometrically as described previously (Kirby et al. 2007). Fibroblast culture and measurement of β-oxidation flux in cultured fibroblasts using [9,10-3H]myristate, [9,10-3H]palmitate and [9,10-3H]oleate were performed as described elsewhere (Manning et al. 1990; Olpin et al. 1992, 1997).

Cytogenetic and molecular genetic investigations

Karyotyping of cultured fibroblasts and DNA extraction from patient muscle were performed according to standard protocols. Primers were designed to amplify each coding exon, plus intron-exon boundaries, of the SDHA, SDHB, SDHC, SDHD, SDHAF1 and SDHAF2 genes.
PCR amplicons were Sanger sequenced using BigDye v3.1 chemistry (Applied Biosystems) and capillary electrophoresed on an ABI3130xl analyser (Applied Biosystems) using standard methodologies. The resultant sequencing chromatograms were compared to the GenBank reference sequences (NM_001042631.2) and SDHAF2 (NM_017841.2). All gene variants were annotated using dbSNP build 138, whilst ESP6500 and 1000 Genomes Project data allowed determination of allele frequencies. Parental DNA samples were screened to investigate allele transmission.

In silico pathogenicity prediction tools and structural modelling

The effect of the p.(Asp92Gly) substitution on SDHD function was predicted using the in silico tools SIFT (Ng and Henikoff 2003), Align GVGD (Tavtigian et al. 2006) and PolyPhen (Adzhubei et al. 2010), all running recommended parameters. To determine whether the tertiary structure of the protein was affected by the mutation, the wild-type (NP_002993) and mutant SDHD protein sequences were input to PSIPRED (Jones 1999) and I-TASSER; I-TASSER output was visualised using UCSF Chimera (Pettersen et al. 2004).

Western blot

Mitochondria-enriched pellets prepared as above were lysed on ice in 50 mM Tris pH 7.5, 130 mM NaCl, 2 mM MgCl2, 1 mM PMSF and 1% NP-40. Protein concentration was calculated using the Bradford method (Bradford 1976). 13 µg of enriched mitochondrial proteins was loaded on a 12% sodium dodecyl sulphate polyacrylamide gel with 1× dissociation buffer, electrophoretically separated and subsequently transferred onto a PVDF membrane. Immunodetection was performed using primary antibodies raised against the complex II subunits SDHA (MitoSciences, MS204) and SDHD (Merck Millipore, ABT110) and a mitochondrial marker protein, porin (Abcam, ab14734). Following secondary antibody application (Dako), detection was undertaken using the ECL Plus chemiluminescence reagent (GE Healthcare Life Sciences, Buckinghamshire, UK) and a ChemiDoc MP imager (Bio-Rad Laboratories).

Yeast strains and media

Yeast strains used in this study were BY4741 (MATa; his3Δ1 leu2Δ0 met15Δ0 ura3Δ0) and its isogenic sdh4::kanMX4 derivative. Cells were cultured in yeast nitrogen base (YNB) medium: 0.67% yeast nitrogen base without amino acids (ForMedium), supplemented with 1 g/l of drop-out powder (Kaiser et al. 1994) containing all amino acids and bases except those required for plasmid maintenance. Various carbon sources (Carlo Erba Reagents) were added at the indicated concentrations. For the respiration and mitochondria extraction experiments, cells were grown to late-log phase in YNB medium supplemented with 0.6% glucose. Media were solidified with 20 g/l agar (ForMedium) and strains were incubated at 28 or 37 °C.

Construction of yeast mutant alleles

The sdh4Asp98Gly and sdh4Asp98Tyr mutant alleles were obtained by site-directed mutagenesis using the overlap extension technique (Ho et al. 1989). In the first set of PCR reactions, the SDH4 region was obtained using the forward primer ESDH4F and the following reverse mutagenic primers: sdh4R98G 5′-CATGACAGAAAAGAAAGAACCAGCTGCAGTGGATAACGGAC-3′ and sdh4R98Y 5′-CATGACAGAAAAGAAAGAGTAAGCTGCAGTGGATAACGGAC-3′, where base changes are indicated in bold in the original publication. The second SDH4 region was obtained using the forward mutagenic primers sdh4F98G and sdh4F98Y, complementary to sdh4R98G and sdh4R98Y, and the reverse primer XSDH4R. The final mutagenized products were obtained using the overlapping PCR fragments as template, with ESDH4F and XSDH4R as external primers.
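As an aside, the overlap-extension logic described above can be illustrated with a short sketch: a point mutation is introduced into the template in silico, and a forward/reverse mutagenic primer pair is derived from a window around the change. The sequences, position and primer names below are hypothetical stand-ins, not the actual SDH4 template or the published primers.

```python
# Illustrative sketch of the overlap-extension mutagenesis idea: a mutagenic
# primer pair carries the desired base change, two first-round PCR products
# overlap at the mutated site, and the fused product is amplified with the
# external primers. All sequences here are hypothetical placeholders.
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return "".join(COMP[b] for b in reversed(seq))

def mutate(template: str, pos: int, new_base: str) -> str:
    """Return the template with a single substitution at 0-based pos."""
    return template[:pos] + new_base + template[pos + 1:]

template = "ATGGCTGACGATTACAAGGGTACCGATTTGA"    # hypothetical SDH4 fragment
mutant = mutate(template, pos=12, new_base="G")  # hypothetical Asp->Gly change

# Mutagenic primer pair centred on the change (forward + reverse complement),
# analogous in spirit to the sdh4F98G / sdh4R98G pair described above.
window = mutant[6:19]
fwd_mut, rev_mut = window, revcomp(window)
print(fwd_mut, rev_mut)
```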
The products were then digested with EcoRI and XbaI and cloned into the EcoRI-XbaI-digested pFL38 centromeric plasmid (Bonneaud et al. 1991). The mutagenized inserts were verified by sequencing, and the pFL38 plasmid-borne SDH4 wild-type and mutant alleles were transformed into BY4741 using the lithium-acetate method (Gietz and Schiestl 2007).

Isolation of mitochondria, enzyme assay and respiration

Oxygen uptake was measured at 28 °C using a Clark-type oxygen electrode in a 1-ml stirred chamber containing 1 ml of air-saturated respiration buffer (0.1 M phthalate-KOH, pH 5.0) and 10 mM glucose (Oxygraph System, Hansatech Instruments, England). The reaction was initiated with the addition of 20 mg wet weight of cells, as previously described. Preparation of mitochondria and the succinate dehydrogenase DCPIP assay were conducted as described. The succinate:decylubiquinone DCPIP reductase assay was conducted as previously described (Jarreta et al. 2000; Oyedotun and Lemire 2001). Protein concentration was determined by the Bradford method using the Bio-Rad protein assay following the manufacturer's instructions (Bradford 1976).

Pathological, histochemical and biochemical analysis

Histopathological examination of the patient's heart revealed non-compaction of the left ventricular myocardium (Fig. 1a, b). Histological investigations reported normal skeletal muscle morphology, whilst histochemical analysis of fresh-frozen muscle biopsy sections revealed a global reduction of succinate dehydrogenase (complex II) activity compared to age-matched control samples (Fig. 1d, e). Spectrophotometric analysis of respiratory chain function in patient muscle homogenate revealed a marked defect in complex II activity (patient 0.042 nmol/min/unit citrate synthase activity; controls 0.145 ± 0.047 nmol/min/unit citrate synthase activity, n = 25), representing ~30% residual enzyme activity; the activities of complex I, complex III and complex IV were all normal (not shown). Fatty acid oxidation flux studies on cultured fibroblasts gave normal results, which excluded virtually all primary defects of long- and medium-chain fatty acid oxidation as the cause of the underlying cardiac pathology. There was no evidence of an underlying aminoacidopathy, and serum urea and electrolytes were within normal limits. Investigations of glucose and lactate levels were not performed. The monolysocardiolipin/cardiolipin (ML/CL) ratio on a postmortem sample was 0.03 and the neutrophil count was within normal limits.

Cytogenetic and molecular genetic investigations

Karyotyping reported a normal 46,XY profile, consistent with no large genomic rearrangements. Following identification of an isolated complex II deficiency, Sanger sequencing of all six SDHx genes was undertaken and a novel homozygous c.275A>G, p.(Asp92Gly) variant was identified in SDHD (ClinVar Reference ID: SCV000196921). Results from parental carrier testing were consistent with an autosomal recessive inheritance pattern, with each parent harbouring a heterozygous c.275A>G, p.(Asp92Gly) SDHD variant (Fig. 2a). The p.Asp92 SDHD residue is highly conserved (Fig. 2d) and the c.275A>G variant is not reported in either ESP6500 or the 1KGP, suggesting that it is rare in the general population. Whilst the c.275A>G, p.(Asp92Gly) SDHD variant has not been previously reported, another mutation affecting the same residue, c.274G>T, p.(Asp92Tyr), has been reported in association with familial PGL and PCC (Hensen et al. 2011).
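The "~30% residual enzyme activity" quoted above follows directly from the reported values; a quick check, using only the numbers in the text:

```python
# Quick check of the residual complex II activity quoted in the text:
# patient 0.042 vs control mean 0.145 (SD 0.047) nmol/min/unit citrate
# synthase, i.e. roughly 30 % residual activity and ~2 SD below controls.
patient, ctrl_mean, ctrl_sd = 0.042, 0.145, 0.047

residual = 100.0 * patient / ctrl_mean        # percentage of control mean
z = (patient - ctrl_mean) / ctrl_sd           # standardised deviation
print(f"residual activity ~ {residual:.0f} %  (z = {z:.1f})")
# -> residual activity ~ 29 %  (z = -2.2)
```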
In silico predictions were strongly supportive of a deleterious effect; 100% sensitivity and 100% specificity were reported by SIFT and PolyPhen for both the p.(Asp92Gly) and p.(Asp92Tyr) variants. Both variants were assigned an Align GVGD class of C65 (highly likely to be detrimental to protein function); the p.(Asp92Gly) variant was reported to have a Grantham difference (GD) value of 93.77, whilst the GD for the p.(Asp92Tyr) variant was 159.94 (GD > 70 is associated with the C55/C65 variant classes). SDHD tertiary structure was not predicted to be markedly impacted by the patient's p.(Asp92Gly) substitution; no gross conformational change was reported by I-TASSER (Fig. 2b), whilst PSIPRED predicted only a mild alteration to the helix structure (Fig. 2c).

Functional effect of the c.275A>G, p.(Asp92Gly) SDHD mutation on protein expression and complex assembly

Having identified an excellent candidate mutation, assessment of respiratory chain complex assembly by one-dimensional BN-PAGE revealed a marked decrease in fully assembled complex II, whilst levels of fully assembled complexes I, III, IV and V were comparable to controls (Fig. 3a). Western blotting of mitochondrial proteins in patient muscle confirmed a significant reduction of the SDHD and SDHA proteins compared to both equally loaded control muscle samples and porin, a mitochondrial marker protein (Fig. 3b).

Functional studies in a yeast model

To further assess the pathogenicity of the patient's novel p.(Asp92Gly) SDHD variant, we performed complementation studies using a strain of S. cerevisiae lacking the SDH4 gene, hereafter referred to as Δsdh4. The SDH4 gene is the yeast orthologue of human SDHD and, although the human and yeast proteins have a low degree of conservation (16% identity and 36% similarity), the p.Asp92 residue is conserved between the two species, corresponding to p.Asp98 in yeast (Fig. 2d). We introduced the change equivalent to the human p.(Asp92Gly) variant into the yeast SDH4 wild-type gene cloned in a centromeric vector, thus obtaining the sdh4-D98G mutant allele. Since another mutation involving the same residue, p.(Asp92Tyr), has been reported as a cause of paraganglioma, a second mutant allele, sdh4-D98Y, was also constructed to compare the phenotypes of the two different amino acid substitutions. The SDH4, sdh4-D98G and sdh4-D98Y constructs and the empty plasmid pFL38 were then transformed into the Δsdh4 strain.

To test the possible effects on mitochondrial function, we first evaluated oxidative growth by spot assay analysis on mineral medium supplemented with either glucose or ethanol, at 28 and 37 °C. A clear growth defect was observed for the Δsdh4/sdh4-D98G strain on ethanol-containing plates incubated both at 28 and 37 °C (Fig. 4a), with growth similar to that of the sdh4 null mutant. Contrariwise, the Δsdh4/sdh4-D98Y strain did not exhibit an OXPHOS-deficient phenotype at either temperature tested (Fig. 4b) or on either oxidative carbon source analysed (not shown). To further investigate the OXPHOS defect, oxygen consumption and SDH activity were measured. The oxygen consumption rate of the Δsdh4/sdh4-D98G mutant was 55% less than that of the parental strain Δsdh4/SDH4 (Fig. 5a); likewise, succinate dehydrogenase enzyme activities (PMS/DCPIP reductase and decylubiquinone reductase) were both severely reduced, with levels similar to those of the null mutant (Fig. 5b).
Consistent with the results obtained from the growth experiments, the oxygen consumption rate of the Δsdh4/sdh4-D98Y mutant was not impaired (Fig. 5a), but both SDH activities (PMS/DCPIP reductase and decylubiquinone reductase) were partially reduced (80 and 75% residual activity) in Δsdh4/sdh4-D98Y mitochondria (Fig. 5b). Together, these data support the pathogenicity of our patient's novel p.(Asp92Gly) SDHD variant.

[Fig. 2 caption: a The homozygous c.275A>G, p.(Asp92Gly) SDHD mutation was identified in the proband, with parental DNA screening supporting recessive inheritance. The mutation affects a highly conserved p.Asp92 residue in the SDHD-encoded subunit of succinate dehydrogenase (SDH). b Structural modelling: I-TASSER prediction of control and patient SDHD tertiary structure shows the p.Asp92 residue located within a transmembrane helix domain; the p.Asp92Gly substitution is predicted to have little impact on SDHD tertiary structure. c PSIPRED output predicts minor alterations to two of the SDHD helices from the patient p.(Asp92Gly) and HNPGL p.(Asp92Tyr) substitutions compared to the control sequence; predicted helix residues are shown in pink, and unshaded residues are located in coil domains. d Multiple sequence alignment of this region of the SDHD subunit was performed using ClustalW and confirms that the p.(Asp92Gly) mutation affects an evolutionarily conserved residue (shaded); alignments were manually corrected on the basis of the pairwise alignment obtained with PSI-BLAST.]

[Fig. 3 caption: Investigation of OXPHOS complex activities and protein expression in patient and controls. a BN-PAGE analysis of mitochondria isolated from patient and control muscle homogenates revealed a reduction of assembled complex II in patient muscle with normal assembly of all other OXPHOS complexes. b SDS-PAGE analysis of patient and control proteins probed with antibodies against porin (a loading control) and the SDHA and SDHD subunits of succinate dehydrogenase revealed a stark reduction in SDH steady-state protein levels in patient muscle, consistent with subunit degradation, thereby supporting the pathogenicity of the p.(Asp92Gly) variant.]

[Fig. 4 caption: Oxidative growth phenotype in yeast. The strain BY4741 Δsdh4 was transformed with a pFL38 plasmid carrying either the wild-type SDH4, the empty vector or the mutant alleles sdh4-D98G and sdh4-D98Y. Equal amounts of serially diluted cells from exponentially grown cultures (10^5, 10^4, 10^3, 10^2, 10^1) were spotted onto yeast nitrogen base (YNB) plates supplemented with either 2% glucose or 2% ethanol. Growth was scored after 3-day incubation at 28 °C (a) and 37 °C (b).]

[Fig. 5 caption: a Oxygen consumption rates. Respiration was measured in cells grown in YNB supplemented with 0.6% glucose at 28 °C. The values observed for the sdh4 mutant cells are reported as a percentage of the wild-type SDH4 cell respiratory rate, 40.46 ± 1.54 nmol min-1 mg-1. b Complex II activity. PMS/DCPIP reductase and decylubiquinone reductase activities were measured in mitochondria extracted from cells grown exponentially at 28 °C in YNB supplemented with 0.6% glucose. The values of the sdh4 mutants are expressed as a percentage of the activities obtained in the wild-type strain.]

Discussion

Mitochondrial complex II deficiency is one of the rarest disorders of the OXPHOS system, accounting for between 2 and 8% of mitochondrial disease cases (Ghezzi et al. 2009; Parfait et al. 2000), with only ~45 cases reported in the literature. We report a newborn boy presenting with left ventricular hypertrophy on foetal ultrasound at 32 weeks' gestation who rapidly deteriorated after delivery due to cardiopulmonary insufficiency, dying on day one of life. Postmortem examination confirmed a non-compacted hypertrophic left ventricle, but assessment of monolysocardiolipin and cardiolipin levels excluded a diagnosis of Barth syndrome. Biochemical analysis of his muscle biopsy revealed evidence of a marked isolated complex II deficiency. Sequencing of the genes involved in succinate dehydrogenase structure and assembly was undertaken and revealed a novel homozygous c.275A>G, p.(Asp92Gly) SDHD mutation which was shown to be recessively inherited through segregation studies.

Patients with an isolated complex II deficiency harbour either compound heterozygous or homozygous mutations in an SDH structural or assembly factor gene. The resultant loss of OXPHOS-driven ATP synthesis is associated with clinical presentations including Leigh syndrome, cardiomyopathy and leukodystrophy that often present during infancy, though adult cases are reported (Taylor et al. 1996; Birch-Machin et al. 2000). Complex II deficiency is very rare, perhaps reflecting an incompatibility with life for many cases, and our patient's clinical history, with prenatal cardiomyopathy and rapid deterioration postpartum, supports this hypothesis. Although the published cohort of patients with complex II deficiency is small, mutations which affect the ability of complex II to bind to the mitochondrial membrane are evolving to be the most deleterious. The only other SDHD-deficient patient reported in the literature harboured compound heterozygous variants, one missense and one that extended the protein by three amino acids (Jackson et al. 2014). The clinical presentation of this individual differed from that of our patient, who presented in utero with a cardiomyopathy that was incompatible with life. The previously described case was delivered at term after a normal pregnancy and presented at age 3 months with developmental regression following a viral infection, with progressive neurological deterioration (epileptic seizures, ataxia, dystonia and continuous intractable myoclonic movements), and died at the age of 10 years. The patient described by Jackson et al. also had comparably low levels of SDHD protein on Western blot, with greatly reduced levels of fully assembled complex II; a residual level of complex activity is therefore unlikely to account for the difference in presentation.

We hypothesised that the p.(Asp92Gly) variant might have caused a conformational change, given the location of the conserved acidic p.Asp92 residue at the N-terminus of one of the protein's helical domains. With this in mind, we modelled the predicted impact of the patient's p.(Asp92Gly) SDHD mutation on tertiary structure using in silico methodologies. Contrary to our expectations, neither I-TASSER nor PSIPRED predicted gross tertiary structural anomalies due to the substitution, despite it being situated between two conserved cysteine residues; the pathogenicity is therefore assumed to lie in the nature of the amino acid properties as opposed to consequential protein misfolding. The location of the p.Asp92 residue at the helical N-terminus may explain the discrepancy between the predicted Grantham scores and the functional data obtained following yeast modelling; leucine-tyrosine interactions are reported to act as stabilisers within alpha helices (Padmanabhan and Baldwin 1994), meaning the p.(Asp92Tyr) substitution (with a higher GD score) may therefore be less deleterious than the p.(Asp92Gly) substitution harboured by our patient. Moreover, the location of the substitution may also be important in capping the positive helical dipole, and replacement with a non-polar residue such as glycine would fail to provide the same charge stabilisation. There was slight discordance between the helix predictions from I-TASSER and PSIPRED (Fig. 2b, c), but on closer inspection of the discordant residues, there was low confidence in the predictions.

Mutations in SDHD and other SDHx genes have been implicated not only in primary metabolic dysfunction but also as drivers of neoplastic transformation in various tumour types. There is a wealth of information in the literature describing the involvement of SDHx gene mutations in cases of hereditary and sporadic cancers including head and neck paraganglioma, pheochromocytoma and gastrointestinal stromal tumours (Miettinen and Lasota 2014). In the context of hereditary cancer, each somatic cell harbours one heterozygous germline mutation, either inherited from a parent or occurring de novo. This single loss-of-function allele, alone, is insufficient to cause neoplastic transformation, but if a "second hit" affects the wild-type allele, the loss of SDH activity disrupts ATP production. The inability of SDH to metabolise succinate causes a build-up of substrate, with elevated succinate levels stabilising HIF1α. This in turn creates a pseudo-hypoxic state, prompting a switch to glycolytic respiration consistent with neoplasia (Hanahan and Weinberg 2011; Pollard et al. 2005). The metabolic stalling due to SDH dysfunction also acts to inhibit multiple 2-oxoglutarate-dependent histone and DNA demethylase enzymes, resulting in widespread histone and DNA methylation, further adding to the tumorigenic burden of these already respiratory-deficient cells (Xiao et al. 2012).

To date, mutations reported in the SDHx genes are loss of function, whether acting as tumour suppressors or in metabolic enzymes. The mutation harboured by our patient transcends these fields in that, although manifesting as a primary metabolic condition in our case, the p.Asp92 residue is recognised as a Dutch founder HNPGL mutation, p.(Asp92Tyr). Given the established link between tumorigenesis and this residue, further functional investigations were undertaken to determine whether the p.(Asp92Gly) variant, associated with primary metabolic dysfunction, was as deleterious as, or indeed more so than, the founder HNPGL mutation. Functional investigations were supportive of a deleterious effect: Western blotting of patient muscle homogenates revealed a reduction in the steady-state levels not only of SDHD but also of SDHA. This was supported by one-dimensional BN-PAGE, which confirmed a decrease in fully assembled complex II, consistent with the hypothesis that an inability to anchor the unstable complex within the mitochondrial membrane triggers the recycling of intermediates to prevent aggregation. This turnover is seen in other cases of mitochondrial complex dysfunction and prevents accumulation and aggregation of assembly intermediates and surplus complex subunits (Alston et al. 2012). To assess the pathogenic role of the novel p.(Asp92Gly) SDHD substitution, we carried out a series of experiments in yeast devoid of SDH4, the yeast SDHD orthologue.
The use of ethanol or glucose as a carbon source tested the strains' ability to rely upon either OXPHOS or fermentation for ATP synthesis. The SDHD residue p.Asp92 shows high evolutionary conservation and corresponds to p.Asp98 in yeast. Given that a germline mutation involving the same amino acid has been reported as a cause of paraganglioma [p.(Asp92Tyr)], two mutant alleles, sdh4-D98G and sdh4-D98Y, were constructed to compare the phenotypes associated with the different substitutions. Consistent with the reduction of SDHD steady-state levels and fully assembled complex II found in our patient, the p.(Asp92Gly) mutation was detrimental to both oxidative growth and succinate dehydrogenase activity in yeast. Contrariwise, the p.(Asp98Tyr) HNPGL-associated substitution did not affect oxidative growth and showed a mild, albeit significant, reduction of SDH activity. Altogether, the results obtained in the yeast model provide compelling functional evidence supporting the pathogenic role of the p.(Asp92Gly) mutation and show that this substitution conveys a more severe phenotype than the founder HNPGL SDHD mutation. This finding is not unique, as other PGL-associated SDHD mutations were found to cause a milder phenotype when modelled in yeast (Panizza et al. 2013). Whilst our modelling suggests that the well-characterised p.(Asp92Tyr) PGL mutation is associated with what might be considered a mild phenotype in yeast, the phenotype in question is not a primary metabolic one, and indeed it is only associated with oncogenesis in tandem with a second mutation, which is often a large-scale deletion or other null allele. There were no reports of potential SDHD-associated cancers in the immediate family, although further information from extended family members was unavailable. We previously reported inherited recessive SDHB mutations in association with a paediatric primary mitochondrial phenotype, and that case also lacked a history of hereditary cancer (Alston et al. 2012). It is unclear whether germline carriers of the p.(Asp92Gly) SDHD mutation are at elevated risk of HNPGL; despite no tumours having been reported in the family, it is the opinion of their clinicians that surveillance is advisable, and it is ongoing.

Left ventricular non-compaction (LVNC) is a rare form of cardiomyopathy characterised by abnormal trabeculations in the left ventricle and associated with either ventricular hypertrophy or dilation. In some patients, LVNC arises from a failure to complete the final stage of myocardial morphogenesis, but this is not a satisfactory explanation for all cases, particularly those associated with congenital heart defects or arrhythmias. LVNC is genetically heterogeneous, with many cases remaining genetically undiagnosed, but metabolic derangements are common; this form of cardiomyopathy is typical of Barth syndrome, a disorder of mitochondrial cardiolipin typically accompanied by neutropenia (Chen et al. 2002), and has also been observed in other mitochondrial disorders, including those due to mutations in mtDNA (Pignatelli et al. 2003).

In conclusion, our case further expands the clinical and genetic heterogeneity associated with isolated complex II deficiency and demonstrates that sequencing analysis of all SDH subunits and assembly factors should be undertaken for patients in whom an isolated succinate dehydrogenase defect has been identified.
Improvement method for cervical cancer detection: A comparative analysis

Cervical cancer is a prevalent and deadly cancer that affects women all over the world. It affects about 0.5 million women annually and results in over 0.3 million fatalities. Diagnosis of this cancer was previously done manually, which could result in false positives or negatives, and researchers are still working out how to detect cervical cancer automatically and how to evaluate Pap smear images. Hence, this paper reviews several detection methods from previous research, covering pre-processing, the detection-method framework for nucleus detection, and performance analysis of the selected methods. Four methods, based on techniques reviewed from previous studies, were run through the experimental procedure using Matlab, with the established Herlev dataset. The results show that the highest performance assessment metric values were obtained by Method 1 (thresholding and tracing region boundaries in a binary image), with precision 1.0, sensitivity 98.77%, specificity 98.76%, accuracy 98.77% and PSNR 25.74% for a single type of cell; the average values were precision 0.99, sensitivity 90.71%, specificity 96.55%, accuracy 92.91% and PSNR 16.22%. The experimental results were then compared to existing methods from previous studies, showing that the improved method is able to detect the cell nucleus with higher performance assessment values. On the other hand, the majority of current approaches can be used with either a single or a large number of cervical cancer smear images. This study might persuade other researchers to recognize the value of some of the existing detection techniques and offer a strong approach for developing and implementing new solutions.

Introduction

Cervical cancer is one of the primary causes of gynecologic cancer and one of the most common and dangerous diseases for women, even though it can be treated if detected early. Cervical cancer forms when cells on the cervix grow abnormally. There is a large volume of published studies utilizing the Pap smear test to detect pre-cancer in the uterine cervix [1,2]. This type of cancer also remains one of the major public health challenges in several countries, especially countries with low and middle income, in terms of both financial and logistical issues [3]. Previous studies have reported that this cancer is the fourth most pervasive cancer type, affecting the lives of many people worldwide [4-6]. A large and growing body of literature has investigated the main cause of cervical cancer: long-lasting infection with certain types of human papillomavirus (HPV) passed from one person to another during sex. Non-human papillomavirus-associated adenocarcinomas (NHPVAs) are uncommon uterine cervix tumors with a deceptive appearance [7-9]. HPV will affect at least half of all sexually active persons at some point in their lives, but only a small percentage of women will develop cervical cancer. A Pap smear test is used to detect cervical cancer in most cases and is widely known as a screening procedure for cervical cancer. However, in recent years practitioners have performed this evaluation manually, and the results remain controversial due to questions about the accuracy of the diagnosis in detecting cervical cancer cells.
In addition, the evaluation is done with the naked eye to determine the type of cervical cell, and due to human error this manual screening approach has a high rate of false-positive results [10]. However, far too little attention has been paid to the fact that the occurrence of cervical cancer can be effectively reduced with preventive clinical management strategies, including vaccines and regular screening examinations [11]. It has previously been observed that early diagnosis and classification of cervical lesions greatly boost the chance of successful treatment [12]. The main objective of initial diagnosis and classification of cervical cancer is to reduce the mortality rate [13,14]; this cancer can be successfully treated with earlier detection. The findings from existing research recognize the critical role played by screening tests in reducing the mortality rate caused by cervical cancer.

Over the past years, the Pap smear test has attracted much attention and is best known as a preventive approach used in the current medical field for detecting cervical cancer [15,16]. This test demands a specialized and labor-intensive analysis of cytological preparations to trace potentially malignant cells from both the internal and external cervix surfaces; the cytopathologist screens the microscopic fields for abnormal cells. The use of digital slide cytology imaging to increase cytological diagnosis accuracy could be beneficial. Recent evidence suggests that screening for diseases, including cervical cancer, breast cancer, and colorectal cancer, using cell images from slides has been widely applied in recent years [17-19]. However, poor image quality due to uneven staining, complex backgrounds and overlapping cell clusters poses a great challenge in nuclei segmentation [20]. In addition, biomedical signal processing, which entails analyzing, improving, and presenting images obtained via x-ray, ultrasound, MRI, and other methods, shares the same concepts as biomedical image processing. Image processing is a technique for performing operations on an image to improve it or extract relevant information; it is a form of signal processing that takes an image as input and produces either an image or characteristics/features associated with that image as output.

Most recent attention has focused on image processing for classifying cervical cells. However, accurate classification of Pap smear images is still being improved for better performance; it remains one of the challenging tasks in medical image processing, and its performance can be enhanced by extracting and selecting well-defined features and classifiers [21]. Computer-assisted cervical cancer screening based on automated recognition of cervical cells offers the potential to minimize errors and increase the accuracy of the test compared to manual screening. Traditional approaches rely heavily on cell segmentation accuracy and discriminative hand-crafted feature extraction [22]. The purpose of this paper is to review recent research on automated detection methods available for the classification of cervical cancer.

Review of Study

Numerous studies have suggested that image pre-processing may have a dramatic positive effect on the quality of feature extraction and image analysis results [23-26]. For example, Jahan et al. [27]
demonstrated that pre-processing covers methods such as cleaning, integration, transformation, and reduction. The main goals of data preparation are to reduce data size, establish data correlations, standardize data, remove outliers, and extract features. Before adopting Machine Learning (ML) models, the basic six steps for coping with the intended dataset must be performed in a methodical way: importing the library, importing the dataset, dealing with missing data in the dataset, encoding categorical data, and splitting the dataset into training and test sets [27].

A number of studies have found that image segmentation is a common approach used in various pre-processing image applications; medical imaging, video surveillance, and object detection are some practical applications. The segmentation approach is a method for automatically or semi-automatically extracting the Region of Interest (ROI) from an image [28]. It thus enables the method to work with the image's ROI rather than with pixels on a grid. After that, the Simple Linear Iterative Clustering (SLIC) output advances to the second stage, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, which groups similar superpixels based on their density. DBSCAN produces a clustered image, with each cluster being a nucleus candidate. There are fewer image regions to evaluate at this step, which reduces computing time and prevents a non-nucleus region from being classified as a nucleus. DBSCAN's only input parameter is a threshold, which determines the clustering using a density distance function; a minimal sketch of this pipeline is given below.
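The following is a hedged sketch of the SLIC-to-DBSCAN nucleus-candidate step described above, not the cited authors' implementation: SLIC partitions the smear image into superpixels, simple per-superpixel features (centroid plus mean intensity) are extracted, and DBSCAN groups dense, similar superpixels into nucleus candidates. The file name and the eps/min_samples values are placeholders.

```python
# Sketch of a SLIC -> DBSCAN nucleus-candidate pipeline (illustrative only).
import numpy as np
from skimage import io, color
from skimage.segmentation import slic
from skimage.measure import regionprops
from sklearn.cluster import DBSCAN

rgb = io.imread("pap_smear.png")             # hypothetical input image
gray = color.rgb2gray(rgb)

# Partition the image into superpixels.
labels = slic(rgb, n_segments=400, compactness=10, start_label=1)

# One feature row per superpixel: (row, col, scaled mean intensity).
feats = np.array([[p.centroid[0], p.centroid[1], 255 * p.mean_intensity]
                  for p in regionprops(labels, intensity_image=gray)])

# Group superpixels by density; eps / min_samples are placeholder values.
clusters = DBSCAN(eps=30.0, min_samples=3).fit_predict(feats)
n_candidates = len(set(clusters)) - (1 if -1 in clusters else 0)
print(f"{n_candidates} nucleus-candidate clusters (label -1 = noise)")
```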
In a different study, an Artificial Intelligence Accurate Diagnosis Solution (AIATBS) was developed to improve cervical liquid-based thin-layer cell smear diagnosis according to clinical The Bethesda System (TBS) criteria [29]. The Darknet53 framework was used to coordinate the target detection training, and a YOLOv3 detection model was obtained. Then, XGBoost and a logical decision tree were integrated to optimize the parameters provided by the learning process, creating a full cervical liquid-based cytology smear TBS diagnosis system that includes a quality control solution. The 121 characteristics from the YOLOv3 detection model, the Xception classification model, the patch classification model, and the nucleus segmentation model were fed into an XGBoost model for diagnostic model training, which predicted squamous intraepithelial lesions as positive or negative. A basic XGBoost model for squamous intraepithelial lesion TBS classification was then used to further classify the positive results. The system adapts to diverse standards, staining methods, and scanners when it comes to sample preparation.

An investigation by Xue et al. [30] also points toward the application of Automatic Visual Evaluation (AVE) to predict pre-cancer based on a digital image of the cervix. This approach has been seen as a low-cost means of enhancing human performance. However, taking AVE beyond proof-of-concept and into use as a functional complementary tool in visual screening has several challenges; creating AVE that is robust across images recorded with several devices is one of them. A new deep learning-based clustering approach was used to see whether images taken by three different devices (a standard smartphone, a custom smartphone-based handheld device for cervical imaging, and a clinical colposcope with SLR digital camera-based imaging capability) can be distinguished from one another in terms of visual appearance/content within their respective cervix regions. Two established ImageNet-pretrained networks, ResNet50 and Vgg16, are used in the study; these representative deep learning classification networks, which have attained excellent performance on the ImageNet dataset, are used to extract features, allowing the authors to apply the transfer learning technique. The findings show a need to design a system that reduces the variance between photos acquired from different devices, and emphasize the importance of a vast number of training images from various sources for reliable device-independent AVE performance around the world [30].

In addition, Stacked Denoising Autoencoders (SDAEs) are applied to improve the performance of normal Stacked Autoencoders (SAEs). However, when examining a larger number of input samples, the SDAE's convergence rate takes longer (reported as 2′16, 2′18, and 2′14 s), since each sample must be taken into account. The suggested h6, h8, and h4 systems add a Fine-tuned Stacked Denoising Autoencoder (FSDAE), which denoises using a minibatch of samples rather than the entire data from the supplied input images. The proposed second, GAN-based phase of FSOD augments the collected images with segregated classes, related types, and stages to minimize overfitting in the subsequent detection and classification. Several data augmentation procedures, such as rotation, flip, shift, and zoom, were used to increase the overall quantity of data. Resizing and cropping the input photographs to a width and height of 100 × 100 pixels, as well as recoloring to a grayscale color channel, were used to enhance and pre-process them. The final image set is a matrix, with each row consisting of an image's flattened grayscale pixels [31].

One of the more significant findings to emerge from this review is that pre-processing plays an important role in image processing techniques, specifically for the detection of cervical cancer cells. Although previous researchers have used several techniques, the most common method is augmentation. Augmentation is able to increase the cardinality of the training dataset and avoid overfitting; apart from that, it helps increase the accuracy of the overall convolutional network. Furthermore, the total number of images can be increased with the application of techniques like rotation, flipping, shifting and zooming for data augmentation; a generic version of such a pipeline is sketched below.
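The sketch below shows one plausible way to express the augmentation and pre-processing steps summarised above (rotation, flip, shift, zoom, resizing to 100 × 100 grayscale) as a torchvision pipeline. It is a generic illustration, not the cited authors' implementation, and the parameter values are placeholders.

```python
# Generic data augmentation pipeline (illustrative values only).
from torchvision import transforms

augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # recolor to grayscale
    transforms.Resize((100, 100)),                 # resize to 100x100 pixels
    transforms.RandomRotation(degrees=15),         # rotation
    transforms.RandomHorizontalFlip(p=0.5),        # flip
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),  # shift
                            scale=(0.9, 1.1)),     # zoom
    transforms.ToTensor(),                         # tensor, ready to flatten
])

# Usage: tensor = augment(pil_image)   # pil_image is any PIL smear image
```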
Detection method based on cells/pap smear images

Deep learning is widely investigated in Computer-Aided Diagnostics (CAD) systems for classifying cervical Pap cells. However, deep learning may perform poorly on a multiclass classification task when the data distribution is uneven, as is prevalent in cervical cell datasets. A study by Rahaman et al. [32] addresses these limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF) technique. A hybrid ensemble technique comprising 15 different machine learning algorithms, such as random forest, bagging, rotation forest, and J48, is able to perform better than an individual algorithm. Various pre-trained deep learning models, including VGGNet, ResNet, ResNetV2, InceptionNet, InceptionResNetV2, XceptionNet, DenseNet, and NasNet, have been trained in this study. The results indicate that a combination of VGG16, VGG19, ResNet50, and XceptionNet provides the best results for this task [32].

More recent studies have confirmed that current discussions in biomedical technology relate to methods for detecting cervical cancer. Several studies in the literature address the detection of cervical cancer cells with the objective of aiding pathologists. In 2021, Cao et al. [33] proposed a novel deep learning method named Attention Feature Pyramid Network (AttFPN) for detecting abnormal cervical cells. The AttFPN method consists of two main components: an attention module mimicking the way pathologists read a cervical cytology image, and a multi-scale region-based feature fusion network, guided by clinical knowledge, that fuses the refined features to detect abnormal cervical cells at different scales. The proposed method outperformed related deep learning methods such as Faster R-CNN with Feature Pyramid Network (FPN) and was comparable to pathologists with 10 years of experience on an independent dataset. These findings are consistent with several studies that proposed the Faster R-CNN method for detecting cervical cancer cells [34,35]. In addition, Tang et al. [36] proposed a comparison detector based on a proposal-based detection framework, which typically consists of a backbone network for feature extraction, an RPN for generating proposals, and a head for proposal classification and bounding-box regression. The overall structure of the proposed comparison detector is shown in Fig. 1.

The research by Chen et al. [37] focuses on improving the accuracy of cervical cell classification under resource limitations. A compact and effective model that meets the design requirements of embedded devices is built with a lightweight Convolutional Neural Network (CNN) architecture, yielding a highly efficient model with fewer parameters and calculations. The basic steps of the proposed method are as follows: (1) prepare the image samples by pre-processing the datasets; (2) use transfer learning to train different teacher models on the target dataset, first downloading already trained CNNs and then fine-tuning them to the target dataset; (3) determine the final teacher model based on the training results; (4) obtain the soft labels from the final teacher model; (5) train the lightweight student CNN models on the target dataset using dark knowledge loss, cross-entropy loss, and both soft and hard labels; and (6) test the lightweight models on the target dataset using only the traditional cross-entropy loss and hard labels [37].
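A minimal sketch of the teacher-student ("dark knowledge") loss underlying such distillation is given below, assuming a TensorFlow setting. The temperature T and mixing weight alpha are illustrative assumptions; the exact loss formulation of Chen et al. may differ.

```python
# Hedged sketch of a knowledge-distillation loss: hard-label
# cross-entropy mixed with temperature-softened teacher "dark knowledge".
import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits,
                      temperature=4.0, alpha=0.5):
    # Hard-label term: ordinary cross-entropy against ground truth.
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)

    # Soft-label term: cross-entropy between temperature-softened
    # teacher and student distributions (equal to the KL divergence up
    # to a constant); scaled by T^2 to balance gradient magnitudes.
    t_soft = tf.nn.softmax(teacher_logits / temperature)
    s_logsoft = tf.nn.log_softmax(student_logits / temperature)
    soft = -tf.reduce_sum(t_soft * s_logsoft, axis=-1) * temperature ** 2

    return alpha * hard + (1.0 - alpha) * soft
```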
A Multi-Task Network (MTN) is one of the methods proposed in a study based on Y-Net's architecture; it performs two tasks, nuclear segmentation and classification. The segmentation component of the network has an encoder-decoder structure. The fundamental feature-extraction operations in the encoder are handled by Efficient Spatial Pyramid (ESP) modules. The decoder receives the encoder's final feature representation and constructs a nuclear mask with the same spatial resolution as the input using up-sampling and Pyramid Spatial Pooling (PSP) modules. Information can be shared between the encoder and the decoder by concatenating skip links from the encoder to the decoder. The diagnostic component of the MTN is made up of further ESP modules, which lead to a global average pooling module and two fully connected layers. A single convolution performs the down-sampling operations, halving the spatial resolution of the feature maps, and bilinear interpolation is used for up-sampling. After each down-sampling, up-sampling, ESP, and PSP module, batch normalization and ReLU activation are applied. The modules that make up the MTN and the learning via proxy labels are described further in [38].

In a different study, Diniz et al. [39] presented an efficient ensemble to classify the segmented regions (nucleus candidates) returned from the pre-processing phase. The ensemble consists of a Decision Tree (DT), Nearest Centroid (NC), and k-Nearest Neighbors (k-NN). The findings of the study show that this ensemble method achieved the best results with respect to F1 and recall values.
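A hedged sketch of such a three-classifier ensemble is shown below, combined here by majority vote; the feature set, hyperparameters, and the exact combination rule used by Diniz et al. are assumptions for illustration.

```python
# Minimal sketch of a DT + Nearest Centroid + k-NN voting ensemble
# for classifying segmented nucleus candidates.
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=8)),
        ("nc", make_pipeline(StandardScaler(), NearestCentroid())),
        ("knn", make_pipeline(StandardScaler(),
                              KNeighborsClassifier(n_neighbors=5))),
    ],
    # NearestCentroid exposes no predict_proba, so hard voting is used.
    voting="hard",
)

# X: feature vectors of nucleus candidates; y: nucleus / non-nucleus labels.
# ensemble.fit(X_train, y_train)
# y_pred = ensemble.predict(X_test)
```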
With the same objective, Arya et al. [40] describe three strategies for the automated segmentation of cervical cell nuclei in the presence of debris: Automated Seed Region Growing, Extended Edge-Based Detection, and Modified Moving k-means Segmentation. These techniques extract the nucleus region from smear images using the morphological attributes of the nucleus. Some debris has an area that matches the nucleus of normal cells, which can cause interference and false-positive results. The research demonstrates that the Modified Moving k-means approach is more accurate at identifying dysplastic cells in the presence of debris [40]. A workload-reducing algorithm for analysing cell-nucleus features from Pap smear images has also been investigated, involving eight traditional machine learning methods in a hierarchical classification [41]. The classifiers included AdaBoost, Decision Tree (DT), Gaussian Naive-Bayes (GNB), and k-Nearest Neighbors (k-NN); morphological and other features were also included in the study. The study found that hierarchical classification provided better findings than classification without it.

In 2021, Pirovano et al. [42] explained how to apply their suggested method (a classifier with a regression constraint) to the novel task of categorizing tiles from cytology images in the context of cervical cancer. Using an attribution strategy, the authors demonstrated that the model learned to discover the cells responsible for the predicted label under weak supervision. The three suggested architectures (ResNet-101 classifier, ResNet-101 regressor, and ResNet-101 classifier + regressor) surpass a simple classifier and other state-of-the-art approaches for ordinal classification in terms of overall accuracy and severity prediction. Furthermore, the suggested method was successfully tuned to achieve a higher sensitivity as a tool that can help practitioners [42].

Recently, convolutional neural network-based detectors have been used to lessen the reliance on hand-crafted features and eliminate the need for segmentation. These strategies, however, tend to produce an excessive number of false-positive predictions. To resolve this issue, a global context-aware framework was created that uses an image-level classification branch and a weighted loss to incorporate global context information. A global context-aware network with Soft-Scale Anchor Matching (SSAM) is proposed to optimize the parameters. This method involves a backbone network, an Image-Level Classification Branch (ILCB), and a cervical cell detection branch. The ILCB's prediction is paired with cell detection to filter out false-positive predictions. The backbone network, DarkNet, provides shared features for image-level classification and cervical cell detection. The presence of abnormal cervical images is handled by the ILCB, which is attached directly to the top of the backbone network. The cervical cell detection branch consists of a three-level FPN, and the detection head attached to each feature level of the FPN is used to predict where cervical cells will be spotted and which class they belong to. Table 1 summarizes past studies related to nucleus detection.

Dataset

Herlev is a widely used dataset, and this image database has been used to design the detection technique. Most researchers have used the Herlev University image datasets to improve the design and development process. The Herlev Pap database was compiled by Herlev University Hospital (Denmark) and the Technical University of Denmark. The database contains 917 pictures manually sorted into groups by professional cytotechnicians and physicians. Superficial squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ are the seven cervical cell classifications in the database. In addition, various cell and nucleus properties are extracted [2]. In this study, 105 Pap smear images were used. The database falls under the category of NiSIS, or Nature-inspired Smart Information Systems (EU coordination action, contract 13569), with a particular focus on the group "Nature-Inspired Data Technology". The data is accessible over the internet (http://mde-lab.aegean.gr/index.php/downloads). Table 2 provides the details of the dataset used for the nucleus detection method. Seven types of cells fall under the categories of normal and abnormal cells. The normal cells consist of normal superficial, normal intermediate, and normal columnar types, while the abnormal cells consist of mild dysplastic, moderate dysplastic, severe dysplastic, and carcinoma in situ types. The total number of images in this dataset is 917.

Experimental procedure

The experiment is based on several approaches used by previous researchers in past studies. This study considers methods for nucleus detection in cervical cells based on Pap smear test images. The dataset used is the established Herlev dataset. The images are processed using the reviewed approaches for improving existing segmentation techniques, namely thresholding, tracing region boundaries, contrast enhancement, edge detection, and morphological and watershed approaches, using MATLAB R2021a. The processed image is then compared to the ground truth using image quality assessment for performance analysis. Finally, the calculated values are compared to determine the better-performing approach for nucleus detection.
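The segmentation experiments above were run in MATLAB R2021a. As a rough, hedged equivalent, the Python/scikit-image sketch below implements the first approach (thresholding followed by tracing region boundaries in the binary image); the file name, structuring-element size, and minimum-area value are illustrative assumptions.

```python
# Hedged Python equivalent of thresholding + region-boundary tracing.
from skimage import color, filters, io, measure, morphology

img = io.imread("herlev_cell.png")  # hypothetical input image
gray = color.rgb2gray(img)

# Global Otsu threshold; nuclei are darker than cytoplasm/background.
binary = gray < filters.threshold_otsu(gray)

# Morphological clean-up: drop small debris, close small gaps.
binary = morphology.remove_small_objects(binary, min_size=64)
binary = morphology.binary_closing(binary, morphology.disk(3))

# Trace the boundaries of the remaining connected components.
contours = measure.find_contours(binary.astype(float), level=0.5)
print(f"{len(contours)} candidate nucleus boundaries traced")
```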
Performance analysis

Performance analysis uses a series of heterogeneous computer-aided tools that assess a system's performance at several levels of abstraction, which makes the task more difficult. The analysis process can be enhanced through the extraction of a common object model. Five regularly used performance metrics from the literature were used in the reviewed studies: accuracy, precision, recall, geometric mean, and F1-score. These performance measures were calculated using mathematical equations [31,50-52]; details of the performance metrics are shown in Table 3. Furthermore, Jia et al. [53] showed that True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) are used to construct the indicators in the confusion matrix. Accuracy, precision, sensitivity, specificity, F-index, and Negative Predictive Value (NPV) are common measures in biomedical segmentation. Precision is often used in conjunction with sensitivity and refers to the ratio of correctly segmented foreground pixels. The ratio of pixels in the ground truth that match the segmented ones is referred to as sensitivity. The harmonic mean of precision and sensitivity is known as the F-index. The NPV is a metric for how comprehensive a set of results is. Other metrics, such as the Dice coefficient and the Jaccard index, provide a more comprehensive assessment of segmentation; the extracted contours are assessed fairly using Volumetric Similarity (VS), and Visual Accuracy (VA) is a visual evaluation of segmentation [53].

Qualitative results

Qualitative analysis is one of the most broadly utilized performance analysis methods. Probabilistic statements about an algorithm's performance and weaknesses are based on human visual perception [40,54]. In a study by Arya et al. [40], the first step in analyzing the results of the three segmentation techniques for predicting dysplastic cervical cells in the presence of debris is extracting the Region of Interest (ROI). Normal cell nuclei, abnormal cell nuclei, and debris are detected in the ROI, and the area of all the objects is computed. Some debris has a region that matches the nucleus of normal cells, which can interfere with the outcome and lead to false-positive results. In image processing, the histogram of an image usually refers to a histogram of the pixel intensity values.

Quantitative analysis

Quantitative analysis is a numerical way of obtaining information on an algorithm's performance without involving any human interaction. Smear photos have a lot of debris in the background, with the nucleus and cytoplasm in the foreground. The number of objects, precision, sensitivity, F-measure, specificity, accuracy, and PSNR are calculated in this complex context, based on the segmented images. These findings demonstrate the importance of validating image quality using the suggested techniques on the Pap smear dataset [40].
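For reference, the standard definitions of the confusion-matrix metrics used throughout this section are given below; these are the conventional formulas rather than expressions quoted from the reviewed papers (PSNR is computed from the mean squared error MSE and the maximum pixel value MAX_I).

```latex
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Sensitivity\ (Recall)} = \frac{TP}{TP + FN}

\mathrm{Specificity} = \frac{TN}{TN + FP}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Sensitivity}}
           {\mathrm{Precision} + \mathrm{Sensitivity}}, \qquad
\mathrm{NPV} = \frac{TN}{TN + FN}

G\text{-}\mathrm{mean} = \sqrt{\mathrm{Sensitivity} \times \mathrm{Specificity}}, \qquad
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{MAX_I^2}{\mathrm{MSE}}\right)
```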
Results

A reviewed set of segmentation techniques has been tested using MATLAB. Table 4 tabulates the processed images for different types of cervical cells using the four reviewed methods: (1) thresholding and tracing region boundaries in the binary image, (2) enhancing grayscale images using a contrast enhancement technique, (3) edge detection and morphological approach, and (4) watershed approach. In the segmented images, nuclei are displayed in the final image for normal cells, while a blank image is formed for aberrant cells. Overall, according to the qualitative analysis, thresholding and tracing region boundaries in binary images outperforms the other traditional algorithms in terms of segmentation performance, regardless of the number of objects employed [40].

Several methods have been introduced for cervical cancer detection in the area of nucleus detection. In this study, the image database was processed with the four reviewed methods, and the output was written as a new image. Table 4 shows that Method 1 gives favorable results compared with the other methods; the other methods are also able to detect nuclei but are limited to certain types of cells. Tables 5 to 8 present the calculated values of precision, sensitivity, F-measure, specificity, accuracy, and PSNR for each approach tested.

Based on the results shown in Table 5, the approach of thresholding and tracing region boundaries in the binary image showed a favorable ability to detect the nucleus of cervical cancer cells for all the different cell types. This approach detects all nuclei from the seven types of cells with high values of precision, sensitivity, F-measure, specificity, accuracy, and PSNR. The highest values were obtained for a severe dysplastic cell, with a precision of 1, sensitivity of 98.77%, F-measure of 99.37%, specificity of 98.41%, accuracy of 98.77%, and PSNR of 25.74 dB. The tabulated data thus show that this method performs good image pre-processing for the nucleus detection of cervical cancer cells.

The calculated values of precision, sensitivity, F-measure, specificity, accuracy, and PSNR for Method 2, enhancing grayscale images using a contrast enhancement technique, are recorded in Table 6. This approach has fluctuating values across all measures. The values of sensitivity, F-measure, and accuracy are in the high range of around 50% to 100%. The PSNR values are not the highest compared with Method 1, but they still exceed 3 dB and reach up to 14.80 dB. This method yields high accuracy values of 95.41% and 94.07% for the moderate dysplastic and severe dysplastic cell types, respectively. These accuracy values are quite high and in the range of other existing techniques reviewed.

Next, for Method 3, the edge detection and morphological approach, the calculated values of precision, sensitivity, F-measure, specificity, accuracy, and PSNR are recorded in Table 7. This approach also has fluctuating values across all measures, with a pattern quite similar to Method 2. The values of sensitivity, F-measure, and accuracy are in the high range of around 50% to 100%. The PSNR values are not the highest compared with Method 1, but they still exceed 3 dB and reach up to 14.39 dB. This method yields high accuracy values of 96.31%, 94.98%, and 90.52% for the severe dysplastic, moderate dysplastic, and carcinoma in situ cell types, respectively.
These accuracy values are quite high and in the range of other existing techniques reviewed. Method 3 is a better option than Method 2 but still less attractive than Method 1. Lastly, although Method 4, the watershed approach, produced the lowest values of all the calculated data, as shown in Table 8, it still offers an opportunity to be improved for better performance. Therefore, compared with the other algorithms, thresholding and tracing region boundaries in binary images outperforms them all, and the related qualitative analysis demonstrates good image segmentation ability [40]. Based on the calculated values, it is easier to choose the better option for image pre-processing when detecting the nucleus of cervical cancer cells, which shows that this method can provide good data for the further classification of cervical cancer cell types.

Conclusion

Several approaches and analysis methods developed to create an end-to-end framework for cervical cancer diagnosis and classification are studied and reviewed in this paper. All of the strategies proposed have been designed to work with multivariate datasets. However, there is little evidence that these algorithms will work in clinical situations, particularly in developing countries (where 85% of cervical cancer cases occur), since competent cytologists and funds to purchase commercial segmentation software are limited. In conclusion, this research may motivate other researchers in the field to recognize the potential of some of the methodologies investigated, as well as provide a solid platform for creating and implementing new approaches.
2022-10-12T16:02:17.020Z
2022-10-10T00:00:00.000
{ "year": 2022, "sha1": "67146dabd4547e8fd1050781495068848f9dcee4", "oa_license": "CCBY", "oa_url": "https://file.techscience.com/ueditor/files/or/TSP_OR-29-5/TSP_OR_25897/TSP_OR_25897.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f3071eb2f0ec71b61a0b1ed2253ce2998734c184", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
260969460
pes2o/s2orc
v3-fos-license
New York State, New York City, New Jersey, Puerto Rico, and the US Virgin Islands' Health Department Experiences Promoting Health Equity During the Initial COVID-19 Omicron Variant Period, 2021-2022

In this case study, we aim to understand how health departments in 5 US jurisdictions addressed health inequities and implemented strategies to reach populations disproportionately affected by COVID-19 during the initial Omicron variant period. We used qualitative methods to examine health department experiences during the initial Omicron surge, from November 2021 to April 2022, assessing successful interventions, barriers, and lessons learned from efforts to promote health equity. Our findings indicate that government leadership supported prioritizing health equity from the beginning of the pandemic, seeing it as a need and vital part of the response framework. All jurisdictions acknowledged the historical trauma and distrust of the government. Health departments found that collaborating and communicating with trusted community leaders helped mitigate public distrust. Having partnerships, resources, and infrastructure in place before the pandemic facilitated the establishment of equity-focused COVID-19 response activities. Finally, misinformation about COVID-19 was a challenge for all jurisdictions. Addressing the needs of diverse populations involves community-informed decisionmaking, diversity of thought, and delivery measures that are tailored to the community. It is imperative to expand efforts to reduce and eliminate health inequities to ensure that individuals and communities recover equitably from the effects of COVID-19.

Introduction

Promoting health equity requires continuous commitment to valuing all people and providing resources according to needs [2,3]. The COVID-19 pandemic has challenged the capacity of public health agencies to advance health equity [5][6][7]. Many health departments (HDs) across the United States have built upon decades of response best practices, and leveraged prior community engagement and partnerships, to attempt to reach historically disadvantaged populations experiencing the most severe COVID-19 outcomes [7][10]. In this case study, we describe how HDs in 5 US Department of Health and Human Services (HHS) Region 2 jurisdictions (New York State, New York City, New Jersey, Puerto Rico, and the US Virgin Islands) addressed health inequities and implemented strategies to reach populations disproportionately affected by COVID-19 during the initial Omicron variant period.

Background

Over the past 2 decades, public health agencies have developed strategies to reach populations disproportionately affected by COVID-19 based on lessons learned from past public health emergencies [4]. The December 2006 Pandemic and All-Hazards Preparedness Act [11] required HHS to ensure the integration of planning for at-risk populations into emergency response policy and programs [12]. In response, the US Centers for Disease Control and Prevention (CDC); federal, state, tribal, local, and territorial agencies; community-based organizations; faith-based organizations; and other partnership organizations engaged in efforts to promote health equity [13].
CDC developed the Public Health Workbook to Define, Locate, and Reach Special, Vulnerable, and At-Risk Populations in an Emergency [14] and worked with the Association of State and Territorial Health Officials and the National Association of County and City Health Officials to produce At-Risk Populations and Pandemic Influenza: Planning Guidance for State, Territorial, Tribal, and Local Health Departments [15]. These early efforts laid the groundwork for similar efforts during the COVID-19 pandemic [13][17][18][19][20][21].

When the COVID-19 pandemic began, testing, physical distancing, case investigation, contact tracing, isolation, and medical care were critical interventions implemented to reduce transmission [22]. As the pandemic progressed, COVID-19 vaccines came to be considered the most effective intervention for ending the pandemic [23]. Historically, underserved communities have experienced lower adoption of interventions [21]. This has been a challenge to HDs in their efforts to advance health equity. Many underlying inequities experienced by populations disproportionately affected by COVID-19 are longstanding, such as a lack of linguistically and culturally accessible resources [24]. Strategies to address these inequities involve the commitment of time and resources by HDs to develop community-driven approaches. Engaging people impacted by the circumstances of where they live, in their own environment, is critical to affecting health-associated behaviors and risks [25]. Community-driven approaches rooted in partnerships with trusted community members to promote capacity building, power sharing, colearning, and cocreation provide a broader social justice perspective than more general approaches [26].

Methods

HHS has 10 regional offices that collaborate with state, tribal, local, and territorial HDs to implement and support policies and programs [27]. HHS Region 2, the focus of this case study, includes 5 jurisdictions: New York State, New York City, New Jersey, Puerto Rico, and the US Virgin Islands. With a combined population of nearly 33 million people living across 1,500 miles, these states and territories differ in population size, economics, demographics, and environments. They include racial, ethnic, and other underserved communities with distinct public health needs and assets, providing unique collaboration opportunities to address health equity barriers [28].

We used a qualitative approach to better understand health equity strategies during the initial Omicron surge, from November 2021 to April 2022. The team first conducted a limited survey, followed by key informant interviews, to examine HD experiences in HHS Region 2 jurisdictions. The aim of the initial survey was to gather information to help develop the interview guide for the key informant interviews. In June 2022, we sent an email to state/territory epidemiologists to identify the person responsible for health equity to complete the survey. The email included a link to the survey. The anonymous online survey included 21 closed- and open-ended questions to assess successful interventions and barriers encountered (see Supplemental Materials, www.liebertpub.com/doi/suppl/10.1089/hs.2023.0001). The responses were received at the CDC via secure encrypted REDCap software [29]. Five people completed the survey. Based on the responses and the thematically analyzed answers to the open-ended survey questions, we created a guide for the key informant interviews.
Key informant interviews were conducted using the semistructured interview guide (see Supplemental Materials), which provided in-depth insight into HDs' experiences addressing health inequities during the COVID-19 pandemic. Twelve health equity subject matter representatives from the 5 jurisdictions were purposefully sampled as key informants. From July 13 to August 1, 2022, we conducted 5 interviews (45 to 60 minutes each), one with each jurisdiction, using Microsoft Teams, facilitated by 1 lead interviewer and 3 notetakers. We obtained verbal consent from participants for notetaking during interviews. Interview questions focused on populations at increased risk for COVID-19, interventions, successes, gaps, and barriers. Key informant responses were transcribed into Microsoft Excel. HDs did not receive compensation for the interviews or survey [32][33][34]. Following the key informant interviews, we analyzed the transcribed interviews to examine HD perspectives on impact, barriers, and gaps related to health equity interventions during the initial Omicron variant period [36][37][38]. Within common themes, we summarized associated topic areas and key lessons learned regarding health equity strategy implementation across jurisdictions.

Results

Experiences and insights gathered from the surveys and interviews with HHS Region 2 HDs are reported by theme. Common themes identified include (1) health equity preparedness, (2) trust building/connectedness, and (3) expanding resources/improving capacity, each of which is aligned with associated topic areas (Figure 1). The topic areas are highlighted by quotes from key informants.

Health Equity Preparedness

Acknowledgment of Historical Trauma and Mistrust of Government

Acknowledgment of historical experiences and harm was an important initial step in addressing health inequities. "Culturally, there is a history of treating citizens as guinea pigs. There is a different political relation between [our government], that cannot be erased or ignored." Another key informant shared that their HD made acknowledgment of harm done part of the health equity plan, including asking the local government to take accountability for health inequities: "First was acknowledging the hurt [...] we wanted to acknowledge harm done and we made this a part of the health equity plan. [Our] agency leaned on [the government leader] to make apologies, as well as the agency and partners to have accountability."

Another stated: "Medical investigations or studies were done [...] with different populations and that is something that you can see a lot in social media. For this reason, persons express concern over pharmaceutical experimentation."

Leadership Support and Inclusion of Health Equity as Part of the Response Framework

HDs found it essential for government leadership and policymakers to prioritize health equity as a vital part of the response framework from the beginning. This helped to lay the groundwork for health equity efforts and allowed better use of resources across agencies. One representative reported that the establishment of an emergency act by their jurisdiction's government leadership allowed HDs to work quickly to begin addressing health inequities. Another stated, "[Our] vaccine equity task force was developed at the beginning of COVID-19 [...
our health department leadership] prioritized health equity and included it in all policy and programming."

Leadership of health equity efforts varied by jurisdiction, sometimes involving 1 individual and other times a group of people. A representative described the appointment of an equity officer as part of the centralized institutional structure, which allowed the HD to direct resources. In another jurisdiction, ongoing work with government leadership led to the establishment of a health equity working group that included influential and trusted people in their respective communities. One person described a task force comprising "senior staff of color" who were delegated to address inequities in their jurisdiction. Throughout the pandemic, this task force coordinated activities and leveraged senior-level partnerships across local agencies.

Several representatives stated that health equity was a focal point for vaccination planning. One specifically stated that their leadership emphasized that "Equity is number 1. [And that] this vision allowed for a lot of people to get vaccinated, especially people of color." One interviewee described a minority health team established before the COVID-19 pandemic, which provided the groundwork for the development of a task force. Other efforts included the creation of a new health equity division overseeing COVID-19 vaccine administration and a supplemental grant program that funded mobile vaccinations; nursing; and educational interventions for people experiencing homelessness, with a mental illness, and with limited mobility.

Trust Building/Connectedness

Community Leaders and Engagement

HDs found that working with trusted community leaders to deliver public health messages mitigated distrust. To conduct community outreach, HDs engaged with partners and leaders from faith-based organizations, community-based organizations, entertainment industries, and private businesses. Key informants indicated higher uptake of community interventions when collaborating with trusted community members. One key informant stated: "Because of our community's small size, our relationships were longstanding. We had formed preexisting relationships prior to COVID-19. We engaged with federally qualified health centers, physicians, and faith-based organizations. Most of our employees were a part of some faith-based organizations so we leaned on those relationships."

Previously established partnerships enabled jurisdictions to expeditiously identify and engage populations that were disproportionately affected by COVID-19. One HD worked with partners to streamline the identification of people who had disabilities or were homebound. Another representative stated that knowledge of their jurisdiction's community and housing occupancy allowed interventions to focus on individuals' routine interactions with healthcare services. Others said: "The [health department] was provided with a list of names and contact information from municipalities for people who were homebound, as well as leaders and people who have experience serving this [population] group." "We reached out to housing authorities and found those that may be hard to reach. We made flyers beforehand to have hard copy handouts to share with people about what we were doing."
"[We] relied on partners for [persons experiencing homelessness], when COVID hit. [We] had to build an internal state team, then connect with partners that run shelters. They would tell us what was needed. We worked with social services to ensure people [experiencing homelessness] were connected to COVID hotels [...] We leaned heavily on partners to guide us, and as conduits for resources: staffing, vaccines, food [...] partners shared needs and implementation."

Culturally responsive outreach, in which HDs activated linguistically and culturally representative outreach teams, was indicated among the most successful interventions. One representative described "trainings for community-based leaders. It includes a collective of Black/Brown providers and physicians because the message is important, but so was the messenger." Another action was hosting in-person community conversations where culturally appropriate messaging and vaccine testimonials were shared with attendees.

Reaching migrant populations was especially challenging. Due to a change in migration patterns during COVID-19, persons transitioning between countries or states could be lost to follow-up. Furthermore, people with undocumented status were fearful of participating in vaccination or testing services. "The undocumented community [was] afraid to get vaccinated," a representative explained, and "[that their] information would be shared with immigration." Outreach methods were tailored to reach migrants, requiring a lot of groundwork as well as the inclusion of immigrants in outreach planning. One interviewee stated: "We were able to educate and communicate to undocumented populations that we would not collect sensitive information and the government would not be after them following vaccination."

Building Trust Through Collaboration

Community- and faith-based populations in HHS Region 2 had different responses to COVID-19. Some populations were supportive of public health measures from the beginning, some were resistant, and others were open to collaboration if implemented in a setting of trust and mutual respect. Representatives spoke of low vaccination rates in certain communities, with public perceptions of COVID-19 vaccines and testing services sometimes influenced by people's religious and cultural beliefs. One person described a "disbelief in prevention measures and non-support of [prevention measures by] religious leaders in these communities," and another spoke of segments of the population who were "resistant and hesitant due to religious beliefs." HHS Region 2 public health officials recognized the value that faith-based partners can bring to a response. "We needed to work intimately with communities of color," said one representative. "The faith based are the trusted source, if [they] say 'show up,' the community shows up; [this is] effective with those that don't trust the news." Another remarked, "We held town halls with [religious leaders]. We knew if we didn't get leaders involved, vaccination would be an uphill battle." Collaborations with faith-based leaders were described as instrumental in grassroots outreach, culturally appropriate messaging, and the facilitation of town hall meetings. In places where there may not have been previous collaboration between public health and faith-based organizations, new connections were established. Meetings with providers and timely communications were "often led with faith-based communities [...] The faith-based leaders speak to community members, and their instruction is followed by people with
noted hesitancy. We have built strong relationships with faith-based leaders." Representatives commented about "how powerful these spiritual leaders are," a "need to maintain relationship with [religious] leaders: dialogue and relationship to prepare to be engaged," and an important lesson that "faith-based communities have to remain at the forefront." Some stated: "It took a long time to get some of these church figures on board. There was a disbelief in the prevention measures. It was very challenging. [...] we have to get them to leave out politics and prioritize health of community. Sometimes [we have to invest in dialogue] for people to start listening, and once they do, [we] have to act."

"It goes back to who is trusted in communities. The Black church has a long history in the community, it has space and the ability to invite people into spaces. We had resources, so it was a natural match, building on a wheel that people had already; once we had faith-based leaders on board. Churches early on were not considered frontline establishments, but relationships were developed with the clergy members to get communications out. Some religious institutions expressed that they did not agree with vaccinations, but we have been able to develop some alliances and partnerships with churches and religious institutions; it's made our entrance into that space much easier."

Health Communications and Challenges With Misinformation

Misinformation about COVID-19 was a challenge. "We were competing with a lot of misinformation on social media that was big on anti-vaccination," said a representative. "Anti-vaccination advocates were always ahead of us, and information was always changing." Older adults were initially the most responsive to public health recommendations, but later in the pandemic, some began to question the need for vaccination. They equated COVID-19 to a "flu," which they perceived to be less serious. This was most difficult during the initial Omicron variant period. Lastly, the public began questioning why they should get tested for COVID-19 when they assumed they already knew their diagnosis.

HDs used several strategies to counter the misinformation, including increased public health messaging, funding, and resources to community partners. "We made social media [using] TikTok and other platforms," said a representative. "We incorporated 120-character text messaging to residences to try and beat the online misinformation." Messaging and Facebook live chats were delivered in multiple languages. HDs also took steps to share statistics in plain language and communicated via billboards, television, and radio to address hesitancies based on vaccine myths. One person described the development of a program in which people would be able to speak with a licensed [health] professional at an event about any questions or concerns. "We had culturally competent persons regularly interact with these persons and educate them on COVID-19 and vaccines." Other key informants stated: "[Opinions of populations disproportionately affected by COVID-19] is important, particularly related to vaccine rollout. We established an information monitoring group, worked with organizations to monitor social media communications, particularly in English and Spanish on vaccine hesitation, opposition [... We] worked with organizations to counter misinformation; [we provided organizations] with language to counter."

"The quick access to misinformation impacted pregnant individuals. Maternal mortality is discussed often [on social media].
There is a certain desperation people have [when they perceive] that there is not enough information on whether receiving vaccinations is safe."

"To better serve the deaf community [...] mobile vaccinations were organized in partnership with schools that have teachers who could communicate in sign language."

HDs developed COVID-19 websites, hotlines, guidance, and appointment scheduling assistance in English or Spanish. HDs prioritized improving accessibility to COVID-19 information for people without reliable technology, with limited English proficiency, and with low literacy. Some jurisdictions recognized a need to develop materials in different languages, in accessible formats, and at appropriate reading levels.

Expanding Resources/Improving Capacity

Building on Existing Foundations

Having partnerships, resources, and infrastructure in place before the pandemic facilitated improving equity in COVID-19 response activities. One representative observed, "Before COVID, a goal was already determined to minimize inequity," while another commented, "Building a relationship [with communities] started prior to COVID-19." Another said: "So much work was done prior to 2020 [with the health department examining] how to become an antiracist organization. [The health department had an] equity officer in the system prior to COVID. Operationally, this allowed us to direct resources." "We used a similar framework that was incorporated during the HIV response [...] the plan was to mirror the HIV response."

Jurisdictions also built upon experiences promoting health equity during other public health responses. One representative described lessons learned from a recent 2019 measles outbreak that focused on outreach strategies and grassroots community engagement with attention to "LGBTQIA+, adults 65+, Black people, Brown people, and certain religious communities." Longstanding community relationships provided HDs insight that informed resource needs, community partnerships, and the development of outreach strategies that would work best for specific populations. "[We did] community focused reaching out, [using] sound system [loudspeakers] on cars, church in towns," and in other places, "going to the country areas and reaching out through the churches was most effective."

Logistics

While the underlying causes of inequities could be complex and multifactorial, HDs reported that some challenges were logistical in nature. HD efforts to increase access to healthcare and support services were considered vital to promoting health equity among certain populations. "[We were] able to have teams employed, boots on the ground, out in field talking to individuals [...] all [team] members were linguistically and culturally competent representatives, able to use soft skills, language, and [knowledge of local] culture as gateways to community to build trust." One representative indicated that individuals who are homebound and people with disabilities experienced transportation barriers due in part to a dependency on others to access healthcare services. In 1 jurisdiction, the HD established clinics within 5 miles of disproportionately impacted zip codes and expanded community outreach. One interviewee reported that "rural [residents] do not have access [...]
we had to address this community need and how to get [people to these services]." Mobile vaccination units were also seen as an important strategy to improve accessibility. As mass vaccination sites were reduced, 1 HD used pop-up sites with local, culturally representative doctors at smaller venues such as town halls, shopping centers, and housing complexes. An interviewee observed that at-home vaccinations were the HD's most successful intervention among adults aged 65 years and older, people with disabilities, and homebound individuals.

Using Data to Guide Efforts and Track Impact

The use of data to promote health equity presented both challenges and opportunities. "[The health department] had to establish a centralized hub to look at and slice data by zip code and demographics," a representative noted. "Each zip code would create data packets, which allowed us to mobilize and operationalize what we needed to do." Another representative described the use of social vulnerability indicators to identify neighborhoods with the greatest inequity. One jurisdictional representative explained how they used zip code and geotracking data to identify areas of need based on lower numbers of people with health insurance. In another jurisdiction, the local health commissioner made data available to a steering committee with 7 workgroups assigned to populations at increased risk of COVID-19. Others stated: "[We] started to think of methodology of how to identify risk factors that increased transmissions (occupation, overcrowding, multigenerational homes), stratified list of neighborhoods with greatest inequities." "We developed SaTScan analyses [using a software program that analyzes geospatial surveillance data] to identify clusters to direct testing and outreach with a hyperlocal approach. [We were able to] identify gathering spaces in neighborhoods [churches, mosques, parks, and other organizations], collaborating with different community members who have familiarity and relationships within communities." "I was a part of the team working with high-risk populations. We looked at policy as we looked at data. It was crosscutting and cross-sectional. We presented 3 times a week on processes and procedures for implementation. We built a vaccine plan with health equity embedded. This impacted a lot of people of color. It was one of the fastest fast-tracked programs."
Key informants gauged the success of interventions by the number of people reporting awareness of testing and vaccination options, demonstrated use of testing and vaccination services, and increased numbers of community champions utilizing culturally responsive resources. HDs tracked the impact of interventions by multiple means, including (1) weekly at-home vaccination data shared by providers, (2) daily vaccine perception and interaction data, (3) weekly program and clinical intervention effectiveness evaluation data, (4) data collection and analysis of disparities among people disproportionately affected by COVID-19, led by a community partners taskforce to ensure that investment strategies in communities were current, and (5) immunization and epidemiological databases. Key informants noted that gaps in data made it difficult to track inequities. One interviewee indicated challenges in collecting data to track vaccination status in the community, as well as in data sharing among collaborators. "I believe [a major challenge is] not knowing how many we have reached. There are gaps in data." Another interviewee said, "We have incomplete data on people who are homebound, in long-term care facilities, and people experiencing homelessness. Our greatest challenge is knowing how to determine this gap."

Lessons Learned and Remaining Challenges

Much was accomplished in HHS Region 2 to promote health equity. As a key informant stated: "The next, testing, phase was more equitable, there was more collaboration, and it was less reactive. [During] the vaccine [phase] we got to really see the equity we built [...] but also got to see disparity. [We] had better data, more collaborators who were honest. The journey of equity from little to none to what it is now was remarkable."

However, there were remaining gaps and challenges going forward. "Collectively as a nation, we need to take the lessons learned," a representative noted. "There is a perception of things being done unsatisfactorily." Protective factors identified included residing in communities where government leadership prioritized and invested in strategies to address health inequities before the pandemic, and in communities with strong social support networks that worked in partnership with local HDs. A couple of representatives stated: "The government as an institution needs to be doing more community work, not just sitting at a desk. It is tiring on us to do administrative tasks rather than go and serve in the community. We need staff dedicated to community work. Our most successful strategy was with mobile vaccinations. We as the Department of Health should strengthen, expand, and maintain mobile services." "We have been able to develop partnerships with churches and religious leaders, [to] make entrance into space easier: leadership in church welcomes us, opens doors to institutions, [which] do play a role in vaccination rates and interest of people getting vaccinated."
One important lesson was to initiate health equity activities before a response rather than developing them reactively. "At the beginning, it was incredibly hard to build the equity model during the 'emergency mode' where the response was more reactive." It is vital to integrate the lessons learned during COVID-19 "into routine operations, support recovery, and prepare [...] for future emergencies." This requires a focus on "social determinants of health concerns, such as housing, food insecurity, access to healthcare." Listening to communities and empowering them to develop solutions to address inequities in the social determinants of health should be prioritized. "Addressing social determinants of health," a representative observed, "will lead to positive, sustainable change for individuals and communities at large." One person suggested that jurisdictions may want to look for additional opportunities to engage with communities. Similarly, it is important to maintain the relationships and trust that have been established (Box).

Discussion

The COVID-19 response highlighted current and historical health inequities in the United States. By the second year of active response, public health officials, healthcare providers, first responders, and the nation were exhausted and suffering from COVID-19 fatigue, which was further complicated by the emergence of Omicron as a variant of concern. The Omicron variant accounted for the majority of cases in these jurisdictions in the months that followed and added challenges to ending the COVID-19 pandemic at a time when restrictions were being lifted in the United States [39].

Our case study findings indicate that community-driven approaches based on trust and existing partner relationships helped to effectively address many of the cultural barriers to the uptake of COVID-19 interventions by populations that have been disproportionately affected by COVID-19. Public health professionals in HHS Region 2 leveraged strong community partnerships to protect communities from the public health threat. Community outreach engaged collaborators from various sectors, such as the government, faith-based, and healthcare sectors, to effectively address health inequities. Local community partners were positioned to provide essential services and support to people with functional and access needs [40,41]. HD engagement of people representing impacted communities creates the capacity for flexible and diverse strategies to meet the needs of populations disproportionately affected by COVID-19. Having community-informed insights is imperative to the acceptance of public health interventions. It is important for HDs to address misinformation early in a response through clear messaging from trusted people. Improving the acceptability of interventions involves communications in plain language, messaging that emphasizes science over misinformation, engagement and publicizing by diverse trusted local leaders, and increasing accessibility to services [42,43].

We acknowledge some limitations. State and local HDs in other regions also implemented interventions to improve health outcomes among people disproportionately affected by COVID-19 in communities historically underserved by government programs and healthcare systems [26]. However, the experiences described here may not be representative of racial and ethnic populations or communities in different US regions. Perspectives from HD representatives may be susceptible to social desirability bias. In addition, the key informants had to rely on their memory over several months to answer questions. We did not capture the nuanced perspectives of community partners, leaders, and practitioners who collaborated with the jurisdictional HDs.
There were noted gaps in public health data management systems (prior to an emergency). Areas for improvement include prioritizing and integrating health equity data to better inform interventions for populations disproportionately affected by COVID-19. We encourage continued efforts to identify and reach populations disproportionately affected by COVID-19 and improved strategies to better serve community needs. Furthermore, it is essential to take steps to understand protective factors, as well as the intersectionality and the social and structural context of populations disproportionately affected by COVID-19, which are underlying factors driving health inequities and can compound disadvantages. Addressing the needs of diverse populations involves informed decisionmaking, diversity of thought, and delivery that is tailored to the community.

Conclusion

Our findings may inform decisionmaking by government officials, public health professionals, community leaders, and healthcare systems in promoting health equity in current public health initiatives and during future public health responses. It is imperative to have an equity-centered approach to reduce and eliminate inequities in disease outcomes as individuals and communities continue to be impacted by the long-term effects of the COVID-19 pandemic.

Figure 1. Common themes and associated topic areas.
2023-08-19T06:16:38.304Z
2023-08-17T00:00:00.000
{ "year": 2023, "sha1": "79f73dd1c19dc6598a46f6c7a80e1b93ff48378d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1089/hs.2023.0001", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8d035d0ac00445adaec03ea12c6c09cdb02461a5", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
56119746
pes2o/s2orc
v3-fos-license
Representation and function of characters from Greek antiquity in Benjamin Britten's Death in Venice [1]

Lack of insight into Greek antiquity, more specifically the nature of classical tragedy and mythology, could be one reason for the negative reception of Benjamin Britten's last opera Death in Venice. In the first place, this article considers Britten's opera based on Thomas Mann's novella as a manifestation of classical tragedy. Secondly, it is shown how mythological characters in Mann's novella represent abstract ideas [2] in Britten's opera, thereby enhancing the dramatic impact of the opera considerably. On the one hand it is shown how the artist's inner conflict manifests itself in a dialectic relationship between discipline and inspiration in Plato's Phaedrus dialogue that forms the basis of Aschenbach's monologue at the end of the opera. The conflict between Aschenbach's rational consciousness and his irrational subconscious, on the other hand, is depicted by means of the mythological figures Apollo and Dionysus. Two focal points in the opera, namely the Games of Apollo at the end of Act 1 and the nightmare scene which forms the climax of the opera in Act 2, are used to illustrate the musical manifestation of this conflict.

[1] This essay is a revised version of a paper read at the Intercongressional Symposium of the International Musicological Society held in Budapest during August 2000.
[2] In classical Greek the word "idea" has several meanings. For example, one important facet of the word involves a concrete visual connotation of "seeing". In this sense Plato referred in Protagoras to a person as being "very beautiful in idea", meaning a person is good-looking (Urmson, 1967:118). But an idea can also have an abstract meaning. In this case "abstract" is specifically used to distinguish symbolic abstractions from traditional musical representations of phenomena such as rain, wind, the sea, birdsong, sounds of machinery, electronic effects, etc. Kant, for example, distinguished between a practical idea which represents objective reality and an aesthetic idea which represents phenomena of the beautiful and the sublime (Neumann, 1976:118).
Introduction

Twenty-five years after the death of Benjamin Britten in December 1976, opinions about his stature in the world of music are still divided. On the one hand he is described as the most important British composer, but on the other hand the English themselves are not really very enthusiastic about his music. Indeed, the reaction of the British has been described as "icy" (Odendaal, 2001:4) or at best "cautious" (Ashman, 2001:29). What is certain, however, is that interest in his music has been growing rapidly: over the last two years 60 different productions of his operas took place in 18 countries. Of these productions 500-plus performances were staged in non-English-speaking countries (Kettle, 2001:3).

Benjamin Britten chose Thomas Mann's novella Death in Venice (1912), one of the most widely admired short novels of the twentieth century (Schmidgall, 1997:295), as text for what turned out to be his last opera. The fact that this novella deals with the doomed fascination of the internationally acclaimed author Gustav von Aschenbach for the beautiful Polish boy, Tadzio, and that the composer enhanced the opposition of mythological figures to achieve this end, surely could not have furthered his cause among the English public. When this opera was premiered in 1973 it elicited considerable negative comment. For example, the composer Ned Rorem (1987:186) holds that on paper the opera looked "sterile, padded, colorless, simplistic and, yes, lazy with endless recitative on neumelike signs speckling an over-extended text". Even the composer's friend, the English writer Ronald Duncan, was of the opinion that it lacked conflict and that vocal monotony was the result of the overuse of recitative. What is more, he "could not bear the public revelation of private agony. Art always surpasses life, even in immodesty" (Duncan, 1981:153).

What is not understood here is that the downfall of the tragic hero, who is normally associated with an extraordinary destiny, results in his isolation from society, and that this downfall constitutes a cornerstone of tragedy as genre. The central idea of tragic irony is that "whatever exceptional happens to the hero should be causally out of line with his character" (Frye, 1957:37, 41). Another reason for the negative view of Britten's opera could be found in a too close reading of the musical and literary text. Moving back a little from the musical work of art, the design, or the way in which content is shaped, comes into clearer view. Viewed from even further back, the organizing design becomes even more distinct. Continuing this "moving back" strategy eventually allows a comparison between works of art in a general sense. According to Northrop Frye (1957:140), "… we often have to 'stand back' from the poem to see its archetypal organization". Lack of an historical awareness, as well as insight into the meanings of forgotten symbolic references, many of a mythological nature, could also be reasons for the criticism and the misunderstanding of the opera. The fact that the mythical mode (stories about gods) is the most abstract and conventionalized of all literary modes (Frye, 1957:134) further complicates matters.
In Mann's novella abstract oppositions such as beauty, purity, order, simplicity and discipline on the one hand, and confusion and derangement on the other hand, are developed. The conflict is, therefore, situated in the mental domain. This kind of subject-matter does not lend itself easily to visual representation, an important characteristic of traditional opera. However, in Britten's opera abstract ideas are developed symbolically by specifically opposing two mythological figures, Apollo and Dionysus. In his novella Mann does not mention the two deities by name but they are implied by references to the sun (Mann, 1912:46, 47, 55) and "the stranger god" (Mann, 1912:75). The fact that Britten relied on mythological figures to represent abstract ideas could be regarded as a contributing factor towards the perception that the opera is "remarkably true to Mann" (Evans, 1979:523).

Whereas the conflict in Aschenbach's mind is represented by mythological figures in the opera, the unfolding of the operatic plot follows the pattern of Greek tragedy. "(T)ragedy continues to be a channel for embracing the whole of life in all its contradictions and ambiguities", was Nietzsche's later philosophy (Oudemans & Lardinois, 1987:228). Northrop Frye (1957:206), well-known author of Anatomy of Criticism, also believed that the authentic basis of human nature comes into literature largely through the tragedies of Greek culture. Knowledge of classical Greek tragedy and its nature could therefore assist in understanding the meaning of Benjamin Britten's opera.

After showing how the structure of the opera could be understood in terms of Greek tragedy, I shall argue that the librettist Myfanwy Piper and the composer Benjamin Britten enhanced the contrasting roles of Apollo and Dionysus in the opera as a means to concretize abstract ideas through music. I shall demonstrate how veiled allusions to mythological figures in the novella are developed through musical means to structure, on the one hand, the inner landscape of the characters and, on the other hand, the outer landscape, that is, the plot of the opera. Within the limited scope of an essay it is impossible to do justice to the variety of ways in which Britten realized Mann's text in a musical manner. Therefore, in the second part of the essay I shall concentrate on the organization of pitch in three crucial moments of the opera, namely Aschenbach's Phaedrus monologue, the Games of Apollo and the dream.

Greek tragedy

The fall of the hero is a typical characteristic of tragedy (Frye, 1957:221). The basic structure of tragedy therefore represents a downward movement, "the wheel of fortune falling from innocence toward hamartia, and from hamartia to catastrophe" (Frye, 1957:162). Hamartia denotes a flaw or weakness that has an essential connection with sin or wrongdoing. In Aschenbach this flaw could be the result of the conflict between his disciplined, rational background and the suppressed intuitive side of his nature, an inheritance from his artistic mother. Catastrophe is the result of hybris, "a proud, passionate, obsessed or soaring mind which brings about a morally intelligible downfall" (Frye, 1957:210).
The binary opposition in Aschenbach's mind, which fluctuates between a rational consciousness and an irrational subconscious, is strengthened by the music, more specifically by the musical enhancement of mythological figures. On the whole, however, the overarching pattern is one in which he moves steadily away from the Apollonian ideal towards the Dionysian, which in the end results in the corruption of his mind and soul. An oscillation between idealism and decadence eventually settles on a downward course when Aschenbach, after having eventually decided to leave Venice because of the plague, has to stay on after his luggage has been sent to a wrong destination.

The magnitude of the celebrated author's downfall is emphasized by contrasting his greatness with ordinary people at the beginning of the opera. In scenes 1 to 6 Aschenbach is juxtaposed with the traveller, the group of young people on the boat, the elderly fop, rouged and wrinkled, the old gondolier who turned out to be operating without a licence, the hotel manager, hotel guests and the strawberry seller. Scene 7 (the Games of Apollo) represents the loss of innocence when Aschenbach realizes that he loves Tadzio.

In scene 9 Aschenbach addresses himself: "What is this path you have taken? What would your forebears say - decent, stern men, in whose respectable name and under whose influence you, the artist, made the life of art into a service, a hero's life of struggle and abstinence?" (Britten, 1973:194). A world of shock and horror, portrayed by the conflict between Apollo and Dionysus during the nightmare (the dream in scene 13), results in great agony and humiliation. On awakening after the dream Aschenbach acknowledges: "it is true, it is all true, I can fall no further" (Britten, 1973:237). However, the final humiliation is reached when Aschenbach (scene 15) ironically resorts to the same rejuvenating strategies which he despised in the elderly fop at the beginning of the opera, namely requesting the barber to colour his hair and make up his face.

Mythology

According to Douglas Davies (1994:1) myth reflects "a characteristic search for meaning which is typically human". Heroic and noble ideas found in myths often communicate an underlying message of human suffering which is always relevant, regardless of time and place. In classical Greek tragedy, for example, mythical characters are not only known for their deeds but for their signification of universal ideas which still inform our view of human behaviour and the social environment.

Myths represent intuitive wisdom expressed through images. Knowledge of mythology is of notable assistance in the search for meaning because in myth the narrative can be perceived at various levels of significance (White, 1971:45) and in different codes. Myth can therefore be regarded as a mode by which a society communicates.
Myths in general have a surplus of meaning in that they embody more significance than their overt content suggests (Oudemans & Lardinois, 1987:10-12). As myth lends itself to a variety of interpretations, it can be observed in different guises. Consequently every generation of artists interprets the symbolic content of a myth according to its frame of reference. According to Lévi-Strauss, "all human behaviour is based on certain unchanging patterns, whose structure is the same in all ages and in all societies" (quoted in Morford & Lenardon, 1995:10). As myths are open to changed interpretation over the course of time (Davies, 1994:1), the use of mythological figures allows the artist to describe the modern world by means of a readily available set of models.

Because myths are derived ultimately from the structure of the mind, myth could represent patterns of behaviour and therefore establish archetypes (Morford & Lenardon, 1995:9). As the concept of the archetype could be regarded as a generic abstraction which signifies something timeless (White, 1971:44), it could assist in the communication of meaning. Frye (1957:99) refers to the idea of the archetype as a "communicable unit" but uses it in its widest sense, namely as signifying "a typical or recurring image". By an archetype Frye means

… a symbol which connects one poem with another and thereby helps to unify and integrate our literary experience. And as the archetype is a communicable symbol, archetypal criticism is primarily concerned with literature as a social fact and as a mode of communication.

The capacity "to express in story form the primary emotional and imaginative workings of the human mind" (Kirkwood, 1958:22) is the distinctive quality of myth that gives it its peculiar value for literature. According to the psycho-analytical school of Jung, myth represents the psychological processes of the human subconscious; more specifically, he interprets myth as the projection of the collective unconscious of the race, that is, a revelation of the continuing psychic tendencies of society (Morford & Lenardon, 1995:9).

The typical in humanity and the characteristic of the idea interested Thomas Mann much more than the person as specific individual in his/her uniqueness (Marcus-Tar, 1982:83). As myth describes typical actions, being more philosophical than history (Frye, 1957:83), it stands to reason that Mann would be attracted to mythology. He is well-known for veiled allusions to mythological motifs in his novels (White, 1971:49-50). More than two decades after the publication of Death in Venice in 1912, he wrote as follows to the Hungarian philologist and later compiler of his letters, Karl Kerényi (Kerényi, 1975:37):

[I]n my case, the gradually expanding interest in myth and religious history is a 'sign of old age'. It corresponds to a taste that has, in the course of years, moved away from the bourgeois-individualistic toward the typical, the general, the universally human.

Referring to Kerényi's plea (1975:38) for a return of the European spirit to the "highest, the mythic realities", Mann replied in a letter that it is "in truth a great and positive cultural movement, and I may claim that my own work has to some extent played a part in it". He connected his own mythological bent with the maternal sphere of nature (Kerényi, 1975:17).
Myth as representation of the subconscious in Britten's opera

Universal ideas represented by mythological figures situate the discussion within the context of metaphysics. Expressing the metaphysical in musical terms is natural for Britten, an aspect which, according to Mitchell (1984:249), has until now received perhaps less attention than it should.

According to Morford and Lenardon (1995:10) the mind has a binary character insofar as it constantly deals with pairs of contradictions or opposites. Therefore, myth as a reflection of the mind mediates between opposing extremes such as nature and culture. Lévi-Strauss believes that "[m]ythical thought always progresses from the awareness of oppositions towards their resolution" (Morford & Lenardon, 1995:11).

Although Apollo and Dionysus are not mentioned by name in the novella, Mann explicitly stated that Death in Venice is based upon the distinction between an epic Apollonian and a lyric Dionysian spirit (Mann, 1962:317). In the novella the creative artist, Aschenbach, is caught between Apollonian order and Dionysian licence. As the opposition of these two Greek deities plays an important role in the development of the plot on a psychological level, understanding of their respective natures is crucial to the understanding of the opera. (See Spies, 2001:39-57 for a discussion of the way in which the music expresses ambiguities caused by the opposition of the two mythological figures.)

According to Nietzsche the names of Apollo and Dionysus are borrowed from the Greeks, who taught their view of art not through concepts but through the clear figures of their world of gods (White, 1971:48). Apollo and Dionysus were regarded as personifications of qualities found both in art and life. Each of these two deities represents a certain mental attitude. The Apollonian ideal is associated with balance, the avoidance of extremes (Calarco, 1968:7), clarity and purity, order and harmony (Kerényi, 1976:209). Although Apollo was, in all probability, not originally a sun-god, he came to be considered as such (Morford & Lenardon, 1995:46). On the other hand, Dionysus was regarded as the "representative of the productive and intoxicating power of nature" (Smith, 1952:110), the god of ecstasy and the most enraptured love. He was also known as the raving god whose presence makes man mad and incites him to savagery (Otto, 1933:49, 70). For Walter F. Otto, the German theologian and philosopher, Dionysus is an enigmatic god, born of a deity (Zeus) and a human mother (Semele) and therefore already by birth a native of two realms: "Any study of him will inevitably lead to a statement of paradox and a realization that there will always be something beyond, which can never be explained adequately in any language other than the symbolic" (Palmer, 1965:xix-xx).
For Nietzsche there is a further opposition to the one represented by Apollo and Dionysus, an opposition between power and order which is situated within the internally conflicting nature of Dionysus himself (Oudemans & Lardinois, 1987:227). Contrary to a mainly one-dimensional view of Dionysus as representing irrational emotion only, his ambivalent nature also contains a positive side. According to Oudemans and Lardinois (1987:96, 142, 277), Dionysus, the "many-named", was a paradox symbolizing life and death, peace and war, and truth and falsehood, as these words appear next to his name on two fifth-century Orphic tablets. "Dionysus represents power which has to be both abhorred and worshipped". On a deeper level, then, this opposition within Dionysus himself establishes another ambiguity, which is situated in the creative tension that opposites generate.

Even with regard to visual appearance, Dionysus had two images. In ancient writings on Dionysus, Diodorus Siculus wrote: "He seems to be dual in form because there are two Dionysoi: the bearded Dionyssos of the old times, since the ancients wore beards, and the younger, beautiful and exuberant Dionyssos, a youth" (Kerényi, 1976:363).

Unlike Apollo, Dionysus was not native to Greece but a god whose cult was imported from the East. The "stranger god" brought something new and overpowering into Greek life (Kerényi, 1976:139). It swept through Greece like a plague, in the same way as, in the opera, the cholera has reached Venice from the East. In the opera this ambiguous Dionysus does not only represent the sensuous and the irrational but also the plague, which again represents the darker side of ambiguous Venice.

A perspicacious observation by the Hungarian scholar Kerényi assists in making a connection that is not noticeable on the surface of the music, i.e. a connection between Aschenbach and Tadzio on the one hand, and between the opera and Greek antiquity on the other hand. Kerényi (1976:134) points out that, as in a breakthrough, consciousness and the unconscious may well merge in a mental state called mania by the Greeks, that is, a state in which man's vital powers are enhanced to the utmost. In the art of prophecy, madness is represented as secret knowledge (Otto, 1933:131). In the opera the connection with the Greek view of mania is represented by Aschenbach's Phaedrus monologue, which takes place during his last visit to Venice (second-last scene). In this monologue the confusion in his mind reaches a climax. In order to understand the nature of the conflict in Aschenbach's mind, knowledge of Plato's Phaedrus dialogue is therefore essential.

The ambivalent nature of this mythological figure is summarized by the Greek scholar E.R. Dodds: Eros represents a combination of the physiological impulse of sex and "the dynamic impulse which drives the soul forward in its quest for a satisfaction transcending earthly experience" (Hindley, 1990:520). Thomas Mann sees the role of Eros as follows:

For the artist Eros is the guide to the intellectual, to spiritual beauty; for him the way to the highest goes through the senses. But it is a dangerously beautiful road, a sinful road, although there is no other (Marcus-Tar, 1982:36).
It is precisely the dualistic character of Eros that eventually leads to the disintegration of Aschenbach's mind and soul when he asks in the opera: "Does beauty lead to wisdom, Phaedrus? Yes, but through the senses … And senses lead to passion, Phaedrus, and passion to the abyss" (Britten, 1973:250-252). Eros is perceived as a symbol of passion and of mania. According to Socrates, mania is not an evil in every case because it "can possibly be a means, an aid, a path to a good, in fact even to the greatest blessings - on condition, that is, that mania is imparted to man as a divine gift" (Pieper, 1964:49). [Note 4: Mania is subdivided into prophetic, cathartic, poetic and erotic mania - all of them may be beneficial. The demonstration that the fourth madness is "given by the gods for our greatest happiness" involves the discussion of the nature of the soul, divine and human (De Vries, 1969:26).]

The guiding principle in determining the meaning of Eros in this opera might lie within the opposition of ideal and reality; in other words, the distinction is situated within the dialectic of appearance and essence: passion could be perceived as the ideal to strive for. When Aschenbach acknowledges the power of Tadzio's beauty earlier in the opera, he reflects on the relationship between the rational and the passionate:

When thought becomes feeling, feeling thought … When the mind bows low before beauty … When nature perceives the ecstatic moment … When genius leaves contemplation for one moment of reality … The Eros is in the word.

Near the end of the first act the music that accompanies Aschenbach's view represents Eros as an unambiguous concept (see Spies, 2001:50). At this stage Aschenbach's music is a clear reference to Tadzio's beauty which, he hopes, will inspire him to overcome the writer's block mentioned during the opening scene of the opera. However, near the end of the opera the double meaning represented by Eros is emphasized by the bitonal and bimodal effects in the parts of the harp and the piano in Aschenbach's Phaedrus monologue.

Example 1: Scene 16 - The last visit to Venice

Example 1 contains the first of three stanzas. The text of the other two stanzas reads as follows:

Should we then reject it, Phaedrus,
The wisdom poets crave,
Seeking only form and pure detachment
Simplicity and discipline?

But this is beauty, Phaedrus,
Discovered through the senses
And senses lead to passion, Phaedrus
And passion to the abyss.

The final notes of every harp-piano interjection represent the systematic disintegration of Aschenbach's mind. Aschenbach's monologue starts with the word "beauty" against a doubled C, representing unity of thought (marked 1). If the principle of octave transposition is acknowledged, the closing intervals of the next five interjections demonstrate a systematic increase in tension. Passing through three consonant intervals, a major third (2), a perfect fifth (3), and another major third (4), then through the dissonant diminished fifth (5), the last interjection ends on a dissonant semitone clash against the sentence "to compassion with the abyss" (6).

The next stanza follows the same pattern, somewhat contracted (omitting the second major third interval), but also ending with a semitone clash against Aschenbach's question "Simplicity and discipline?"
The third stanza follows this pattern, but with the diminished fifth replaced by chords in both hands. These chords enhance the dissonant effect because the polytonal result causes semitone clashes, ending with a single semitone clash, marked sf, against "abyss".

The fact that each stanza, regardless of the line of argumentation, ends in a dissonant clash could suggest that at the end of the opera Aschenbach realizes that his conception of beauty might not have the kind of future that he had envisaged.

The meaning of myth enhanced by music

In a letter to Kerényi (1975:101) Mann referred to his "own unscholarly mythological musings" in a passing remark about Tadzio in Death in Venice. If one considers Mann's Death in Venice (1912) as a relatively early work (he was eighty years old when he died in 1955), and his explicit interest in mythology as a phenomenon of mature age, the deduction could be made that Benjamin Britten and Myfanwy Piper actually reinforced Mann's mature view of mythology through their explicit musical characterization of Apollo and Dionysus.

In order to illustrate how the meaning of myth is intensified by the music in the opera, I shall concentrate on the Games of Apollo (scene 7) at the end of the first act and the nightmare scene (scene 13) in the second act. The Games of Apollo portrays the systematic disintegration of Aschenbach and his belief in himself as disciplined writer, rational artist and servant of Apollo. In the nightmare scene the triumph of Aschenbach's irrational subconscious over his rational consciousness is symbolized by the triumph of Dionysus over Apollo in a conflict during the dream.

• The Games of Apollo

The idea of competition in both athletics and the arts was vital to the Greek spirit. In Greek antiquity both physical and intellectual competitions were included in these kinds of contests (Morford & Lenardon, 1995:175). The importance of both the physical and the aesthetic also suggests a fundamental duality exemplified by the god Apollo himself.

Many writers have criticized this lengthy scene in the opera, which lasts for 17 minutes (Hindley, 1990:515; Carnegy, 1987:173; Northcott, 1987:202). However, if it is realized that the Games of Apollo is not an innocent divertissement but a musical representation of the systematic disintegration of Aschenbach and his belief in himself as disciplined writer, rational artist and servant of Apollo, the games in the opera acquire new meaning. As the games progress from running to long jump, discus and javelin throwing, ending with wrestling, the dance becomes less controlled and more explicitly sensual (Corse & Corse, 1989:359), ending with a broken-down Aschenbach. Instead of regaining inspiration to write again, he is plunged into Dionysiac passion. He realizes that his love for the boy was not the Apollonian ideal of beauty but sensual love.

The Games of Apollo signifies an ambiguity in that the Apollonian ideal of discipline, order and clarity is in reality corrupted: Aschenbach systematically falls under the spell of the boy's charms, and this already anticipates the outcome of the dream that forms the climax of the opera in the second act. The fact that the opening strain of Apollo's music can be traced back to Tadzio's music could be regarded as a musical anticipation of the effect of the outcome of the games on Aschenbach. Consequently the outcome of the conflict between Apollonian order and Dionysian passion in Aschenbach's mind is suggested by Aschenbach's confession that "Eros is in the word" (see Spies, 2001:50).
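Before turning to the nightmare scene, a brief aside on the interval analysis of the Phaedrus monologue above: the "principle of octave transposition" amounts to reducing each closing interval modulo 12 semitones. The sketch below is illustrative only; the pitch-class numbering (C = 0 … B = 11) and the consonance labels are standard textbook conventions, not values taken from Britten's score.

```python
# Illustrative pitch-class arithmetic for the six closing intervals named
# in the stanza analysis. With octave transposition acknowledged, an
# interval is its semitone count mod 12; the consonant/dissonant split
# below follows common-practice convention.

CONSONANT = {0, 3, 4, 5, 7, 8, 9}  # unison, thirds, fourth, fifth, sixths

def interval_mod_octave(low_pc: int, high_pc: int) -> int:
    """Semitones from low_pc up to high_pc, octaves removed."""
    return (high_pc - low_pc) % 12

# The six closing sonorities (pitch classes above C are assumed values):
sequence = [
    ("doubled C (unison)", 0, 0),
    ("major third",        0, 4),
    ("perfect fifth",      0, 7),
    ("major third",        0, 4),
    ("diminished fifth",   0, 6),
    ("semitone clash",     0, 1),
]

for name, lo, hi in sequence:
    ic = interval_mod_octave(lo, hi)
    label = "consonant" if ic in CONSONANT else "dissonant"
    print(f"{name:20s} = {ic:2d} semitones -> {label}")
```

The systematic rise in tension the author describes is visible in the output: the consonant values (0, 4, 7, 4) give way to the dissonant tritone (6) and finally the semitone (1).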
• The nightmare

In the second act the conflict between Aschenbach's rational consciousness and his irrational subconscious takes a downward curve to reach a low point in scene 13 (the dream). This conflict is symbolized by a contest between Apollo and Dionysus, which represents the contest for Aschenbach's soul. Apollo implores Aschenbach to reject the abyss and to love beauty, reason and form, while Dionysus and his followers lure him towards the mysteries, towards life. Dionysus's warm, earthy baritone voice is contrasted with Apollo's ethereal countertenor voice.

Just as myths provide information on the inner world of humankind, dreams can be a vehicle for the transmission of elements into imagery or symbols (Morford & Lenardon, 1995:7-8). The significance of dream-symbols led Freud and his followers to analyse the similarity between dreams and myths. In the Freudian analysis of dreams, opposites (such as the ideas represented by Apollo and Dionysus in this opera) are important because a Freudian opposite registers dissatisfaction: "the notion of what you want involves the idea that you have not got it … that you want something different in another part of your mind" (Empson, 1953:193). In the state of sleep what is repressed can no longer be held back. The dream-situation represents a wish as fulfilled, a wish "which is represented in an unrecognizable form and can only be explained when it has been traced back in analysis" (Freud, 1952:55). In his theory on dreams, formulated at the beginning of the twentieth century, Freud showed that most of the dreams of adults can be traced back by analysis to erotic wishes (Freud, 1952:66). According to this theory, Aschenbach's subconscious must have been made up already, and the nightmare only confirmed the triumph of his irrational subconscious as represented by Dionysus.

Example 2: Scene 13 - End of the dream

The sacrifice of the bull (referred to in Example 2) is a typical Dionysian rite that can be regarded as a symbol of the sacrifice of Aschenbach's soul. In this regard one can refer to the dangerous bull game of the Minoan civilization: "The player seizes the horns, lets himself be thrown upward by the bull, turns one or more somersaults in the air and lands behind the animal which is running away" (Kerényi, 1976:12). Ritual as an imitation of nature, and as a manifestation of magic, could be regarded as a deliberate recapturing of something no longer possessed (Frye, 1957:119).

Although every phrase in Apollo's and Dionysus's music has A, Tadzio's tonal area, as starting point, with regard to argumentation they then move in opposite directions. The dichotomy in the purpose of their pursuit is further accentuated by the fast interchange between F major-minor and E major in the accompaniment. At the climax, against the word "sacrifice", the leap of a perfect fifth in Dionysus's part finally connects the two fields of A and E (which represent Tadzio and Aschenbach respectively), thereby implying an idealized state. However, ending on D# could be regarded as a corruption, in a horizontal manner, of the tonal field which represents Aschenbach. The dialogue ends with Apollo yielding his tonal field as well by descending to the field of A, the tonal area which symbolizes Tadzio.
The fact that Apollo withdraws ("I go now") into the field of Tadzio (the final note in Example 2) is a clear indication that Tadzio, representing the senses and the passionate in the artistic endeavour, turns out to be the winner in the contest for Aschenbach's soul. However, the apparent triumph of Tadzio is vertically corrupted by the ever-present semitone clash A-G# (283) in the accompaniment. This dissonant semitone could be regarded as an ironic constriction of its inversion, the expressive major seventh leap (the Tadzio-call) formed by the outline of Tadzio's theme (see Spies, 2001:50).

To conclude

The opposition of the two mythological figures, Apollo and Dionysus, creates a tension which, according to Hegelian thought, need not be regarded merely in a negative light, as it involves a positive dialectic. In Hegel's view of dialectics - a term which also originated in Greek thinking - "thought proceeds by contradiction and the reconciliation of contradiction, the overall pattern being one of thesis, antithesis, and synthesis" (Flew, 1979:94). Kerényi (1976:204) identifies a formal similarity between the conscious, conceptual thought of Hegel's dialectic and the natural, primordial dialectic as exemplified by the Dionysian cult.

The natural, primordial dialectic may be explained by the assumption that in every living being there are two innate tendencies: a tendency to build and a tendency to destroy; on the one hand a life drive and on the other a death drive. Thus, death and the destruction of life would be a part of life itself. Hegel did not think in terms of 'drives', but he pointed to the basis of the primordial dialectic when he said: 'It is the nature of the finite to have within its essence the seeds of extinction: the hour of its birth is the hour of its death'.

Dionysus brought the primeval world along with him. His onslaught stripped mortals of their conventions, of that which made them "civilized". From the depths of life that have become fathomless also arise ecstasy and inspired prophecy. Life is intoxicated by death at those moments when it glows with its greatest vitality, where the most remote is near and the past is the present (Otto, 1933:128-129).

According to Lévi-Strauss (1979:22) mythical thinking is original in that it plays the part of conceptual thinking: "[M]ore and more the sense data are being reintegrated into scientific explanation as something which has a meaning, which has a truth, and which can be explained" (Lévi-Strauss, 1979:6). Myth represents a kind of thinking other than what we are used to, a thinking through images. Images can reflect human experience without the mediation of ideas. As man reacts inwardly to his experience even before thinking takes place, prephilosophical insights and reactions to experience are established, which can be regarded as prephilosophical wisdom (Kerényi, 1976:xxxi). In this connection Otto is correct when he regards philosophy as the heir of myth (Otto, 1933:127).

By exploiting the mythical dimension of Thomas Mann's novella in his opera, Britten expressed the metaphysical in musical terms. In this essay I have tried to show how the musical treatment of myth has the potential to enhance understanding of the unconscious mind, or the inner landscape, regardless of time and place. As mythical characters in the opera communicate a message of universal human suffering, the opera demonstrates Otto's belief (1933:128-129) that the past is in the present, and that the remote is actually very near.
Note 3: In Plato's Phaedrus dialogue Eros plays a central role. The Phaedrus describes a dialogue between Socrates and Phaedrus, one of Socrates's young admirers. According to De Vries (1969:23) the central theme in the Phaedrus is the persuasive use of words: "Its means is beauty, its condition … is knowledge. Eros is the striving after knowledge and after beauty."
Miltefosine Resistant Field Isolate From Indian Kala-Azar Patient Shows Similar Phenotype in Experimental Infection

Emergence of resistance to the drugs used to treat Indian Kala-azar patients has undermined the control strategy. In this bleak situation, Miltefosine (MIL) was introduced to treat mainly antimonial-unresponsive cases. Within years, resistance to MIL has been reported. While checking the MIL sensitivity of recent KA clinical isolates (n = 26), we came across one isolate which showed four times the EC50 for MIL of the MIL-Sensitive (MIL-S) isolates and was considered a putative MIL-Resistant (MIL-R) isolate. The expression of the LdMT and LdRos3 genes of this isolate was found to be downregulated. Measurements of Th1/Th2 cytokines, ROS and NO, FACS dot plots and mitochondrial transmembrane potential were performed. The in vivo hamster model with this MIL-R isolate showed a much smaller reduction in liver weight (17.5%) compared to the average reduction in liver weight (40.2%) of the animals infected with MIL-S isolates. The splenic and hepatic stamp smears of MIL-R-infected hamsters revealed the retention of a parasite load of about 51.45%. The splenocytes of these animals failed to proliferate anti-leishmanial T-cells, and the lack of cell-mediated immunity hampered recovery. Thus, these phenotypic expressions in the experimental model may be considered similar to those of MIL-unresponsive patients. This is the first report of its kind.

[…] this deadly disease. For this, each and every field isolate of the patients should be characterized extensively, and this is not done in many places, including India. For MIL transport in Leishmania parasites, a two-subunit aminophospholipid translocase, the Leishmania donovani miltefosine transporter (LdMT) and its specific beta subunit LdRos3, internalizes the drug 23. Phospholipid vesicles (liposomes) employed as carrier systems for MIL reduce its toxic side effects 23, and there is a connection between the expression levels of both proteins and the parasite's sensitivity towards the drug 24,25. To understand the mechanism of resistance, the factors related to it, and the control strategy to be developed thereafter against the resistant parasites, it is a prerequisite to study MIL-R isolates in vitro and/or in vivo. In a recent report, two Leishmania isolates were identified as MIL-Resistant (MIL-R) by in vitro and genome studies, and these isolates were collected from confirmed Indian KA patients 26. A few years back, researchers in the field generated a MIL-R Leishmania parasite by step-wise increment of drug pressure in vitro and reported that, in the case of the MIL-Sensitive (MIL-S) Leishmania parasite, the effect of MIL was mediated through apoptosis-like death, but not in the MIL-R Leishmania parasite 27. To date, there is no report of animal models for the in vivo characterization of MIL-R isolates. The phenotypic expression observed in the animal model may be similar to that of MIL-unresponsive patients and would be instrumental in developing the control strategy. In the present study, we have characterized all the clinical isolates of Indian KA at species level because, although Leishmania donovani is historically known as the causative agent of Indian KA or VL 3,4, another species (L. tropica) has been found to be associated with the disease 5,28.
Thus, before undertaking any typing work with any clinical isolate, it became mandatory to ascertain its identification at species level with the help of species-specific markers [e.g., the rRNA gene internal transcribed spacers (ITS), heat shock protein of 70 kDa (hsp70), the major surface protease msp (gp63) gene and the genes encoding cysteine proteinase B] 29-32. As a part of our epidemiological search for Indian KA, we rigorously characterized all the recently collected clinical isolates through Randomly Amplified Polymorphic DNA (RAPD) analysis and performed Restriction Fragment Length Polymorphism (RFLP) analysis with the help of several species-specific markers (ITS1 and hsp70) to ascertain their species identity 4,5. All the isolates used here (n = 26) were collected from confirmed Indian KA cases and typed as Leishmania donovani 4,5. We then checked the drug sensitivity of all isolates (n = 26) for MIL and found one putative MIL-R field isolate of KA (L. donovani), with approximately four times the EC50 for MIL of the MIL-Sensitive (MIL-S) isolates. We opted for in vivo evaluation of this field isolate (study code T9) in the hamster model. Infection with this MIL-R isolate in hamsters showed a 17.5% reduction in liver weight, compared to an average reduction in liver weight of 40.2% in the animals infected with MIL-S isolates. MIL-R-infected animals revealed the retention of a parasite load in spleen and liver of about 51.45%. The splenocytes of these animals failed to proliferate anti-leishmanial T-cells, and this T-cell anergy hampered recovery, mimicking the scenario in MIL-unresponsive patients. Thus, we report for the first time the phenotypic expressions of a confirmed MIL-R L. donovani isolate in the hamster model.

Identification of the clinical isolates at species level by the Restriction Fragment Length Polymorphism (RFLP) method. The Internal Transcribed Spacer 1 (ITS1) RFLP 33 and the Heat Shock Protein 70 (hsp70) RFLP 32 are well-known molecular markers for the characterization of Leishmania parasites at species level. The ITS1 region and hsp70 region of all the samples were amplified separately and subjected to ITS1 RFLP and hsp70 RFLP analysis individually. Species-specific RFLP patterns were obtained for the L. donovani WHO strain (DD8) and the L. tropica WHO strain (K27), respectively. A portion of the isolates of the present study had been characterized earlier by RAPD (n = 9) 4 and RFLP (n = 25) 5,34 methods. We noticed that the additional clinical isolate (n = 1) of the present study showed ITS1 RFLP (Supplementary Fig. S1) and hsp70 RFLP (Supplementary Fig. S2) profiles similar to those of the L. donovani standard strain DD8. Thus, the clinical isolates, along with the putative MIL-R isolate (study code T9) included in the present study, were identified as L. donovani.

In vitro MIL susceptibility assay. MIL susceptibility was determined at the intracellular amastigote stage for the field isolates of KA. In the present study, we used RAW 264.7 cells as host cells; the percentage of infected MØs ranged from 80 to 89 and the number of amastigotes/100 MØs ranged from 90 to 98.
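The EC50 values reported next come from this kind of dose-response readout: amastigote survival inside macrophages is scored over a MIL dilution series, and the concentration giving half-maximal killing is interpolated from a fitted curve. The following is a minimal sketch of that fitting step, assuming a simple Hill-type model; the concentrations, survival fractions and function names are invented for illustration and are not the paper's raw data or protocol.

```python
# Minimal sketch: estimate an EC50 by fitting a Hill-type dose-response
# curve to (concentration, fraction surviving) data from an
# amastigote-macrophage assay. All numbers are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fraction of amastigotes surviving at a given MIL concentration."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

mil_um   = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # MIL, micromolar
survival = np.array([0.95, 0.88, 0.70, 0.45, 0.22, 0.08, 0.03])

(ec50, slope), _ = curve_fit(hill, mil_um, survival, p0=[3.0, 1.0])
print(f"EC50 ~ {ec50:.2f} uM, Hill slope ~ {slope:.2f}")
```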
When the intracellular amastigotes of the KA isolates were tested for susceptibility towards MIL, the EC50 values ranged from 1.74 ± 0.10 to 5.35 ± 0.83 μM, with a mean EC50 of 3.32 ± 0.07 μM for the MIL-sensitive (MIL-S) isolates, while only one field isolate, T9, showed an EC50 of 13.47 ± 0.87 μM (Table 1), approximately 4 times the mean EC50 value of the MIL-S isolates. The AG83 (MHOM/IN/1983/AG83) isolate was used as a reference strain, as it has been thoroughly characterized as sensitive towards both drugs: MIL 35 and Sodium Stibo Gluconate (SSG) 36. Our single MIL-resistant (MIL-R) field isolate showed an approximately 3 times higher EC50 value than that of the L. donovani MIL-S isolate (AG83). This observation corroborated a previous report of two MIL-R isolates 26, in which the IC50 values of MIL-R field isolates were approximately 2 times higher than the IC50 value of the L. donovani MIL-S standard strain (DD8) 26, although it has also been reported that the in vitro effectiveness of any anti-leishmanial drug against Leishmania parasites may depend on the host cell 37. Our putative MIL-R isolate (T9) is SSG-Sensitive (SSG-S), with an EC50 value of 4.69 ± 0.30 μg/ml; the result was also expressed in terms of an Activity Index (AI) using AG83 as the reference isolate of KA. Isolates with an AI ≥ 3.0 are considered SSG-Resistant (SSG-R) and those with an AI < 3 are considered SSG-Sensitive (SSG-S) 34,38. The AI of the MIL-R isolate is 2.51; it was therefore identified as SSG-S.

Expression of cytokine levels. In the case of MØs infected with MIL-S clinical isolates (Fig. 1a, panels AG83, T8), the expression level of IL-10 gradually decreased with increasing concentrations of MIL. On the other hand, there was no significant decrease in the expression level of IL-10 released from MØs infected with the putative MIL-R isolate (T9) and then treated with MIL (Fig. 1a). The expression levels of IL-12 (Fig. 1b, panels AG83, T8) and TNF-α (Fig. 1c, panels AG83, T8) released from MØs infected with MIL-S KA isolates and then treated with MIL gradually increased with increasing concentrations of the drug. On the contrary, there was no significant change in the expression levels of IL-12 (Fig. 1b, panel T9) or TNF-α (Fig. 1c, panel T9) released from MØs infected with the putative MIL-R isolate and then treated with MIL, with respect to the infected control group.

Measurement of nitric oxide (NO) and reactive oxygen species (ROS). The in vitro MIL sensitivity assay and the cytokine data showed the response patterns of the studied clinical isolates towards MIL. We carried out experiments to understand the status of NO and ROS in infected and drug-treated macrophages because they are essential leishmanicidal molecules, as stated earlier 39. In the case of MØs infected with the MIL-S clinical isolates (AG83 and T8), the generation of NO (Fig. 2a, panels AG83, T8) and ROS (Fig. 2b, panels AG83, T8) was gradually enhanced with increasing concentrations of MIL. There was no significant increase in NO (Fig. 2a, panel T9) or ROS (Fig. 2b, panel T9) released from MØs infected with the putative MIL-R isolate (T9) and treated with MIL, with respect to the infected control group.
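The Activity Index used above is simply the isolate's EC50 divided by that of the reference strain AG83, with the AI ≥ 3.0 cut-off the authors cite for SSG. A small sketch making that arithmetic explicit, using the values quoted in the text (the helper function itself is hypothetical):

```python
# Fold-change and Activity Index (AI) arithmetic, using values quoted
# in the text. AI = EC50(isolate) / EC50(reference AG83); for SSG,
# AI >= 3.0 is read as resistant.

def activity_index(ec50_isolate: float, ec50_reference: float) -> float:
    return ec50_isolate / ec50_reference

# T9 versus the mean EC50 of the MIL-sensitive isolates (micromolar):
mil_fold = 13.47 / 3.32
print(f"T9 MIL fold-change vs MIL-S mean: {mil_fold:.1f}x")  # ~4x

# SSG classification of T9, using its reported AI of 2.51:
ai_ssg = 2.51
print("T9 SSG status:", "SSG-R" if ai_ssg >= 3.0 else "SSG-S")  # SSG-S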
Detection of the mitochondrial transmembrane potential (ΔΨm) and viable Leishmania isolates following MIL treatment. JC-1 was used as a probe for the measurement of ΔΨm by flow cytometry. We observed that the MIL-S clinical isolates (AG83, T8) showed sensitivity to MIL, as evident from the noteworthy decrease in ΔΨm after 72 h of MIL treatment (Fig. 3a,b, panels AG83, T8). When the putative MIL-R isolate (T9) was exposed to similar MIL treatment, the red/green fluorescence ratio did not decrease significantly, owing to its lower sensitivity towards MIL (Fig. 3a,b, panel T9). The flow cytometric analysis with FITC Annexin V/PI double staining revealed that when the MIL-S clinical isolates were exposed to MIL, around 7% of the cell population remained viable (Fig. 3c,d, panels AG83, T8). On the contrary, after MIL exposure of T9 (putative MIL-R), about 67% of the cell population remained viable (Fig. 3c,d, panel T9).
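The ΔΨm readout just described reduces to a red/green fluorescence ratio: polarized mitochondria concentrate JC-1 into red-fluorescing aggregates, while depolarization shifts it to green monomers, so a falling red/green ratio after MIL exposure indicates drug sensitivity. A minimal sketch of that summary calculation follows; all fluorescence intensities are invented for illustration.

```python
# JC-1 summary sketch: a drop in the red/green fluorescence ratio after
# 72 h of MIL treatment indicates mitochondrial depolarization
# (MIL sensitivity). Intensity values are hypothetical.

def red_green_ratio(red_mfi: float, green_mfi: float) -> float:
    return red_mfi / green_mfi

samples = {  # name: ((red, green) untreated, (red, green) MIL-treated)
    "AG83 (MIL-S)": ((850.0, 210.0), (260.0, 480.0)),
    "T9 (MIL-R)":   ((820.0, 230.0), (760.0, 250.0)),
}

for name, (untreated, treated) in samples.items():
    r0 = red_green_ratio(*untreated)
    r1 = red_green_ratio(*treated)
    drop = 100.0 * (1.0 - r1 / r0)
    print(f"{name}: ratio {r0:.2f} -> {r1:.2f} ({drop:.0f}% drop)")
```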
Parasite loads of liver and spleen were expressed as Leishman-Donovan Units (LDU). In liver, the parasite load after MIL treatment in the experimental animal groups (AG83-TRE and T8-TRE) showed about 3-7% retention compared to that of the infected groups (Fig. 5c, panels AG83, T8), while group T9-TRE animals (infected with the putative MIL-R isolate and then treated with MIL) showed retention of 32-62% (average 51.54%) of parasites in the liver (Fig. 5c, panel T9). The retention of parasite load after MIL treatment in the experimental animal groups (AG83-TRE and T8-TRE) was about 3-8% in spleen compared to that of the infected groups (Fig. 5d, panels AG83, T8). T9-TRE animals showed retention of 36-69% (average 51.35%) of parasites in spleen (Fig. 5d, panel T9).

T-cell proliferation assay. The splenocytes of the treated MIL-S groups (AG83-TRE and T8-TRE) stimulated with SLA showed a significantly higher level of T-cell proliferation than those of the infected groups, while the splenocytes of the T9-TRE group animals failed to proliferate anti-leishmanial T-cells against leishmanial antigen at a significantly higher level than the T9-INF group (Supplementary Fig. S5).

Measurement of anti-leishmanial antibody responses. To understand the disease dynamics, anti-leishmanial antibodies were measured using the sera from all the groups of animals. The presence of anti-leishmanial IgG2 antibodies in the sera of these animals was detected together with IgG1 (Fig. 6a-c). Following MIL treatment, with respect to the infected groups of hamsters (AG83-INF and T8-INF), the levels of the IgG1 isotype in the treated groups [AG83-TRE (P = 0.001 at 10^-1 dilution), T8-TRE (P = 0.0039 at 10^-1 dilution)] decreased significantly (Fig. 6a,b), except for the T9-TRE group (hamsters infected with the putative MIL-R isolate T9 and then treated with MIL) (Fig. 6c). On the other hand, the MIL-treated hamster groups AG83-TRE (P < 0.0001 at 10^-3 dilution) and T8-TRE (P < 0.0001 at 10^-3 dilution) showed significantly increased levels of IgG2 with respect to the AG83-INF and T8-INF groups (Fig. 6a,b). In contrast, there was no significant change in the level of IgG2 between the T9-TRE and T9-INF groups (Fig. 6c).

Analysis of Th1/Th2 cytokine mRNA levels by real-time PCR. Real-time PCR data revealed a noteworthy increase in the expression levels of the mRNA transcripts of IFN-γ (Fig. 7a, panel AG83) and iNOS (Fig. 7b, panel AG83), and a significant decrease in the expression level of the mRNA transcript of TGF-β (Fig. 7c, panel AG83), in the MIL-treated MIL-S group of animals (AG83-TRE). The T9-TRE group of animals showed no significant change in the expression levels of the mRNA transcripts of Th1 and Th2 cytokines or iNOS with respect to the T9-INF group (Fig. 7a-c, panel T9).

Discussion

The common pentavalent antimonial sodium stibogluconate (SSG), used for the treatment of VL, necessitates a prolonged course of treatment and is losing its efficacy due to increasing parasite resistance. This has emerged as a major difficulty in the treatment and control of VL 9. The progression of SSG resistance in the endemic region, especially in Bihar, indicated that resistance could also emerge to the other established anti-leishmanial drugs, and this may be due to poverty, illiteracy, poor health and HIV/VL coinfection 6. Studies have been executed previously to demonstrate the role of probable factors in treatment failure and the mechanism of resistance towards the drug MIL 22,27. The function of fatty acid and steroid metabolism, in addition to the expression levels of the two MIL transporter proteins, LdMT and its specific beta subunit LdRos3 24,25, seems responsible for the resistance. It was further suggested that LdMT gene mutation could be employed as a molecular marker for MIL-R L. donovani isolates 26. Earlier studies confirmed that the biochemical actions for SSG uptake are catalyzed either by thiol-metabolising genes or antimony transporter genes, and several types of ATP Binding Cassette (ABC) transporters are related to multi-drug resistance (MDR) 36. We noticed a 3.4-fold increment of the MRPA gene in the putative MIL-R isolate, corroborating the observations that, under in vitro conditions, the MDR-related proteins [Multidrug Resistant Protein A (MRPA) or P-glycoprotein (PGPA)] have been amplified in different Leishmania spp. in response to different drugs 42. The parasites are not able to metabolize MIL itself and can extrude it via either exocytosis or, probably, an ABC transporter protein such as P-glycoprotein (mdr1) 43. These studies considerably increased our knowledge about Miltefosine resistance in clinical isolates of KA. Further studies need to be performed in natural populations of L. donovani to examine the epidemiology of resistance in order to diminish the severity of KA.

In order to unravel the mechanism of resistance, the establishment of an in vivo animal model is essential. The phenotypic expression in the experimental infection model may be extrapolated to that of KA patients unresponsive to Miltefosine. By testing the MIL susceptibility of all the clinical isolates of KA and PKDL in the in vitro amastigote-macrophage model, we identified one putative MIL-resistant (MIL-R) field isolate (study code T9), which was typed as L. donovani by the ITS1 RFLP and hsp70 RFLP methods. This finding was supported by the measurement of Th1 and Th2 cytokines and the levels of ROS and NO released from MIL-induced macrophages infected with this isolate and then treated with the drug. The measurement of ΔΨm and the FITC-conjugated Annexin V/PI double staining results further corroborated our claim. Experimentally generated MIL-resistant L. donovani isolates showed downregulated expression of the LdMT and LdRos3 transporters 25. Our study with a single field isolate resistant to MIL also showed downregulation in the expression of these transporters. The MIL-R field isolate also showed an approximately 3.4 times higher MRPA expression level than the MIL-S isolate (Supplementary Fig. S4).
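The 3.4-fold MRPA figure above comes from semi-quantitative RT-PCR band densitometry, in which each gene band intensity is normalized to the GAPDH band of the same sample before isolates are compared (the method is described in the methods section below). A minimal sketch of that ratio calculation, with made-up band intensities (the function name and the numbers are not from the paper):

```python
# Densitometric fold-change sketch: ImageJ-style band intensities are
# normalized to the GAPDH loading control per sample; the ratio of the
# normalized values gives the fold-change between isolates.

def normalized_expression(gene_band: float, gapdh_band: float) -> float:
    """Band intensity of the gene of interest relative to GAPDH."""
    return gene_band / gapdh_band

mrpa_sensitive = normalized_expression(1200.0, 4000.0)  # hypothetical MIL-S
mrpa_resistant = normalized_expression(4100.0, 4020.0)  # hypothetical MIL-R

fold = mrpa_resistant / mrpa_sensitive
print(f"MRPA fold-change (MIL-R vs MIL-S): {fold:.1f}x")  # ~3.4x with these values
```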
The hamster is a superior model for VL and develops a progressive, fatal disease which closely resembles the human form of the disease 44,45. The splenic and hepatic stamp smears revealed a retention of parasite load after MIL treatment in the AG83-TRE and T8-TRE groups of animals of about 3-8%. On the contrary, animals infected with the MIL-R isolate and then treated with the drug showed retention of 36-69% (average 51.35%) of parasites in spleen and 32-62% (average 51.54%) of parasites in liver. The cell-mediated immunity impaired in Leishmania infection is characterized by marked T-cell anergy specific for leishmanial antigen 46. After checking the effects of MIL on treated animals, we became interested to see whether the T-cell anergy occurring during progressive infection could be reversed by MIL treatment. The splenocytes of the T9-TRE group of animals failed to proliferate anti-leishmanial T-cells in response to leishmanial antigen. Active VL is also associated with the production of altered levels of antibody 44. A significant increase in IgG2 levels in cured animals is a surrogate marker of enhanced cell-mediated immunity 45. Our study revealed that the MIL-treated hamster groups AG83-TRE and T8-TRE showed significantly increased levels of IgG2, indirectly indicating the development of an effective Th1-type immune response 44,47. In contrast, there was no enhancement of cell-mediated immunity in the treated MIL-R-infected (T9-TRE) group. This observation was further supported by the measurement of the mRNA transcripts of Th1 and Th2 cytokines. Our observations strongly suggest that, of the three groups of experimental animals, the AG83-TRE and T8-TRE groups were almost cured with MIL treatment, but the T9-TRE group of animals did not recover, faithfully establishing the animal model for MIL-R. The whole-genome analysis of this clinical isolate is in progress.

Ethics statements. Bone marrow aspirates were collected from KA patients, as approved by the Ethical Committee of the Calcutta National Medical College, Kolkata. Written consent was obtained from the legal guardian of the patient (as it was the case of a minor). In the present study, all methods were carried out in accordance with the relevant guidelines.

Quantification of nitric oxide (NO). NO production was evaluated with the Griess reagent as described previously 39.

Measurement of mitochondrial transmembrane potential. Transmembrane potential (ΔΨm) was evaluated using JC-1, a lipophilic cationic dye 39.

Detection of viable parasites by FITC Annexin V/PI double staining 39. Briefly, untreated and MIL-treated (72 h treatment) promastigotes were washed with PBS. The pellets were resuspended in 1X binding buffer at a concentration of 1 × 10^6/ml, followed by incubation with 5 μl of FITC Annexin V and 1 µg/ml PI for 25 min in the dark. Then 400 μl of 1X binding buffer was added and the samples were analyzed by flow cytometry within 1 h.

Measurement of reactive oxygen species (ROS).

RNA isolation and semi-quantitative RT-PCR analysis of the genes responsible for MIL transport. RNA was isolated from samples by disruption in Trizol solution 51, and freshly prepared cDNA was then amplified by taking 0.5 μl of cDNA with 1 μl of 10 mM dNTP, 1.5 μl of 50 mM MgCl2 and 0.5 μl of Taq polymerase, as well as gene-specific primers (Supplementary Table S2). Amplification reactions were performed with the following cycling conditions for the genes of interest: 5 min at 95 °C, followed by 30 cycles of denaturation at 95 °C for 30 s, annealing at 55-60 °C for 30 s and extension at 72 °C for 30 s.
The respective PCR-amplified gene products were checked by agarose gel electrophoresis. For densitometry analyses, ImageJ software (National Institutes of Health) was used; the same band area in the agarose gel was used to determine band intensity, normalized to GAPDH.

Statistical analyses for the in vitro study. Results of all in vitro studies are expressed as mean ± SD. Student's t-test for significance was carried out using GraphPad Prism software, and a P value of <0.05 was considered significant. Correlations between the EC50 and other parameters were determined by the Spearman rank correlation coefficient and expressed as r 36.

Animals for the in vivo study. Golden hamsters were used for the in vivo experiments.

Selection of MIL doses for in vivo experiments. Hamsters were challenged with freshly transformed Leishmania T9, T8 and AG83 parasites (10^7 parasites/animal) via the intracardiac route 52. After 8 weeks of infection, infected animals were treated with MIL at a dose of 40 mg/kg (body weight) for 10 consecutive days and were sacrificed on day 45 post-treatment 44. Hamsters were grouped in the following ways: Normal: 5 healthy controls without challenge infection or MIL treatment; Infected: 10 animals each were infected with T9, T8 and AG83, represented as T9-INF, T8-INF and AG83-INF respectively; Treated: among the 10 infected animals of each group, 5 were treated with MIL; these groups are henceforth written as T9-TRE, T8-TRE and AG83-TRE respectively.

Collection of blood and preparation of serum. Blood was collected from hamsters as illustrated earlier 39 and kept overnight at 4 °C. Serum was then prepared from the collected blood samples.

Determination of organomegaly and parasite burden in spleen and liver. The weights and parasitic burdens of the spleen and liver from the experimental groups of animals were assessed after sacrifice. The splenic and hepatic parasite burdens of hamsters of the different groups were determined by microscopic evaluation of Giemsa-stained tissue imprints 39. The total parasite load in each organ is expressed in LDU (Leishman-Donovan Units): 1 LDU = amastigotes per nucleated cell × organ weight in milligrams.
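The LDU formula just quoted converts a microscopy count into an organ-level burden, and the percentage retention figures reported in the Results are ratios of treated to infected LDU values. A minimal sketch, assuming the formula exactly as stated in the text; the counts and organ weights are invented:

```python
# LDU and percent parasite retention, using the formula quoted above:
# 1 LDU = amastigotes per nucleated cell x organ weight in milligrams.
# All input values are hypothetical.

def ldu(amastigotes_per_nucleated_cell: float, organ_weight_mg: float) -> float:
    return amastigotes_per_nucleated_cell * organ_weight_mg

liver_infected = ldu(1.8, 3200.0)   # hypothetical infected-control animal
liver_treated  = ldu(0.9, 2900.0)   # hypothetical MIL-treated animal

retention_pct = 100.0 * liver_treated / liver_infected
print(f"LDU infected: {liver_infected:.0f}, LDU treated: {liver_treated:.0f}")
print(f"Parasite retention after treatment: {retention_pct:.1f}%")
```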
T-cell proliferation assay. Splenocytes from the studied groups of hamsters were prepared after Ficoll density-gradient centrifugation, dissolved in complete RPMI medium, plated in 96-well plates (at a concentration of 10^5 cells/well) and allowed to proliferate for 72 h at 37 °C with 5% CO2, either in the presence or absence of SLA (5 μg/ml) or ConA (5 μg/ml). Cells were treated with MTT (0.5 mg/ml) as described earlier 39. An isopropanol-HCl mixture (0.04%) was then used to solubilise the MTT crystals, and the absorbance at 570 nm was read on an ELISA plate reader (DTX 800 multimode detector, Beckman Coulter, California).

Measurement of anti-leishmanial antibody responses. Serum samples were collected from the different groups of hamsters and examined to determine the parasite SLA-specific antibody titre. The IgG1 and IgG2 present in the collected sera were measured as stated earlier 39.

Real-time PCR to estimate the mRNA expression levels of Th1/Th2 cytokines. RNA was isolated from the splenocytes of hamsters, and 50-100 ng of total RNA was used for synthesis of cDNA. RT-qPCR was done as described elsewhere 44. Briefly, it was carried out with 7 μl of SYBR green PCR master mix, 1 μl of cDNA from the RT reaction mix and gene-specific primers in a final volume of 15 μl. PCR was conducted under the following conditions: initial denaturation at 95 °C for 10 min, followed by 40 cycles, each consisting of denaturation at 95 °C for 15 s, annealing at 58 °C for 1 min and extension at 72 °C for 40 s, using the ABI 7500 real-time PCR system. Data were analyzed by the comparative CT method 45,54 (a minimal worked sketch of this calculation is given below). cDNAs from infected hamsters were used as comparator samples. All quantifications were normalized to the housekeeping gene hypoxanthine phosphoribosyl transferase (HPRT).

Statistical analysis for the in vivo study. The statistical level of significance between different groups was calculated by unpaired two-tailed Student's t-test with GraphPad Prism software (San Diego, California, USA). P < 0.05 was considered significant for all analyses.
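As noted above, the comparative CT method first normalizes each target Ct to HPRT within the same sample, then references the result to the infected comparator, giving a 2^-ΔΔCT fold-change. A minimal worked sketch; the Ct values below are hypothetical:

```python
# Comparative CT (2^-ddCt) sketch for relative mRNA expression,
# normalized to the housekeeping gene HPRT and referenced to the
# infected comparator sample. Ct values are hypothetical.

def fold_change(ct_target: float, ct_hprt: float,
                ct_target_ref: float, ct_hprt_ref: float) -> float:
    d_ct_sample = ct_target - ct_hprt          # normalize within sample
    d_ct_ref    = ct_target_ref - ct_hprt_ref  # normalize within comparator
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# IFN-gamma in a MIL-treated hamster versus its infected comparator:
fc = fold_change(ct_target=24.1, ct_hprt=20.0,
                 ct_target_ref=26.8, ct_hprt_ref=20.2)
print(f"IFN-gamma fold-change vs infected: {fc:.2f}x")  # >1 = upregulated
```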
Motor Weakness after Caudal Epidural Injection Using the Air-acceptance Test

Air injected into the epidural space may spread along the nerves of the paravertebral space. Depending on the location of the air, neurologic complications such as multiradicular syndrome, lumbar root compression, and even paraplegia may occur. However, cases of motor weakness caused by air bubbles after caudal epidural injection are rare. A 44-year-old female patient received a caudal epidural injection after an air-acceptance test. Four hours later, she complained of motor weakness in the right lower extremity and numbness of the S1 dermatome. Magnetic resonance imaging showed no anomalies other than an air bubble measuring 13 mm in length and 0.337 ml in volume positioned near the right S1 root. Her symptoms completely regressed within 48 hours.

Injection of local anesthetics or steroids into the epidural space through the caudal approach is a widely used and effective method for treating chronic benign pain syndromes, such as chronic axial pain, discogenic pain, spinal stenosis, and postsurgery syndrome [1]. Caudal epidural injection is a relatively safe and simple procedure with a low risk of inadvertent dural puncture, and it can also be safely used for postsurgery syndrome patients [1]. Successful caudal epidural injection requires correct evaluation of the needle position, which can be achieved by injecting a small amount of air and noting any bulging or crepitus of the tissues overlying the sacrum or over-resistance of the plunger. A test aspiration must also be done to rule out vessel puncture [2]. Despite these efforts, complications such as local anesthetic toxicity, hematoma, ecchymosis of the puncture site, infection, urinary retention, and incontinence may follow. However, neurologic complications due to caudal epidural injection are known to be very rare. When complications do occur, they usually result from surgical trauma or an underlying neurologic lesion [2]. Herein, we report a case of unilateral motor weakness in the right leg and numbness in the S1 dermatome area as a possible consequence of a small volume of trapped air from the caudal epidural injection.

CASE REPORT

The patient, a 44-year-old female with a weight of 48 kg, height of 158 cm, and no significant medical history or underlying condition, was admitted to the orthopedic ward for low back pain. Despite admission and conservative treatment, her pain failed to subside and she was referred to our pain clinic. However, she had no symptoms of radiculopathy of the lower extremities. Vital signs upon admission were within the normal range, with a blood pressure of 110/70 mmHg, a heart rate of 74 beats per minute, and oxygen saturation of 98%. Chest X-ray, electrocardiogram, complete blood cell count, blood chemistry, prothrombin time, activated partial thromboplastin time, and other laboratory findings revealed no abnormalities. Nevertheless, the patient could not walk continuously for 200 meters due to her low back pain. A physical examination showed local tenderness around the L4, L5, and S1 vertebrae, but the straight leg raising (SLR) test was negative. Motor and sensory functions were fully intact, and defecation and urination were normal as well.
L-spine magnetic resonance imaging (MRI) showed mild bulging of the intervertebral discs from L3 through L5. To relieve her symptoms, she received a caudal epidural injection. Local anesthesia was given around the puncture site with 3 ml of 2% lidocaine. Then, a 20-gauge spinal needle was inserted 2 cm inwards, and 1 ml of air was injected while checking for any bulging or crepitus of the tissues overlying the sacrum or over-resistance of the plunger. Resistance was present; thus, the needle was advanced 0.5 cm farther. Then, 1 ml of air was re-injected, and the loss of resistance was confirmed. No blood was aspirated, confirming a negative vessel puncture. In total, 2 ml of air were used during the procedure. After correctly positioning the needle, 15 ml of 0.3% mepivacaine and 20 mg of triamcinolone were injected.

Ten minutes later, the patient felt numbness in both legs and muscle weakness in the right lower leg. The decreased motor and sensory function in the right lower leg failed to resolve spontaneously and persisted for 1 hour. Specifically, sensory function, checked at the posterolateral side of the right lower leg and the plantar area of the right foot, decreased to just 20/100 compared with the corresponding areas of the normal left leg. With regard to muscle strength, flexion and extension of the right knee were normal, but at the right ankle, dorsiflexion was graded as motor grade IV and plantar flexion as motor grade I. Vital signs were within the normal range, with a blood pressure of 120/80 mmHg, a heart rate of 78 beats per minute, and oxygen saturation of 98%. Close observation was maintained for the next 4 hours, but the symptoms persisted, making the patient anxious. To rule out the possibility of a hematoma caused by vessel injury, the patient underwent an MRI, which showed a 13-mm-long air bubble with low signal intensity adjacent to the right S1 nerve root in both the T1- (Fig. 1) and T2- (Fig. 2) weighted images. A consultation with doctors from orthopedics and radiology was carried out, and based on the distribution of symptoms, the cause was agreed to be a space-occupying lesion, probably an air bubble, near the right S1 root. The patient was kept under close observation, and symptoms began improving spontaneously 7 hours post-procedure, with muscle strength reaching motor grade III for plantar flexion. Sensory function also improved to 40/100. By 24 hours post-procedure, the patient had almost completely recovered, with a motor grade of IV and sensory function of 80/100. Forty-eight hours after the initial procedure, motor and sensory functions were fully back to normal.

DISCUSSION

Administration of local anesthetics or steroids to the epidural space via the caudal approach is useful in the treatment of a variety of chronic benign pain syndromes, including lumbar radiculopathy, low back syndrome, spinal stenosis and pelvic pain syndromes [2]. Because of the simplicity, safety, and patient comfort associated with the caudal approach to the epidural space, this technique is beginning to replace the lumbar epidural approach for these indications in some pain centers [3]. In this case, however, within the first hour after the caudal epidural injection, the patient showed symptoms of plantar flexion impairment in the right ankle and numbness of the right S1 dermatome, namely the posterolateral side of the lower leg and the plantar area of the foot.
Possible causes of neurologic complications after a caudal epidural injection include an inadvertent intrathecal injection, an epidural abscess, and an epidural hematoma [4]. First, in this case, inadvertent intrathecal injection seems unlikely for the following reasons: MRI showed a normal anatomy of the dural sac, with its extension limited to the first sacral vertebra; the needle was advanced inwards for only 2.5 cm; and no cerebrospinal fluid (CSF) appeared during the test aspiration. Furthermore, intrathecal injections have bilateral effects, whereas the patient's symptoms persisted mainly unilaterally. In such cases of unilateral motor weakness and numbness, the possible presence of a midline epidural septum may be considered [5]. However, the initial bilateral numbness that appeared 10 minutes post-procedure ruled out this possibility. Second, although rare, an epidural abscess is also capable of causing paraplegia or paralysis with vertebral pain, fever, and motor and sensory deficits. Nonetheless, it is reported in the literature that an average of 5 days is needed for the symptoms to manifest [4], which does not align with the details of our case. Finally, an epidural hematoma, a rare but serious complication, can cause neurologic deficits that can remain permanent despite an emergency laminectomy [4]. Rapid diagnosis and treatment are crucial to counter its rapid progression. In the initial hours of our case, when the symptoms failed to improve, ruling out an epidural hematoma was crucial, providing the rationale for an emergency MRI study.

Both the T1- (Fig. 1) and T2- (Fig. 2) weighted images showed a low-signal lesion measuring 13 mm in the vicinity of the right S1 root. The MRI readings strongly suggested that the lesion was trapped epidural air rather than a hematoma. In the presence of an epidural hematoma, the initial MRI findings during the first 12 hours are characterized by an almost equivalent signal on the T1-weighted MRI and a slightly high signal on the T2-weighted MRI [6]. However, the patient's 4-hour post-procedure MRI findings showed a low-signal lesion on both the T1- and T2-weighted images. Hence, the possibility of a hematoma was ruled out.

With these possible causes ruled out, it was highly suspected that the patient's neurological symptoms were due to an air bubble trapped near the right S1 nerve root. Although no clear signs of direct nerve compression were seen, the consulting doctors from orthopedics and radiology all agreed that an air bubble, as a space-occupying lesion, was highly likely to account for the symptoms. This conclusion was based on the fact that previously nonexistent symptoms of right ankle plantar flexion impairment and S1 dermatome numbness appeared after the procedure, with manifestations similar to an S1 radiculopathy. Epidural air can spread along the nerves of the paravertebral space, and, depending on its location, neurologic complications such as multiradicular syndrome, lumbar root compression, and even paraplegia can occur [7,8]. Kennedy et al. [8] reported a case of back pain and paraplegia due to an erroneous injection of a massive amount of air into the epidural space during continuous lumbar epidural infusion of opioids and local anesthetics to treat cancer pain. Computed tomography (CT) showed the epidural space from L1 to L4 filled with air, with the thecal sac at the L2 and L3 levels severely compressed. After a spinal needle was introduced into the epidural space, removing 15 ml of air, the patient promptly recovered.
Miguel et al. [9] reported a case with symptoms of sharp shooting pain, motor weakness, and paraplegia after use of the loss-of-resistance-to-air technique for epidural anesthesia. CT showed compression due to air trapping on the spinal nerve roots of the corresponding symptomatic dermatomes. There have also been reported cases of subcutaneous emphysema developing in the supraclavicular region after epidural anesthesia, commonly due to injection of more than 20 ml of air after multiple failed or difficult attempts to identify the epidural space [10]. Cuerden et al. [11] reported that, in four obstetric patients, recovery was delayed due to neurologic symptoms such as numbness, paresthesia, muscle weakness, hypomyotonia, and decreased muscle reflexes following lumbar epidural anesthesia. All patients recovered within 48 hours. The authors concluded that air caught in the epidural space is absorbed within 24 to 48 hours, resulting in spontaneous resolution of the symptoms. This was also the case for our patient, because her symptoms subsided within 48 hours.

Unlike in the above reports, the volume of air used in our patient was minimal. However, it is highly likely that the air trapped at the right S1 nerve root was responsible for the unilateral motor weakness and the numbness of the S1 dermatome. Waldman [2] suggested the use of 1 ml of air for the air-acceptance test. Similarly, in our case, 1 ml of air was injected to find resistance, and then the needle was advanced 0.5 cm farther before the injection of an additional 1 ml. Thus, a total of 2 ml of air was used. With the aid of the Rapidia 2.8 program (INFINITT, Seoul, Korea), the MRI-identified air bubble was measured to be 13 mm in length and 0.337 ml in volume and was determined to be trapped near the right S1 nerve root. Stevens et al. [12] investigated how air bubbles within the epidural space migrate around the nerve roots. They reported that air bubbles collect near the outlet space of the exiting nerve roots. Therefore, while a large amount of injected air may cause radiculopathy, even the smallest amount of air may show up on an MRI like a herniated disc [13].

Because epidural gas is absorbed spontaneously, the first line of treatment in patients with neurologic symptoms must be conservative, using nonsteroidal anti-inflammatory drugs and muscle relaxants, along with close observation. Gas aspiration under fluoroscopic guidance can be considered; however, in our case, the gas volume was too small for the patient to undergo such a procedure. Surgery should be reserved for chronic encapsulated lesions not responding to conservative therapy [14]. To prevent complications from epidural air, only a minimal amount of air should be injected. Furthermore, the use of ultrasound or fluoroscopic guidance with contrast can be considered as an alternative to the air-acceptance test [15]. In conclusion, the use of even a minute amount of air during caudal epidural injection can cause air trapping around a nerve root and induce neurologic complications. Hence, more precautions should be taken during such procedures.
Incorrect statistical method in parallel-groups RCT led to unsubstantiated conclusions

The article by Aiso et al., titled "Compared with the intake of commercial vegetable juice, the intake of fresh fruit and komatsuna (Brassica rapa L. var perviridis) juice mixture reduces serum cholesterol in middle-aged men: a randomized controlled pilot study," does not meet the expected standards of Lipids in Health and Disease. Although the article concludes that there are some significant benefits to their komatsuna juice mixture, these claims are not supported by the statistical analyses used. An incorrect procedure was used to compare the differences between the two treatment groups over time, and a large number of outcomes were tested without correction; both issues are known to produce high rates of false positives, making the conclusions of the study unjustified. The study also fails to follow published journal standards regarding clinical trial registration and reporting.

Background

The conduct of rigorous randomized controlled trials (RCTs) is essential for progress in nutrition-related research [1]. In particular, rigorous tests of the causal effects of fruit and vegetable consumption on aspects of health would be valuable [2]. We therefore read with interest the paper by Aiso et al. [3] reporting the results of an RCT comparing the effects of consumption of a commercial vegetable juice with those of a fresh fruit and komatsuna (Brassica rapa L. var. perviridis) juice on serum cholesterol in men. Unfortunately, upon reading, it became clear that incorrect statistical analyses were used, that the conclusions drawn in the paper are not supported by the analyses reported, and that there is insufficient adherence to RCT reporting guidelines [4], making it further difficult to determine the appropriateness of the analyses and the extent to which they adhere to the original analytic plans.

What the authors conclude

The authors conclude: "Compared with the intake of commercial vegetable juice, the intake of fresh fruit and B. rapa juice is highly effective in reducing serum cholesterol." As we show below, this conclusion is not supported by the data and analyses presented.

Why the analysis is incorrect

The stated goal of this study was to compare the effects of the two types of juices on anthropometric data, blood constituents, and dietary intake. To do so, the authors performed paired tests (baseline versus after 4 weeks) within each treatment group, and declared a significant difference between the juices when one juice's test came up significant and the other juice's test did not. This analysis strategy is frequently used in the published literature, but it is not statistically valid and can result in a type-1 error rate as high as 50% in trials with two groups [5]. As Allison et al.
[6] wrote, given a parallel-groups RCT with measures of a continuous outcome at baseline and at endpoint, there are at least four legitimate ways to formally test the difference between two groups: (a) ignore the baseline data and analyze the endpoint data only with a simple independent-samples t-test; (b) use a repeated-measures ANOVA with one between-groups factor (treatment assignment) and one within-groups factor (time) and test the group-by-time interaction, i.e., fit $Y_{ij} = \beta_0 + \beta_1\,\mathrm{Treatment}_i + \beta_2\,\mathrm{Time}_j + \beta_3\,\mathrm{Treatment}_i\,\mathrm{Time}_j + e_{ij}$ for $i = 1, \ldots, N$, $j = 0, 1$, where $\{e_{ij}\}$ has a multivariate normal distribution [7,8]; (c) analyze change scores (i.e., endpoint measurement minus baseline measurement) with a simple independent-samples t-test; or (d) analyze the final outcome with an ANCOVA with one between-groups factor (treatment assignment) and one covariate (baseline scores) [9]. More details on these methods can be found in many classic experimental design books [7,9,10] and tutorial papers [6,8,10,11]. Of note, method (d) (ANCOVA) is typically more powerful than method (c) (t-test on change scores), as it uses the observed pre-post correlation to more efficiently reduce the residual variance [11-13].

Why the conclusions of the paper are not supported

Because a proper test between groups was not reported, we emailed the corresponding author of the paper, explained the statistical concern, and requested the standard deviation for the change in LDL-cholesterol and the change in total cholesterol in each group, or that they make the raw data available, thereby allowing us to calculate the values ourselves. Unfortunately, we received no reply to our request. The ICMJE guidelines (http://www.icmje.org/icmje-recommendations.pdf) state that "authors have a responsibility to respond appropriately and cooperate with any requests from the journal for data or additional information should questions about the paper arise after publication." Given this, we suggest that Aiso et al. make the raw data from this trial available so that others may verify the results.

Although appropriate between-groups tests of the effects of treatment assignment on the key outcome variables were not reported, it seems unlikely that many such tests could be significant. For total and LDL cholesterol, on which Aiso et al.'s conclusion is based, Aiso et al. do report the means and standard deviations for each variable within each of the treatment and control groups, both at baseline and at endpoint. Using this information, we can implement choice (a) above.¹ If we do this for total cholesterol, the two-tailed p-value is 0.9480 (t = 0.0663; df = 14). If we do this for LDL cholesterol, the two-tailed p-value is 0.5525 (t = 0.6087; df = 14). In neither case is the result even close to significant, meaning that by this legitimate test, the appropriate conclusion would have been that there was no compelling evidence of a treatment effect. Admittedly, the t-test only on endpoints is a relatively low-power test; choice (c) above (a t-test on change scores) will usually be more powerful. Although it is clear that such a t-test would not be significant for LDL cholesterol (the groups had identical 9 mg/dl reductions), it is conceivable that the difference between the two groups in the change of total cholesterol is statistically significant, but we lack the necessary information (such as the standard deviation of the change scores) to conduct such a test. If Aiso et al. can show a statistically significant between-groups difference in the outcome variable, then their conclusion would be supported, but at present it is unsupported.
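For readers who want to reproduce this kind of endpoint-only comparison, the sketch below shows how a pooled-variance independent-samples t-test can be computed directly from published summary statistics (mean, SD, n per group), which is all that test (a) requires. The group means, SDs, and sizes shown are hypothetical placeholders, not the values from Aiso et al.; note that df = n1 + n2 − 2 = 14 in the commentary implies 16 subjects in total (e.g., 8 per group).

```python
from scipy.stats import ttest_ind_from_stats

# Endpoint-only comparison (test (a)): pooled-variance two-sample t-test
# computed from summary statistics. All values below are hypothetical.
t, p = ttest_ind_from_stats(
    mean1=198.0, std1=30.0, nobs1=8,   # group 1 endpoint mean, SD, n
    mean2=197.0, std2=29.0, nobs2=8,   # group 2 endpoint mean, SD, n
    equal_var=True,                    # classic Student's t with df = n1 + n2 - 2
)
print(f"t = {t:.4f}, two-tailed p = {p:.4f}, df = {8 + 8 - 2}")
```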
There is a concern regarding Aiso et al.'s reporting of p-values from 58 variables per treatment group (116 tests overall). Such a high number of tests strongly suggests the use of a multiple-testing correction to control the type-1 error rate [14], as one would expect approximately 5.8 significant findings to occur by chance alone when testing 116 independent hypotheses at a significance level of 0.05 if all the null hypotheses are true (i.e., there is really nothing to find). The smallest reported p-value was 0.012, far larger than what would be needed for significance under a Bonferroni (0.05/116 = 0.000431) or Šidák [15] (0.000442) correction. Although correlation between the 58 variables may reduce the extent of type-1 error inflation, and methods exist for correcting multiple correlated outcomes [16], those methods were not used in this article, and without knowing the correlation between each variable it is impossible to quantify the extent of the inflation. Taken as a whole, it is plausible that many of the p-values reported as significant represent type-1 errors.
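The two corrected significance thresholds quoted above can be reproduced with a few lines of code; this sketch simply evaluates the Bonferroni (α/m) and Šidák (1 − (1 − α)^(1/m)) per-test thresholds for m = 116 tests at a family-wise α of 0.05.

```python
alpha, m = 0.05, 116  # family-wise error rate, number of tests

bonferroni = alpha / m                 # 0.000431
sidak = 1 - (1 - alpha) ** (1 / m)     # 0.000442

print(f"Bonferroni per-test threshold: {bonferroni:.6f}")
print(f"Sidak per-test threshold:      {sidak:.6f}")
# The smallest reported p-value (0.012) is far above either threshold.
```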
Lack of trial registration

Articles published in Lipids in Health and Disease require adherence to BioMed Central's editorial policies (http://www.lipidworld.com/about). BioMed Central follows the International Committee of Medical Journal Editors (ICMJE) guidelines, which necessitate clinical trial registration for RCT reports submitted to its journals. ICMJE defines a clinical trial as "any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes" [17]. ICMJE recommends that authors include the trial registration number in the abstract of the manuscript. This journal article does not include a clinical trial registration number. We emailed the authors to inquire about public clinical trial registry information for this article, but received no response. Given the above, we believe that the authors should provide documentation of clinical trial registration.

Conclusions

Clinicians, scientists, regulators, and the general public require, and have a right to expect, scientific evidence based on valid procedures [18] and free from spin [19] on which they can base decisions. The Committee on Publication Ethics [20] states that "Journal editors should consider retracting a publication if … they have clear evidence that the findings are unreliable [including] … as a result of … miscalculation or experimental error." We believe that the conclusions of Aiso et al. [3] are unreliable as a result of using an incorrect statistical procedure.

¹ We conducted our calculations with the free public software at http://www.graphpad.com/quickcalcs/ttest2/ so that anyone can reproduce our calculations.

Competing interests

The authors report no financial connection to the content of the paper discussed. David B. Allison and/or his institution have accepted funds from food companies, but not ones who, to his knowledge, market products discussed in this research.

Authors' contributions

David B. Allison conceived the paper. All three authors drafted sections of the manuscript and edited the entire paper. All authors read and approved the final manuscript.

Response to "Incorrect statistical method in parallel-groups RCT led to unsubstantiated conclusions" (Izumi Aiso)

Introduction

Here we respond to the commentary on our article by Allison et al. As an overview of the issues they raise, we have selected the following statement from their commentary: "Although the article concludes that there are some significant benefits to their komatsuna juice mixture, these claims are not supported by the statistical analyses used. An incorrect procedure was used to compare the differences in two treatment groups over time such that no direct between-group comparison was done, and a large number of outcomes were tested without correction." At various points in their commentary, they raise questions about the following aspects of our study: 1) statistical analysis, 2) dietary survey, and 3) clinical trial registration. Below, we respond to each of these questions in turn.

Statistical analysis

Before we turn to the detailed statistical issues that Allison et al. raise in their commentary, we would first like to point out that, when our article was reviewed by the journal's referees, both referees specifically stated that our statistical analysis did not need further verification. Referee 1 wrote in his/her review: "Statistical review: No, the manuscript does not need to be seen by a statistician." And Referee 2 wrote in his/her review: "Statistical review: No, the manuscript does not need to be seen by a statistician." As the referees thought that a statistical review was unnecessary, we felt confident that we had used appropriate statistical methods. We were therefore surprised that our methods were criticized by Allison et al. However, we were also very interested in their interpretation, so we followed their suggested method for analyzing our data.

In their commentary, Allison et al. state: "The stated goal of this study was to compare the effects of the two types of juices on anthropometric data, blood constituents, and dietary intake. To do so, the authors performed paired tests (baseline versus after 4 weeks) within each treatment group, and declared a significant difference between the juices when one juice's test came up statistically significant (defined by the authors as p < 0.05) and the other juice's test did not. This analysis strategy of comparing nominal significance of within-group changes (done with either paired-sample parametric or non-parametric tests) is frequently used in published literature, but is not statistically valid and can result in a false positive rate as high as 50% in trials with two groups of equal size," and "there are several legitimate ways to formally test the difference between two groups: (a), (b), (c)." […] "Method (c) (ANCOVA) is typically more powerful than method (b) (t-test on change scores), as it uses the observed pre-post correlation to more efficiently reduce the residual variance."

We thank Allison et al. for pointing out this new statistical approach. Like other researchers in the field, we were not aware of this more advanced method when we wrote our paper. However, we were pleased to be able to apply it to the analysis of our data following the suggestion of Allison et al. We return to that analysis below, but first we would like to clarify some issues related to the Wilcoxon signed rank test that we used in our study. The goal of this study was to examine changes in various parameters in the intervention group and the control group before and after their respective juice interventions. In analyzing our data, we performed both a paired t-test and a Wilcoxon signed rank test.
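As an illustration of the two within-group tests just mentioned, the sketch below runs a paired t-test and a Wilcoxon signed-rank test on baseline and 4-week measurements for one group; the arrays are hypothetical placeholder values, not data from the trial.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical baseline and 4-week values for one treatment group (n = 8).
baseline = np.array([210.0, 198.0, 225.0, 202.0, 190.0, 215.0, 230.0, 205.0])
week4    = np.array([201.0, 195.0, 214.0, 199.0, 193.0, 204.0, 221.0, 200.0])

t_stat, t_p = ttest_rel(baseline, week4)     # paired t-test
w_stat, w_p = wilcoxon(baseline, week4)      # Wilcoxon signed-rank test

print(f"paired t-test:        t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```

Note that, as the commentary stresses, a significant result from either within-group test does not by itself establish a between-group difference.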
Both tests showed that the concentrations of total cholesterol and LDL-cholesterol in the intervention group were significantly lower after 4 weeks compared with the baseline values. However, we chose to report the results of the Wilcoxon signed rank test, as that test is more appropriate for small samples that do not have normal distributions. In retrospect, we now think that it would have been clearer to include the reason we selected the Wilcoxon test in our article. We apologize for any misunderstanding that this omission may have caused. As Allison et al. raised questions about our statistical methods, we first rechecked the results of our original statistical analysis. After that, we applied the statistical analysis that they suggested.

a. Re-application of the Wilcoxon signed rank test. When we repeated the Wilcoxon signed rank test to ensure that the results of our original test were accurate, we found that the results were the same as those we had originally published.

b. Application of analysis of covariance (ANCOVA). In their commentary, Allison et al. state: "there are several legitimate ways to formally test the difference between two groups: (a), (b), (c)." […] "Method (c) (ANCOVA) is typically more powerful than method (b) (t-test on change scores), as it uses the observed pre-post correlation to more efficiently reduce the residual variance." Following this suggestion, we re-examined our data using ANCOVA. Data were analyzed using SPSS for Windows version 15.0J software. We conducted the ANCOVA by adjusting for age, BMI, and each variable. The results did not show a significant difference for the parameters in either group. We speculate that this may be due to the small number of subjects participating in the study. In the light of this new analysis, we think that the conclusions stated in our original article should be moderated. We return to this point in our conclusion below.
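For concreteness, this is what a baseline-adjusted ANCOVA of the kind suggested looks like in code (here with Python's statsmodels rather than the SPSS software the authors used); the data frame values are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical endpoint/baseline data for two groups (illustration only).
df = pd.DataFrame({
    "group":    ["juice", "juice", "juice", "control", "control", "control"] * 3,
    "baseline": [210, 198, 225, 205, 192, 218, 202, 190, 215, 200, 188, 221,
                 230, 205, 199, 226, 208, 195],
    "endpoint": [201, 195, 214, 204, 190, 215, 199, 193, 204, 198, 186, 219,
                 221, 200, 196, 224, 205, 194],
})

# ANCOVA: endpoint as outcome, treatment group as factor, baseline as covariate.
model = smf.ols("endpoint ~ C(group) + baseline", data=df).fit()
print(model.summary().tables[1])  # the C(group) coefficient tests the treatment effect
```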
Dietary survey

In their commentary, Allison et al. state: "There is also a concern regarding Aiso et al.'s reporting of p-values from 58 variables per treatment group (116 tests overall)." We would like to clarify the nature of the dietary survey used in our study. We used the brief-type self-administered diet history questionnaire (BDHQ), which is a standard dietary history survey instrument used in Japan [21,22]. The BDHQ is used to calculate intake values such as energy and nutrients based on information about 58 types of food and drink. The questionnaires are batch processed at the Diet History Questionnaire Support Center. Because the Center calculates nutrient values from the intake frequency of the 58 food and drink items and then sends those values to us, we cannot individually manipulate the food and drink items as variables afterwards. We can only perform our analysis based on the data categories provided by the Center, and therefore we cannot perform a more detailed statistical analysis on the 58 items.

Clinical trial registration

In their commentary, Allison et al. state: "The study also fails to follow published reporting guidelines for this journal and the scientific community overall regarding clinical trial registration and reporting." We have registered our study with the UMIN-CTR Clinical Trial database. The information can be found at: https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr.cgi?function=brows&action=brows&recptno=R000022765&type=summary&language=E

Conclusion

The statistical test that we used in our article was considered appropriate by both of the journal's reviewers. We have rechecked the results of that test and obtained the same result. In addition, we have carried out the ANCOVA suggested by Allison et al. The ANCOVA did not show a significant difference between the intervention group and the control group. We thank Allison et al. for providing us with this insight. In future studies we plan to increase the number of subjects and reinvestigate the effect. We will also consider conducting a crossover study similar to that of Lee et al. in the future [23]. Based on the insights that we have gained from the suggestions of Allison et al., we think that the conclusion originally drawn in our article should be moderated and stated as follows: Compared with the intake of commercial vegetable juice, the intake of fresh fruit and B. rapa juice may be effective in reducing serum cholesterol. We would like to once again thank Dr. Allison and colleagues for their thoughts on our article.
Mining Retrospective Data for Virtual Prospective Drug Repurposing: L-DOPA and Age-related Macular Degeneration

BACKGROUND: Age-related macular degeneration (AMD) is a leading cause of visual loss among the elderly. A key cell type involved in AMD, the retinal pigment epithelium, expresses a G protein-coupled receptor that, in response to its ligand, L-DOPA, up-regulates pigment epithelium-derived factor while down-regulating vascular endothelial growth factor. In this study we investigated the potential relationship between L-DOPA and AMD.

METHODS: We used retrospective analysis to compare the incidence of AMD between patients taking vs not taking L-DOPA. We analyzed 2 separate cohorts of patients with extensive medical records from the Marshfield Clinic (approximately 17,000 and approximately 20,000 patients) and the Truven MarketScan outpatient databases (approximately 87 million patients). We used International Classification of Diseases, 9th Revision codes to identify AMD diagnoses and L-DOPA prescriptions to determine the relative risk of developing AMD and the age of onset with or without an L-DOPA prescription.

RESULTS: In the retrospective analysis of patients without an L-DOPA prescription, the AMD age of onset was 71.2, 71.3, and 71.3 years in 3 independent retrospective cohorts. Age-related macular degeneration occurred significantly later in patients with an L-DOPA prescription, at 79.4 years, in all cohorts. The odds of developing AMD were also significantly reduced in patients taking L-DOPA (odds ratio 0.78; confidence interval, 0.76-0.80; P < .001). Similar results were observed for neovascular AMD (P < .001).

CONCLUSIONS: Exogenous L-DOPA was protective against AMD. L-DOPA is normally produced in pigmented tissues, such as the retinal pigment epithelium, as a byproduct of melanin synthesis by tyrosinase. GPR143 is the only known L-DOPA receptor; it is therefore plausible that GPR143 may be a fruitful target to combat this devastating disease.

Developing a new drug costs more than $2 billion and takes 13.5 years from discovery to market. Drug repositioning does not require anywhere near these costs and has been successfully used for more than a dozen drugs. 1 Electronic medical records (EMRs) offer a powerful tool to examine the effects of a drug on conditions for which it was not originally prescribed. 2 Indeed, long-term EMRs can be mined for retrospective data to develop a "virtual prospective" drug repurposing study. Here we use EMR analysis to determine whether L-DOPA, a drug used for movement disorders, is a candidate for treatment of an unrelated disease, age-related macular degeneration (AMD).

Age-related macular degeneration is the leading cause of blindness in developed nations, 3-6 even accounting for 10% of blindness in Sudan. 7 Despite years of intensive research efforts, we do not know the cause of AMD. Patients with AMD typically experience a gradual loss of central vision over years. Most patients develop geographic atrophy, a progressive loss of the region of highest visual acuity, the macula. When this atrophy involves the center of the macula, visual acuity drops precipitously. The other form of AMD involves the development of abnormal blood vessels, or neovascularization, leading to "wet" or "exudative" AMD. These abnormal blood vessels, if left untreated, result in progressive leakage, bleeding, and irreversible scarring of the macula. 8 Wet AMD tends to develop suddenly and progress rapidly, resulting in catastrophic vision loss. 9-15
Although wet AMD occurs in only 10%-15% of AMD cases, it is responsible for most blindness due to AMD. 16 The impact of AMD on Americans is staggering. Age-related macular degeneration affects patients of all ethnicities, but vision loss due to AMD is most common among Caucasians and is approximately 5-fold less common among those of African descent, with intermediate risk in Hispanic and Asian populations. 4,6,17 Approximately 9 million people in the United States have moderate to severe AMD, and this is projected to increase to more than 16 million by the year 2020. 12,18 Approximately 1.75 million of these people have vision loss or immediate vision-threatening disease (wet AMD or geographic atrophy). 18 The cost associated with AMD will only increase as the number of people over the age of 65 years increases. For those patients with vision loss due to AMD through geographic atrophy, there is no treatment at all, only vitamin supplements that may slow vision loss. 19,20 Recent progress in the use of agents that inhibit vascular endothelial growth factor (VEGF) has significantly improved the outcomes of patients who develop wet AMD. 21-24 These treatments have been successful in preserving vision, but this comes at a cost of significant discomfort, inconvenience, and expense. The most successful treatment involves repeated injections of VEGF inhibitors directly into the eye. Although successful, these drugs can be incredibly expensive. 25 The cost of this treatment strategy in 2010 through Medicare Part B was just under $2 billion. 26

CLINICAL SIGNIFICANCE
• Patients prescribed L-DOPA are less likely to develop age-related macular degeneration (AMD).
• In patients taking L-DOPA who did develop AMD, the age of onset was significantly delayed.
• L-DOPA may both prevent and delay AMD in aged patients.

We have previously discovered a G protein-coupled receptor that binds to and is activated by L-DOPA. 27 This receptor, GPR143, is expressed in the retinal pigment epithelium, a primary support tissue for the neurosensory retina. Further, we have shown that GPR143 controls trophic factor release by the retinal pigment epithelium, 27,28 such that GPR143 signaling may protect from AMD. Herein we test this novel hypothesis, investigating whether L-DOPA may be repositioned as an AMD-preventative drug, using EMRs in a virtual prospective clinical trial.

METHODS

We used International Classification of Diseases, 9th Revision (ICD-9) codes 362.50, 362.51, 362.52, and 362.57 to capture all AMD diagnoses from each database. We used prescription history of L-DOPA, rather than a Parkinson's disease diagnosis (PD: ICD-9 332), because many patients with Parkinson's disease do not take L-DOPA, and individuals without Parkinson's disease are prescribed L-DOPA for other movement disorders. Because our real question related to L-DOPA and AMD, regardless of why patients were prescribed L-DOPA, this creates an unbiased observation. Statistical analysis included t tests and binomial testing for the Marshfield Clinic cohort (Equation 1, below) to examine the population distribution. For the Truven MarketScan cohort we limited our analysis to those with a record of ophthalmology, for any reason (15,215,458 individuals). This allows for selecting patients with access to ophthalmologists or other healthcare providers diagnosing ophthalmic conditions without affecting the potential relationship between L-DOPA use and AMD.
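As a sketch of how this kind of code-based cohort selection is typically done, the snippet below intersects patients carrying one of the AMD ICD-9 codes with patients holding an L-DOPA prescription record; the tables and values are hypothetical placeholders, not MarketScan or Marshfield data.

```python
import pandas as pd

AMD_CODES = {"362.50", "362.51", "362.52", "362.57"}

# Hypothetical diagnosis-claim and prescription tables (illustration only).
dx = pd.DataFrame({"patient_id": [1, 1, 2, 3],
                   "icd9":       ["362.51", "366.9", "362.52", "332"]})
rx = pd.DataFrame({"patient_id": [2, 3],
                   "drug":       ["levodopa", "levodopa"]})

amd_patients   = set(dx.loc[dx["icd9"].isin(AMD_CODES), "patient_id"])
ldopa_patients = set(rx.loc[rx["drug"] == "levodopa", "patient_id"])

both = amd_patients & ldopa_patients
print(f"AMD: {amd_patients}, L-DOPA: {ldopa_patients}, both: {both}")
```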
The prevalence of AMD in this selected population was 4.5%, indicating that AMD was not overrepresented by including individuals who had an ophthalmology history. For comparisons, using SPSS (version 22; SPSS Inc, Chicago, Ill), an independent-samples t test was used to compare the age difference between the groups, and multinomial regression analysis was used to control for potential confounding variables (age and gender) and to evaluate the association between L-DOPA use and a diagnosis of AMD by calculating odds ratios (ORs), 95% confidence intervals (CIs), and P-values.

Marshfield Clinic Cohorts

To determine the possible relationship between L-DOPA and AMD, we examined clinical data from the Marshfield Clinic's Personalized Medicine Research Project (PMRP) (N = approximately 20,000), 29 plus an additional non-overlapping group of approximately 17,000 patients with long-term, nearly complete electronic health records in the Marshfield Epidemiologic Study Area. 30 Institutional review board approval was obtained. In the PMRP cohort, AMD was present in 5.7% of the subjects (n = 1142), and Parkinson's disease (ICD-9 332) was present in 0.85% of the subjects (n = 170). However, AMD and Parkinson's disease were found together in 0.21% of the subjects (n = 43), 4 times the expected rate if they were independent variables. This is not unexpected, because both AMD and Parkinson's disease are disorders of aging and may even share some common etiology. However, the etiologies of the 2 diseases have not previously been shown to intersect, and they do not share any known risk factors. In fact, one of the main risk factors for AMD, smoking, 31 may protect from Parkinson's disease. 32,33 Because we are primarily interested in determining the effect of L-DOPA on AMD, and because only 67% of the Parkinson's disease patients in PMRP have been given L-DOPA (while other patients without Parkinson's disease are prescribed L-DOPA), subsequent analyses included all patients taking L-DOPA (1.1% of PMRP, n = 229) with or without Parkinson's disease.

We found that an AMD diagnosis (Dx) and an L-DOPA prescription (Rx) also occurred together in the EMR 3 times more frequently than expected (in 0.2% of subjects, n = 39), even after stratifying for age. To examine this further, we studied the 39 patients with both an AMD Dx and an L-DOPA Rx in their EMR. Because the average age of L-DOPA Rx is 67.1 years, and the average age of onset of AMD in the PMRP cohort is 71.2 years, we would expect a bias toward the L-DOPA Rx appearing in the EMR earlier than the AMD Dx in patients with both in their EMR. However, the opposite trend was found. Of the 39 patients in PMRP with both AMD and L-DOPA in their EMR, 30 received L-DOPA after the AMD diagnosis; 4 received L-DOPA in the same year; and 5 received L-DOPA before the AMD diagnosis. The same trend was noted within the major age brackets: ages 65-70 years: 9 L-DOPA after AMD, 1 L-DOPA before AMD; ages 70-75 years: 10 L-DOPA after AMD, 1 same year; ages 75-80 years: 4 L-DOPA after AMD, 2 L-DOPA before AMD, 2 same year; ages 80-85 years: 3 L-DOPA after AMD, 1 L-DOPA before AMD.

We also examined an independent patient cohort, a subset of the Marshfield Epidemiologic Study Area (approximately 100,000 people), consisting of approximately 17,500 individuals with more complete EMRs. The same trends were noted in the 20 patients in this cohort with both AMD and L-DOPA in their EMR: 14 received L-DOPA after the AMD diagnosis; 1 received L-DOPA in the same year; and 5 received L-DOPA before the AMD diagnosis.
Figure 1 summarizes the combined data for patients from both cohorts, PMRP and the Marshfield Epidemiologic Study Area subset (n = approximately 37,500). Thus, our study shows that AMD and Parkinson's disease (or an L-DOPA Rx) occur together more frequently than if they were independent, even after stratifying for age. As illustrated in Figure 1, for our combined cohorts, the average L-DOPA Rx age is 67.2 years, 4 years younger than the average AMD Dx age (71.3 years), similar to other studies. Just as in the PMRP subset, the expectation is that we should see more individuals with an L-DOPA Rx before an AMD Dx among individuals who have AMD and have taken L-DOPA at any time. However, again the opposite pattern is seen: the vast majority took L-DOPA only after an AMD Dx (Z score 4.627; P < .001), implying that L-DOPA is protective against AMD. Most intriguingly, as shown in Figure 1 and summarized in Table 1, the AMD Dx age is significantly skewed in the 10 people who had an L-DOPA Rx before the AMD Dx (79.3 years) compared with the 44 people who had L-DOPA after the AMD Dx (71.3 years), demonstrating that the AMD Dx was significantly delayed in people taking L-DOPA before getting AMD (t test: 3.567; P < .01).

Our age distribution of AMD Dx and L-DOPA Rx fits the known national pattern, 34,35 and so we expect to see more individuals with an L-DOPA Rx before an AMD Dx. We performed a binomial test (Equation 1) with a conservative null-model assumption in which only half of L-DOPA Rx cases would fall before the AMD Dx. We also conservatively assumed that only 44 of the 54 individuals had the L-DOPA Rx after the AMD Dx (i.e., we conservatively categorized the 7 individuals for whom the L-DOPA Rx date was effectively indistinguishable from the AMD Dx). The resulting conservative P-value for observing 44 or more individuals from the 54 total under these assumptions was 1.7E-06, which is highly significant. We conclude that, because the actual P-value of the data is even more significant than the conservative one calculated, these data offer compelling evidence of substantial skewing of the AMD Dx dates to later than the L-DOPA Rx dates relative to what one would expect:

$P = \sum_{k=44}^{54} \binom{54}{k} \left(\tfrac{1}{2}\right)^{54}$  (1)
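The binomial tail probability in Equation 1 can be verified directly; this sketch computes P(X ≥ 44) for X ~ Binomial(n = 54, p = 0.5), which matches the reported 1.7E-06.

```python
from scipy.stats import binom

# One-sided binomial test: probability of observing 44 or more of 54
# L-DOPA prescriptions after the AMD diagnosis under a fair-coin null.
p_value = binom.sf(43, n=54, p=0.5)  # sf(43) = P(X >= 44)
print(f"P(X >= 44 | n=54, p=0.5) = {p_value:.2e}")  # ~1.7e-06
```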
Truven MarketScan Cohort

To further examine the possible protective role of L-DOPA in AMD, we performed a similar retrospective analysis using the Truven MarketScan outpatient databases from the years 2007-2011 (Truven Health Analytics). These are the largest insurance-claim-based proprietary databases in the United States, containing medical insurance claim records of more than 87 million unique individuals. The de-identified and anonymized data provided by the MarketScan databases include demographic and medical diagnosis information. The Outpatient Prescription Drug databases provide data on the medications used (both generic and brand) by each patient (in the form of National Drug Codes) and the dates on which medications were dispensed to patients.

Figure 2 summarizes our statistical analysis of the MarketScan cohort. In the subset of patients with a record of an ophthalmology-related diagnosis, we found that the mean age at first recorded AMD diagnosis in patients not treated with L-DOPA (n = 679,574) was 71.4 years, in agreement with both cohorts from the Marshfield Clinic and with AMD incidence statistics. 34 Thus, although we do not have complete medical records, this cross-sectional cohort matches other population-based AMD incidence characteristics. We also examined the mean age of L-DOPA prescription in all of the MarketScan databases to determine whether it matched population-based statistics and the Marshfield databases, and found that the average age of L-DOPA prescription was 68 years, again similar to the Marshfield Clinic cohort and national averages. These data are summarized in Table 1.

Having verified that both AMD age of onset and L-DOPA prescription ages match expectations, we investigated the intersection of those populations. As illustrated in Figure 2, the mean age of first AMD diagnosis in patients with an L-DOPA prescription record was 79.3 years (n = 12,387), significantly later than in individuals without an L-DOPA prescription, 71.4 years (P < .001). Using multinomial logistic regression, we found that, after controlling for age and gender, patients with a prescription history of L-DOPA were significantly less likely to have a diagnosis of AMD (OR 0.78; CI, 0.76-0.80; P < .001). Importantly, this finding also carried through to diagnoses of neovascular AMD (ICD-9 362.52). After controlling for age and gender, and excluding patients with a record of neovascular AMD before an L-DOPA prescription history, we found that the age of onset of wet AMD without L-DOPA was 75.8 years, whereas neovascular AMD onset in those with an L-DOPA prescription history was 80.8 years; this difference was significant, P < .001. Further, the OR suggests that patients with a record of L-DOPA were significantly less likely to have a diagnosis of neovascular AMD (OR 0.65; 95% CI, 0.65-0.69; P < .001).

Although we suspect that the positive trophic environment created by increasing retinal pigment epithelium secretion of pigment epithelium-derived factor (PEDF) may account for protection from AMD via GPR143 signaling, a corresponding decrease in VEGF secretion from the retinal pigment epithelium is also possible. 28 The combined effect of increased PEDF, a potent antiangiogenesis factor, 36-38 and decreased secretion of VEGF may act together to reduce neovascular AMD. We also examined whether this effect was specific to L-DOPA by testing for a potential relationship in patients taking other movement disorder drugs. These drugs are dopamine receptor agonists, largely targeted at the D2 dopamine receptor. As shown in Figure 2, we found a small but significant delay in AMD onset in this group, which developed AMD at 73.9 years (CI, 73.73-74.10 years; P < .05). The OR for this group is 0.71 (CI, 0.70-0.73; P < .001). Compared with the age of onset with no drug (71.34 years) or with L-DOPA (79.26 years), this was significantly different from both (P < .05). In our previous studies of GPR143, we showed that dopamine and L-DOPA, closely related molecules, compete for the same GPR143 binding site, 27 suggesting that dopamine receptor agonists developed for movement disorders may cross-react with GPR143. However, it is also possible that other dopamine receptors participate in the effect we observed. We also examined this in the Marshfield databases but found no effect on AMD onset or OR for any other movement disorder therapies.
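As an illustration of how an adjusted odds ratio of this kind is typically obtained, the sketch below fits a binary logistic regression of AMD status on L-DOPA exposure while controlling for age and gender, then exponentiates the exposure coefficient. The simulated data frame is a hypothetical placeholder, not MarketScan data, and the paper itself used multinomial regression in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Hypothetical cohort: AMD risk rises with age; L-DOPA exposure is protective.
df = pd.DataFrame({
    "age": rng.normal(72, 8, n),
    "male": rng.integers(0, 2, n),
    "ldopa": rng.integers(0, 2, n),
})
logit_p = -12 + 0.15 * df["age"] - 0.25 * df["ldopa"]
df["amd"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("amd ~ ldopa + age + male", data=df).fit(disp=0)
or_ldopa = np.exp(model.params["ldopa"])              # adjusted odds ratio
ci_low, ci_high = np.exp(model.conf_int().loc["ldopa"])
print(f"adjusted OR for L-DOPA: {or_ldopa:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```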
DISCUSSION

In this retrospective study of 3 independent cohorts, we show for the first time that, of patients with a history of both AMD and L-DOPA use, most received L-DOPA after an AMD diagnosis, in contrast to the expected opposite trend given that the mean age of L-DOPA prescription is years earlier than AMD onset. Furthermore, those who went on to have AMD were diagnosed with AMD at a significantly later age than those who had no record of taking L-DOPA. These results were the same for both dry AMD and neovascular AMD. These data strongly support a protective role for L-DOPA in AMD pathogenesis.

Our experimental design does not allow us to specifically assess the mechanism of action of L-DOPA on AMD incidence. GPR143 is the only known receptor for L-DOPA, 27,28,39 and signaling through GPR143 simultaneously increases PEDF secretion while decreasing VEGF, providing a plausible biological explanation for the ameliorating effect of L-DOPA on the retina. Normal aging in the retina includes both reduced pigmentation, 40,41 the source of L-DOPA, and reduced retinal PEDF. 42 Our results may also explain the racial differences in AMD frequency and suggest that pigmentation, a surrogate for GPR143 activity, may be protective against AMD. Pigment epithelium-derived factor levels are significantly lower in the vitreous and Bruch's membrane of eyes with neovascular AMD, 43-45 further suggesting an imbalance between retinal pigment epithelium secretion of PEDF and VEGF as part of AMD pathology. 45-47 Importantly, our data suggest that GPR143 signaling, a component of both the pigmentation and PEDF/VEGF pathways, could be manipulated pharmaceutically to prevent or delay AMD pathogenesis. Finally, the drug to manipulate GPR143 signaling exists, has been used by millions for 50 years, is safe, and is available as a low-cost generic. Our data indicate that prospective clinical trials to determine whether L-DOPA can prevent AMD are warranted.

Figure 2. Data from the Truven MarketScan database illustrate that L-DOPA both delays age-related macular degeneration (AMD) onset and reduces the risk of developing AMD. (A) Data represent the age of AMD onset in several groups, with error bars representing the 95% confidence interval. The AMD group represents control individuals with no record of a movement disorder prescription history. The L-DOPA AMD group represents all individuals with an International Classification of Diseases, 9th Revision (ICD-9) code for AMD who also had a prescription history for L-DOPA. Neovascular (NV) AMD represents individuals with the specific ICD-9 code 362.52 but no history of L-DOPA prescriptions. The L-DOPA and NV AMD group is similar except that the individuals had a history of L-DOPA prescriptions. The dopamine agonist group represents individuals who had a prescription history for various movement disorder drugs, largely dopamine agonists. All groups were significantly different from the AMD control. *P < .001. (B) Odds ratio analysis to determine whether the drugs alter the probability of developing AMD. All values are below 1 (the reference being the control: AMD with no L-DOPA or movement disorder prescription history), indicating a reduction in the risk of developing AMD, either in general or specifically NV AMD. Each reduction in risk was significant. *P < .001.
Gaucher Disease with Mesenteric Lymphadenopathy: A Case with 13-year Follow-up

Mesenteric lymphadenopathy is a rare manifestation of Gaucher disease (GD), with only 26 cases reported worldwide, and its outcome remains largely unknown. In this manuscript, we describe a 17-year-old girl with GD who has been treated with standard enzyme replacement therapy (ERT) for 16 years. The follow-up of her mesenteric lymphadenopathy began 13 years ago, which is one of the longest follow-ups for this condition worldwide.

Clinical Practice

The patient had been definitively diagnosed with GD at the age of 1 year based on the identification of Gaucher cells in her bone marrow and decreased leukocyte β-glucosidase activity. Abdominal ultrasonography at that time revealed hepatosplenomegaly but no lymphadenopathy. Standard ERT had been initiated immediately and maintained since then, and no manifestations had recurred. Ultrasonography revealed asymptomatic abdominal lymph nodes (LNs) up to 3 cm in diameter at the root of the mesentery when the patient was 4 years of age. Thereafter, routine follow-up showed increases in the size, number, and calcification of the LNs.

Two months earlier, the patient had presented with mild edema of both ankles, without other discomfort. On physical examination, a large, lobulated and hard mass was palpated in the right lower quadrant, and mild edema was present in both ankles. Routine blood examinations showed a serum albumin of 29.6 g/L (reference range 35-45 g/L) and were otherwise normal. A tuberculin test was negative. Abdominal computed tomography (CT) confirmed a large mass of LN origin at the root of the mesentery, 7.7 cm × 10.2 cm × 8.5 cm in size. Large patches of calcification were detected inside and around the large LN. Multiple LNs, with or without calcification, were also detected at the root of the mesentery [Figure 1]. The superior mesenteric vessels were encased by the large LN; the vein was nearly occluded, whereas the artery was not obviously stenosed. Obvious edema of the intestinal walls was noted. The major veins draining the lower limbs were not affected. Resection or biopsy was not performed because of the explicit clinical diagnosis and the high risks of surgery. Six months later, the edema had disappeared without medication, while abdominal CT showed no obvious changes.

Mesenteric lymphadenopathy is a rare manifestation of GD. Abdelwahab et al. [1] reported the largest cohort so far, including eight cases of GD with lymphadenopathy. Three of them underwent an LN biopsy, which revealed no malignancy but infiltration of Gaucher cells; such infiltration was confirmed in all other reports. This indicates that mesenteric lymphadenopathy is likely a benign complication of GD. Misdiagnosis as a malignant tumor and incorrect treatment can be avoided with increased recognition of this rare manifestation. Among all reported cases, surgical resection of enlarged LNs in GD was attempted in only three patients and has never been completed, due to high risks. [1-3] Thus, the indications for resection should be carefully evaluated.
We suggest that surgical intervention should be considered only when malignancy is suspected or when severe space-occupying effects are observed. More cases and longer follow-up are required to further investigate the outcomes of this condition.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
2018-04-03T00:17:01.159Z
2016-10-01T00:00:00.000
{ "year": 2016, "sha1": "d84fc1478b16d6552a37b0c8882f961781f2ebc1", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0366-6999.191825", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d84fc1478b16d6552a37b0c8882f961781f2ebc1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244347942
pes2o/s2orc
v3-fos-license
An Open-Source Low-Cost Mobile Robot System with an RGB-D Camera and Efficient Real-Time Navigation Algorithm

Currently, mobile robots are developing rapidly and are finding numerous applications in industry. However, several problems remain related to their practical use, such as the need for expensive hardware and high power consumption levels. In this study, we build a low-cost indoor mobile robot platform that does not include a LiDAR or a GPU. Then, we design an autonomous navigation architecture that guarantees real-time performance on our platform with an RGB-D camera and a low-end off-the-shelf single-board computer. The overall system includes SLAM, global path planning, ground segmentation, and motion planning. The proposed ground segmentation approach extracts a traversability map from raw depth images for the safe driving of low-body mobile robots. We apply both rule-based and learning-based navigation policies using the traversability map. Running sensor data processing and other autonomous driving components simultaneously, our navigation policies perform rapidly at a refresh rate of 18 Hz for control commands, whereas other systems have slower refresh rates. Our methods show better performance than current state-of-the-art navigation approaches under limited computational resources, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in an indoor environment.

I. INTRODUCTION

Recently, mobile robots have been navigating cluttered environments such as buildings and roads. Implementation of these devices in various industrial fields has been accelerating [1]. Depending on the design and purpose, they are utilized in various areas, such as for delivery, guidance, searches, and inspections. Therefore, robot navigation in crowded environments has been studied as a key topic in many research fields. Furthermore, the demand for mobile robots is increasing not only in industrial fields but also for individual uses. Examples include social robots, home service robots, and cleaning robots. However, expensive hardware and high power consumption are hindering the practical application of mobile robots [2].

For safe driving, the ability to recognize traversable areas and to detect obstacles is critical in an advanced motion planning strategy. LiDARs have been used as a dominant sensor to ensure accurate distance measurements and have been combined with cameras for deep learning recognition. However, LiDARs are far more expensive than other sensors and thus increase the price of the robot. Simultaneous localization and mapping (SLAM) [3], mainly used for indoor positioning, requires a high-performance computer. Meanwhile, with the rapid development of deep reinforcement learning (DRL), numerous studies have focused on the use of neural networks for robot navigation [4]-[6]. Although the inference of a DRL model can be done in a short time, environment recognition requires heavy iterations for sensor data processing. This must be supported by a high-performance computer, and the process ends up draining the battery more rapidly. Furthermore, software components for environmental interactions, such as image-based object detection, should accompany the navigation components to serve a socially interactive robot. These image-based recognition algorithms rely heavily on the graphics processing unit (GPU).
Therefore, navigation algorithms with low GPU utilization are greatly welcomed in mobile robot systems, even if they have an on-board GPU. We build an open-source low-cost autonomous mobile robot system, without the need for a high-performance GPU or LiDAR, that successfully overcomes the aforementioned problems. We also propose a real-time navigation approach designed for low-cost indoor mobile robots. Only an RGB-D camera is used for environment recognition, and real-time performance is achieved on a low-end single-board computer (SBC) without external computing aids. The robot can build a point cloud map and perform real-time positioning by means of lightweight RGB-D SLAM. We use a modified A* algorithm that generates a stable path while maintaining sufficient distances from adjacent obstacles. In addition, we propose a ground segmentation approach that provides a compact traversability map in real time using an RGB-D camera. This approach enables the robot to navigate among pedestrians safely. We demonstrate the feasibility of our ground segmentation method using both rule-based and learning-based navigation policies with the traversability information.

All of the software for fully autonomous driving is integrated on our mobile robot platform, DPoom (see Fig. 1). For human-computer interactions, friendly expressions are displayed on the front screen. It also has an appropriate exterior design for educational and socially interactive purposes. We deployed DPoom as a social robot in a crowded residential environment. All of our materials, including the hardware and software, are released under an open-source license.

II. RELATED WORK

System design of a robot is important, as the design determines its purpose, function and price. Most traveling mobile robots are designed for mission automation. For full automation, several essential functions should be realized synchronously with cross-interaction capabilities. In modern autonomous driving, the task is developed with separately divided modules that are integrated in a pipeline. Localization is the most basic module for all control tasks with closed-loop feedback. SLAM is generally used for indoor robot localization. In order to drive to a certain location in a wide area, the robot should generate a trajectory through global path planning via, for instance, the A* algorithm [7]. When obstacles not on the prior map or moving objects appear on the planned trajectory, the robot avoids them by motion planning.

Collision avoidance and safe navigation are particularly important for stable robot operation [8]. Reciprocal velocity obstacles [9] and optimal reciprocal collision avoidance (ORCA) [10] have been commonly used for dynamic robot navigation. However, given that they are based on handcrafted functions, they do not work well in more complex environments. Recent works applied DRL to navigation in crowds [4], [5], [11]. These approaches assume that the robot is aware of objects in a 360° field of view (FOV) and that it accurately measures the positions of objects in real time with LiDAR. Contrary to those assumptions, point cloud processing is computationally expensive and lowers the decision frequency of the navigation algorithm when running on an onboard computer. A slow decision often causes frozen-robot situations or even collisions. Therefore, it is necessary to choose a navigation policy that guarantees real-time execution according to the robot's computational performance.
Furthermore, if a 360° FOV is procured under the assumptions above, the price of the required sensor increases greatly. This also places negative constraints on the mechanical design and on the design of the robot body's exterior components. Meanwhile, successful navigation is coupled with the ability to estimate traversable areas, not merely depending on the navigation policy. Recent ground segmentation methods based on a convolutional neural network (CNN) incur a high computational cost [12], [13].

In this paper, we use an RGB-D camera, which is generally much less expensive than a 3D LiDAR. The depth data provide robustness for localization and the direct distances to obstacles without estimating them with heavy algorithms. The robot body design was convenient because the sensor does not need to be mounted on top and an empty layer is not required in the middle of the body for laser range scanning. Real-time navigation is possible using our RGB-D ground segmentation approach.

A. DPoom indoor mobile robot platform

DPoom (see Fig. 1) is an open-source indoor autonomous mobile robot designed to interact with people while traveling around indoor environments. It was developed with a focus on three factors: cost performance, human-robot interaction, and ease of use. DPoom is built for fully autonomous driving using only a low-end SBC (LattePanda Alpha 864, LattePanda) and an RGB-D camera (Realsense D435i, Intel). The low-end SBC consists of an Intel dual-core m3-8100y processor, 8 GB RAM and Intel HD 615 on-board graphics. The controller board (OpenCR, ROBOTIS) for our system is embedded with the Robot Operating System (ROS) [14] and has a nine-axis IMU sensor (MPU9250). The robot uses the front RGB-D camera D435i with a 1280×720 resolution to recognize the environment. The camera has an 85.2° × 58° FOV. Two actuators (Dynamixel XM430-W210-T, ROBOTIS) are used for wheel driving, and two ball casters at the bottom of the rear structure support differential driving. The software interface is built on Ubuntu-based ROS Kinetic. The hardware is 33.0×33.5×35.0 cm (width×depth×height), weighs approximately 4 kg, and can achieve speeds up to 0.26 m/s.

B. 3D SLAM

Mapping must be performed before deploying a robot to an unknown area. Mapping enables the robot to plan its trajectory to the goal and to perform localization to estimate its pose. 3D mapping using RGB-D data is known to be capable of higher accuracy than monocular vision-based mapping, as it provides additional robust features for scan matching. We used RTAB-Map [15] to perform mapping and localization simultaneously with RGB-D data and wheel odometry. RGB image frames and depth frames were obtained from the Intel Realsense D435i with Intel Realsense SDK 2.0. Wheel-based odometry was calculated in the OpenCR controller using data from the built-in IMU sensor and motor encoders. Environments are represented as a point cloud map or a grayscale occupancy grid map after SLAM. Fig. 2a presents the result when mapping around a building hallway. Point cloud maps are saved in local memory and are used for scan matching during localization. The projected occupancy grid map can be used as prior knowledge for global path planning. The localization results are synchronously published to the ROS middleware in our system.

C. Global path planning

In a 2D environment, deterministic planning can generate more accurate paths in less time than a probabilistic path planner [16].
In this study, we use the A* algorithm [7], a deterministic path planning algorithm, as the global path planner. The A* algorithm searches for the path by adding information about the goal node to the Dijkstra algorithm [17], which is the most basic path planner. By introducing the distance cost d(n) into the cost calculation of the A* algorithm, i.e., f(n) = g(n) + h(n) + d(n) (Equation 1), it is possible to generate a path with more stability, maintaining a proper distance from obstacles [18]. We used the fast marching method (FMM) to calculate the distance cost, as this approach solves the boundary value problem of the Eikonal equation [19]. The distance cost is designed to have a larger value as the node becomes closer to nearby obstacles. Hence, the modified A* algorithm generates a smoothed path with a tendency to keep a distance from nearby obstacles.

D. Fast raw depth image ground segmentation

When a low-floor mobile robot follows a globally planned path, it is necessary not only to detect obstacles for collision avoidance but also to recognize whether the floor surface is traversable. Along with the rapid growth of the field of deep learning, research on ground traversability estimation with RGB-D cameras has also been conducted [20]. Yang et al. demonstrated the robustness of CNN-based models [12]. Paigwar et al. presented a real-time preprocessing method and a CNN model for application to robot navigation [13]. However, these methods require GPUs for real-time operation. In addition, deep-learning-based approaches can take floor thresholds into account, but they are unable to adjust the height of the ground threshold on a deployed model depending on the situation. Floor thresholds are disastrous to low-floor robots, as they can cause a malfunction or cause the robot to overturn. Therefore, there is a need for an analytic algorithm capable of adjusting the threshold height according to the robot platform and driving situation. Mathematically derived estimation algorithms have been developed at the same time. For a mobile robot system designed at a low price point, the algorithm must be computationally efficient and must support real-time implementation. Holz et al. showed that real-time plane segmentation is possible on a CPU by clustering and merging points in normal space [21]. However, this requires an additional analysis to determine whether each plane area is actually drivable when considering physical constraints such as the vehicle's width and rotation angle.

Here, we propose a concept known as milestone over rendered paths (MORP), a real-time ground segmentation algorithm that can robustly recognize the forward traversal area with an RGB-D camera and that is designed to avoid obstacles effectively. With this algorithm, motion planning can be solved with a very small amount of computation by separating the area in front of the robot into virtual lanes and using the information of the closest non-traversable point recognized in each lane. Ground segmentation is performed for the center path of each lane area, and the first encountered non-traversable point is saved as a dead-end. This procedure is similar to 2D line extraction for fast segmentation of 3D point clouds [22]. A raw depth image is used for ground segmentation, which is converted from a depth point cloud into a 2D gray-scale image. Holz et al. showed that considering pixel neighborhoods instead of spatial neighborhoods leads to a significant increase in point cloud processing speed at the cost of a small degree of accuracy [23].
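To make the global planner of Section C concrete, the following is a minimal sketch of grid-based A* with an added obstacle-distance cost d(n). It is not the authors' implementation: the weight w_dist is hypothetical, and a Euclidean distance transform stands in for the FMM solution of the Eikonal equation (a dedicated FMM solver such as scikit-fmm could be substituted).

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def plan_path(grid, start, goal, w_dist=5.0):
    """A* on a binary occupancy grid (1 = obstacle, 0 = free)."""
    # Distance from each free cell to the nearest obstacle; the cost
    # d(n) grows as a node approaches an obstacle, smoothing the path.
    clearance = distance_transform_edt(grid == 0)
    d_cost = w_dist / (clearance + 1.0)

    def h(p):  # Euclidean heuristic toward the goal
        return float(np.hypot(goal[0] - p[0], goal[1] - p[1]))

    g_score = {start: 0.0}
    came_from = {}
    open_set = [(h(start), start)]
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:  # reconstruct the path back to the start
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0):
                # f(n) = g(n) + d(n) + h(n), per the modified cost
                g_new = g_score[node] + 1.0 + d_cost[nxt]
                if g_new < g_score.get(nxt, float("inf")):
                    g_score[nxt] = g_new
                    came_from[nxt] = node
                    heapq.heappush(open_set, (g_new + h(nxt), nxt))
    return None  # no path found
```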
It is possible to represent a large lane area by performing single-column segmentation in a raw depth image. The score of each pixel is a weighted sum of the gradients from both the assigned start pixel depth and the adjacent pixel depth relative to the current pixel depth. Let (w_1, w_2) denote the weighting factors of the gradients. The first pixel exceeding the score limit will be the dead-end in that area. Let (d_(i,j),r, d_(i,j),z) denote the actual position from the sensor of pixel P_(i,j) in the N × M depth image, as shown in Fig. 3. Let n denote the number of virtual lanes to scan. The dead-ends D(n) consist of the set of the largest indices segmented as ground in each column j, where s_j is the start index of the ground on column j, and C is the empirically determined threshold for segmenting a pixel as ground. When this process is done sparsely over the entire image, milestones are generated for a traversability map containing robust but very compact data. It is easy to implement parallel computing because the process is explicit and the execution time is virtually consistent on each lane. The implementation results are visualized in Fig. 3. Denser segmentation will be done on the front area of the robot as n becomes larger. However, there is a trade-off relationship between the density and the computation cost. An appropriate n should be selected in consideration of the width and the driving performance of the robot.

E. Robot navigation based on traversable areas

Given that each dead-end in D(n) contains direct information about non-traversable areas, collision avoidance is possible with simple rule-based decisions. Avoidance in this case works in the same manner as a vehicle lane change on a road. On the other hand, this type of data pre-processing to obtain compact and representative features D(n) can be used as the observation space (o) for reinforcement learning. One powerful and sample-efficient approach to reinforcement learning is imitation learning. Behavior cloning (BC) in particular has been successfully used in many robotics applications given its simplicity and efficiency [24]. In this work, we also implement a neural network policy based on BC to verify the feasibility of our lightweight ground segmentation method aside from the rule-based navigation policy. Both policies are evaluated in Section IV.

We used a feedforward fully-connected neural network for imitation learning. Here, we denote the robot position relative to the goal point as x and y, and denote the robot direction as θ. The robot state is defined as s = [x, y, θ]. The action command is defined as a = [v_x, ω_z], representing the longitudinal velocity and angular velocity of the robot, respectively. Then, we can consider a policy π̂ that takes x_t = [s_t, o_t, a_t] at time t as input and returns the next action a_{t+1} as output. If the policy is given only single-time-step information, capturing high-level intentions from the human demonstrations may be ambiguous. Therefore, we provide the state, observation and action history as the input of the policy. In this case, the observation history contains implicit information about the velocities of the moving obstacles. This approach has been introduced in applications to helicopters, autonomous vehicles, and quadruped legged robots [26]-[28]. The length of the history is denoted as H. The overall structure of our neural network policy a_{t+1} = π̂(x_{t-H+1}, x_{t-H+2}, ..., x_t) is shown in Table I.
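The single-column dead-end scan described at the start of this passage can be sketched as follows. Since the exact scoring equation is not reproduced here, the weights w1 and w2, the threshold value, the per-pixel normalization and the scan direction are illustrative assumptions rather than the authors' formula.

```python
import numpy as np

def morp_dead_ends(depth, lane_cols, s_j, w1=0.5, w2=0.5, C=0.15):
    """Scan virtual-lane center columns of a raw depth image.

    depth:     (N, M) float array, metric depth per pixel
    lane_cols: column indices j of the n virtual lanes
    s_j:       row index where the ground starts (near the robot)
    Returns {column: dead-end row index} -- the first pixel in each
    lane whose score exceeds the ground threshold C.
    """
    dead_ends = {}
    for j in lane_cols:
        start_d = depth[s_j, j]
        dead_ends[j] = 0                    # default: whole column traversable
        for i in range(s_j - 1, -1, -1):    # scan upward (away from the robot)
            # Weighted sum of the gradient from the lane's start pixel
            # and from the adjacent pixel, per the MORP description.
            grad_start = abs(depth[i, j] - start_d) / max(s_j - i, 1)
            grad_adj = abs(depth[i, j] - depth[i + 1, j])
            score = w1 * grad_start + w2 * grad_adj
            if score > C:
                dead_ends[j] = i            # first non-traversable pixel
                break
    return dead_ends
```

Because each lane is scanned independently with a fixed amount of work, the loop parallelizes trivially, which matches the consistent per-lane execution time noted above.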
Finally, all modules are synchronously integrated into the autonomous driving system. In order to navigate to the goal in a complex environment, the ideal trajectory should be generated from global path planning as part of the proposed method. The orientation toward the waypoint should be updated while avoiding obstacles via localization. A real-time traversability analysis is required to avoid local obstacles successfully. The overall system architecture is shown in Fig. 4.

A. 3D SLAM

We tested our SLAM in a residential lobby on the DPoom platform. The 3D point cloud map was saved in local memory for localization. Fig. 5a depicts the obtained occupancy grid map. We added artificial grass and tables as confined areas on the map. Compared with Fig. 5b, the map shows that a loop closure was performed to correct distortion and elevation issues.

B. Global path planning

The modified A* algorithm uses a binarized 2D occupancy grid map, which is also used to obtain a distance cost map by FMM. The modified A* algorithm generates a path by calculating the fitness cost based on the binarized map and the distance cost map. We compared the paths generated by the original A* algorithm and the modified method. These results are shown in Fig. 6. The original A* algorithm generates a path close to obstacles, whereas the modified method generates a smoothed path while keeping its distance from the adjacent obstacles.

If the history of a given trajectory is too long, it will cause overfitting. In contrast, if the history is too short, it becomes difficult for the model to find the optimal policy. We collected a human demonstration dataset in a simulated environment that included static and dynamic obstacles. We used Gazebo [29] simulation, and we implemented the hardware and driving characteristics of DPoom in the simulation. The RGB-D camera specifications are described in the Gazebo plugin for a realistic simulation. The demonstration data were collected in environments containing [0, 1, ..., 5] moving obstacles and [0, 3, 5] static obstacles (the dataset is available at https://github.com/SeunghyunLim/Dpoom_gazebo). Moving obstacles used the DPoom 3D model as well and were controlled by ORCA [10]. We trained the neural network policy using the dataset with different H values. We used the mean squared error (MSE) loss and Adam optimization for training. The results are shown in Fig. 7, demonstrating the fastest convergence speed and lowest test loss when H = 4 (in red).

C. Training neural network navigation policy

For further training, we used the DAgger [30] method, which is a basic on-policy approach to imitation learning [24]. During the training procedure, we leveraged the previously aggregated dataset as the initial dataset for a warm start. Moreover, we used the pre-trained policy network as the initial policy for bootstrapping rather than using a randomly initialized policy. This strategy can also be found in recent studies of imitation learning, and its effectiveness has been proven [31], [32]. The best policy model during training is saved and used in the experiments.

D. Fast raw depth image ground segmentation and robot navigation

In this section, we denote by 'MORP-RB' our rule-based navigation policy and by 'MORP-IL' our neural network policy π̂ trained in the imitation learning manner, coupled with ground segmentation.

1) Training existing policies: We implemented several existing state-of-the-art navigation methods in Gazebo for a comparison: ORCA [10], CADRL [4], and SARL with Local Map [6].
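For reference, a minimal PyTorch sketch of the behavior-cloning policy and training loop described above follows. The hidden-layer sizes and the lane count are assumptions, since Table I is not reproduced here; H = 4 follows the tuning result, and MSE loss with Adam matches the text.

```python
import torch
import torch.nn as nn

# One input frame x_t = [s_t, o_t, a_t]: 3 state dims, n lane dead-ends,
# 2 action dims. The lane count N_LANES is an illustrative assumption.
H, N_LANES = 4, 16
FRAME = 3 + N_LANES + 2

# Hidden sizes are placeholders -- Table I is not reproduced here.
policy = nn.Sequential(
    nn.Linear(H * FRAME, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),              # a_{t+1} = [v_x, w_z]
)

def train_bc(dataset, epochs=50, lr=1e-3):
    """Behavior cloning: regress expert actions with MSE loss and Adam."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x_hist, a_expert in dataset:   # x_hist: (batch, H * FRAME)
            opt.zero_grad()
            loss = loss_fn(policy(x_hist), a_expert)
            loss.backward()
            opt.step()
    return policy
```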
The obstacles were detected with the RGB-D camera and fed into the policies. The motion commands of the policy were published to the integrated system via ROS to actuate the motors. The DRL policies designed for navigation in dynamic environments, in this case CADRL and SARL, were trained using the parameters suggested by their authors [4], [6]. The limited FOV and depth range of the RGB-D camera were applied to the observable area of the agents. In cases where no object was detected by the DRL agents, we fed a dummy pedestrian with zero velocity and radius into the network, which did not affect the navigation [1]. ORCA served as the policy of the moving obstacles.

2) Gazebo simulation results: The point cloud of the RGB-D camera was downsampled by voxel grid filtering, and detected obstacles were fed into the ORCA and DRL policies as observed inputs. We evaluated the runtime of each navigation method by measuring the one-step time to return a motion command. We split the runtime into three parts: the communication delay on ROS, the depth data pre-processing time (abbreviated as "Depth."), and the decision delay of the navigation policy (abbreviated as "Nav."). It was tested on a laptop with an Intel Core i7-8565U CPU. The results are the average of more than 100 iterations. Measurements showed that our methods are up to 20 times faster than the other methods compared here (see Table II). Unlike existing methods that require CPU- and memory-intensive point cloud preprocessing, our approaches use exteroception optimized for indoor mobile robots, greatly reducing the processing time. The compact information after ground segmentation also reduces the complexity of motion planning.

We compared the following metrics for a performance evaluation: the success rate, collision rate, and average time to reach the goal. Tests were conducted separately in two different randomized environments, with and without static obstacles. Goal distances from our robot were sampled from the interval [7, 11] m. The first environment had only pedestrians and no static obstacles. Moving obstacles used DPoom 3D models of identical sizes. Their start and goal points were empirically determined to avoid collisions with each other. There were one to five moving obstacles that were visible to all agents, but they were not able to perceive our robot (see Fig. 8a). In the second environment, unit-cube and cylinder-shaped obstacles, each with a 0.5 m radius, were added to the first environment, and the number of obstacles was varied from one to nine (see Fig. 9a). We modeled ten worlds for each case and ran the test three times per world. Accordingly, 60 tests were conducted for each navigation method in total.

MORP-IL shows the highest success rate while retaining a short time to reach the goal in both environments (see Table III). The collision rate of CADRL is lower than that of our method with static obstacles because it tends to take large detours, causing it to spend twice as much time compared to the others. Apart from CADRL, our methods also show the lowest collision rate because they guarantee real-time execution. The collision rate of ORCA should be zero by design in an ideal 2D simulation with holonomic constraints, but collisions occurred due to slow decisions by the robot and were occasionally caused by pedestrians outside of the robot's FOV. A slow refresh rate is also a disadvantage when the robot has non-holonomic constraints, because the robot is unable to immediately rotate or move backwards.
ORCA assumes that all pedestrians are observing the robot and avoiding it actively regardless of their FOV, which is not practical in the real world. This assumption causes the ORCA agents to take less time to reach the goal in the first environment and causes collisions with pedestrians who cannot observe the robot. Moreover, SARL has the longest one-step runtime due to its complex model architecture (see Table II). Akin to ORCA, the slow decisions of SARL resulted in a high collision rate.

3) Real-world experiments: Our navigation method was integrated into an autonomous driving system on our DPoom platform via ROS. We deployed the robot in the DGIST student dormitory lobby. For human interaction, tiny-YOLOv3 [33] was used for object detection. The robot was able to estimate its pose by localization and navigate to the desired locations in the wide, crowded environment without collisions. Facial emotions were displayed on the front screen depending on the situation. Our robot was able to interact in a friendly manner with people as a social robot (see Fig. 10).

V. CONCLUSION

In this paper, we built an open-source low-cost mobile robot platform with a single RGB-D camera. In addition, we designed a software architecture for a fully autonomous navigation system for a low-cost mobile robot without LiDARs or high-end computers. For global path planning, we developed the modified A* algorithm, applying FMM to generate collision-free trajectories. For motion planning, we proposed a new RGB-D ground segmentation method that represents the traversability of the front area in the form of compact information, which is well suited to mobile robots. This enables depth data processing in real time on a low-end SBC. We combined this idea with both rule-based and learning-based motion planners, validating that our methods can successfully navigate in crowded environments. Unlike current state-of-the-art DRL navigation approaches, which slow down when executed simultaneously with other autonomous driving functions, our approaches operated in real time at 18 Hz. We also demonstrated that our methods had lower collision rates and higher success rates in a 3D simulation compared to the other methods. Finally, we deployed our autonomous driving system on our platform in a real-world residential lobby, proving the applicability of the proposed system. We tackled practical issues associated with current mobile robots and contributed to the universal use of this technology through the presentation of a price-efficient mobile robot.
2021-03-05T02:41:24.237Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "00e5bf23a4df10f6147a86e2c1aa1ee95b85b93d", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09970319.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "d3bd788afd4a52753108b4606c813606f6f9e392", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
256080706
pes2o/s2orc
v3-fos-license
On Poincaré-Birkhoff-Witt basis of quantum general linear superalgebra

We give a detailed derivation of the commutation relations for the Poincaré-Birkhoff-Witt generators of the quantum superalgebra $\mathrm U_q(\mathfrak{gl}_{M|N})$.

INTRODUCTION

The functional relations are an effective method for the investigation of quantum integrable systems. To derive them it is convenient to use the quantum algebraic approach. Previously, what we call a quantum algebra was usually called a quantum group. In fact, this object is an associative algebra, which in a sense is a deformation of the universal enveloping algebra of a Lie algebra. Nowadays, the term quantum algebra is more commonly used, and we adhere to this terminology. The general notion of a quantum algebra U_q(g), used in the present paper, was proposed by Drinfeld and Jimbo [1,2] for the case when g is a Kac-Moody algebra with a symmetrizable generalized Cartan matrix.

The derivation of the functional relations based on the quantum algebraic approach was given in the papers [3,4,5,6,7] for the loop Lie algebra g = L(sl_2), in the papers [8,9,7] for g = L(sl_3), and in the paper [10] we gave the derivation for g = L(sl_M) with an arbitrary M.(1) The derivation of the functional relations given in the papers [7,10] is based on the results of the papers [12,13,14]. In the paper [12], using the commutation relations for the Poincaré-Birkhoff-Witt generators of the quantum algebra U_q(gl_M) presented in the paper [15], we found their action in the Verma U_q(gl_M)-module. Using some limiting procedure, we found a set of q-oscillator modules over the positive Borel subalgebra of U_q(gl_M). These modules, via Jimbo's homomorphism, were used to construct the corresponding modules over the positive Borel subalgebra of U_q(L(sl_M)), which are used to construct Q-operators.(2) Finally, we derived the corresponding functional relations in the paper [10]. Here, to analyze the tensor products of the q-oscillator modules, we used their ℓ-weights found in the papers [13,14].

(1) See also the paper [11], where some functional relations for g = L(sl_M) were presented without derivation. (2) For the terminology used for integrability objects, we refer to the papers [5,7,10].

By generalizing the defining relations of quantum algebra appropriately, one arrives at quantum algebras associated with Lie superalgebras [16]. It would be interesting to generalize the procedure of constructing the functional relations to the case of quantum superalgebras. It seems that the right choice is to start with the quantum superalgebra U_q(L(sl_{M|N})). Here the very first step should be the derivation of the commutation relations for the Poincaré-Birkhoff-Witt generators of the quantum algebra U_q(gl_{M|N}). Actually, the commutation relations for this case were already presented in the papers [18,19,20] without proof. There is some disagreement between these papers. This fact prompted us to rederive the results of the papers [18,19,20].

The structure of the paper is as follows. In section 2 we recall the necessary facts about the Lie superalgebra gl_{M|N}. In section 3 we define the quantum superalgebra U_q(gl_{M|N}). The detailed proof of the commutation relations is given in section 4. We fix the deformation parameter ℏ in such a way that q = exp(ℏ) is not a root of unity, and assume that q^ν = exp(ℏν) for any ν ∈ C.
We define q-numbers by the equation [ν]_q = (q^ν − q^(−ν)) / (q − q^(−1)).

We fix two positive integers M and N such that M, N ≥ 1 and M ≠ N, and denote by C^(M|N) the superspace formed by (M + N)-tuples of complex numbers with the following grading. An element of C^(M|N) is even if its last N components are zero, and odd if its first M components are zero. For simplicity, we denote the Lie superalgebra gl(C^(M|N)) as gl_(M|N). We denote by v_i, i = 1, ..., M + N, the elements of the standard basis of C^(M|N). The elements E_ij ∈ gl_(M|N), i, j = 1, ..., M + N, form a basis of the Lie superalgebra gl_(M|N); it is clear that the matrices of E_ij with respect to the standard basis of C^(M|N) are the usual matrix units. As the Cartan subalgebra k of the Lie superalgebra gl_(M|N) we take the subalgebra spanned by the elements K_i = E_ii, i = 1, ..., M + N, which form its basis. Hence, E_ij, i ≠ j, is a root vector corresponding to the root Ξ_i − Ξ_j, and the root system of gl_(M|N) is the set of these roots. We choose a system of simple roots Π; the corresponding system of positive roots is ∆+, and the corresponding system of negative roots is ∆− = −∆+. We define a strict partial order ≺ on k* as follows. Given α, β ∈ k*, we assume that β ≺ α if and only if α − β is a sum of positive roots. We also define a nondegenerate symmetric bilinear form (· | ·) on k*; the relations for it used below cover, in fact, all nonzero cases.

QUANTUM SUPERALGEBRA U_q(gl_(M|N))

We define the quantum superalgebra U_q(gl_(M|N)) as a unital associative C-superalgebra generated by elements which obey the corresponding defining relations; we use capital letters to distinguish between generators of the quantum superalgebra U_q(gl_(M|N)) and the quantum superalgebra U_q(L(sl_(M|N))). The Z_2-grading of the quantum superalgebra U_q(gl_(M|N)) is defined on the generators. Before giving the explicit form of the defining relations, we introduce the notion of the q-supercommutator. The abelian group Q generated by the simple roots is called the root lattice of the Lie superalgebra gl_(M|N), and we endow U_q(gl_(M|N)) with a Q-grading. Now, for any elements X ∈ U_q(gl_(M|N))_α and Y ∈ U_q(gl_(M|N))_β we define the q-supercommutator, with different powers of q entering depending on whether α, β ≺ 0, or α ≺ 0 and β ≻ 0, or α ≻ 0 and β ≺ 0. The defining relations of the quantum superalgebra U_q(gl_(M|N)) have the form given in [16]. There are also Serre relations. Let us rewrite the defining relations (3.5)-(3.7) in a more familiar form; the relations (3.5), and the relations (3.6)-(3.7), are each equivalent to more explicit equations on the generators.

An element a of U_q(gl_(M|N)) is called a root vector corresponding to a root γ of gl_(M|N) if a ∈ U_q(gl_(M|N))_γ. In particular, E_i and F_i are root vectors corresponding to the roots α_i and −α_i. It is possible to construct linearly independent root vectors corresponding to all roots of gl_(M|N). To this end, being inspired by M. Jimbo [22], we introduce elements E_ij and F_ij, 1 ≤ i < j ≤ M + N, defined recursively. It is clear that the vectors E_ij and F_ij correspond to the roots α_ij and −α_ij, respectively. These vectors are linearly independent, and together with the elements q^X, X ∈ k, are called the Cartan-Weyl generators of U_q(gl_(M|N)).
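For orientation, one common convention for the q-supercommutator and the recursive construction of the composite root vectors in algebras of this type is the following; conventions for signs and powers of q vary between papers, and the paper's own normalization may differ:

```latex
% q-supercommutator of root-graded elements X \in U_\alpha, Y \in U_\beta
% (p(\cdot) denotes the parity; a common, not universal, convention)
[X, Y]_q = X Y - (-1)^{p(X)\, p(Y)} \, q^{(\alpha \mid \beta)} \, Y X ,
% composite root vectors, defined recursively for j - i > 1
E_{i j} = [\, E_{i, j-1}, \; E_{j-1, j} \,]_q , \qquad
F_{i j} = [\, F_{j-1, j}, \; F_{i, j-1} \,]_q .
```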
It appears that the ordered monomials constructed from the Cartan-Weyl generators form a Poincaré-Birkhoff-Witt basis of U_q(gl_(M|N)). In this paper we choose the following total order for monomials. First, we endow the set of pairs (i, j), where 1 ≤ i < j ≤ M + N, with the lexicographical order. It means that (i, j) ≺ (m, n) if i < m, or if i = m and j < n. (Note that if we define an ordering of positive roots so that α_ij ≺ α_mn if (i, j) ≺ (m, n), we get a normal ordering in the sense of [23,24]; see also [25].) Now we say that a monomial is ordered if it has the form (4.3), where (i_1, j_1) ⪯ ··· ⪯ (i_r, j_r), (m_1, n_1) ⪯ ··· ⪯ (m_s, n_s), and X is an arbitrary element of k. In the present paper we only show that any monomial can be written as a finite sum of monomials of the form (4.3). To prove that they form a basis of U_q(gl_(M|N)), one can use arguments similar to those used in the paper [15] for the case of the quantum algebra U_q(gl_M). We present the relations necessary for ordering as a sequence of propositions.

First consider the ordering of q^X with E_ij and F_ij. It follows from the defining relation (3.2) that the first equation of (4.4) is true. The proof of the second equation is similar.

Now we consider the ordering of the root vectors E_ij. The conditions defining the branches are given in Table 1, where we also put the corresponding relations.

Proof. The statement of the proposition is a direct consequence of the Serre relations (3.9).

Proposition 4.3.

Proof. The proposition can be proved by induction over n. For n = j + 1 we have just the definition (4.2). Assume that the statement of the proposition is valid for some given n > j. Using this assumption and proposition 4.2, we get that the first equation of the proposition is true. The second one can be proved in the same way. Using the first equations of (3.11) and (3.12), we obtain that equation (4.7) is valid for any admissible i, j and m.

For any ((i, j), (m, n)) ∈ C_III one has equations (4.10) and (4.11).

Proof. Let us consider the case ((i, j), (m, n)) ∈ C_I and prove equation (4.8). For i = M, the equation (4.12) is equivalent to the first of the Serre relations (3.7). Thus, for ((i, j), (m, n)) ∈ C_I, equation (4.8) is true. Equation (4.9) can be proved in the same way. In the case when ((i, j), (m, n)) ∈ C_III, one can prove equations (4.10) and (4.11) in a similar way. It follows from the above proposition that, for ((i, j), (m, n)) ∈ C_I, the corresponding ordering relation holds.

Note that the quantum superalgebra U_q(gl_(M|N)) has two natural subalgebras isomorphic to U_q(gl_M) and U_q(gl_N). The former is generated by E_i, F_i, i = 1, ..., M − 1, and q^X, where X belongs to the linear span of the elements K_i, i = 1, ..., M, and the latter is generated by E_i, F_i, i = M + 1, ..., M + N − 1, and q^X, where X belongs to the linear span of the elements K_i, i = M + 1, ..., M + N. It is clear that [i] + [j] = 0 iff E_ij belongs to one of these two subalgebras. Each of them has no zero divisors, see the paper [15]. Hence, for any element E_ij belonging to them one has E_ij^2 ≠ 0. In other words, E_ij^2 = 0 exactly when i ≤ M < j.

Proof. In fact, we should demonstrate that if i ≤ M < j, then E_ij^2 = 0. First, we show that this holds for all j > M. It is certainly the case, at least for j = M + 1. Using the fact that q_j = q^(−1) for any j > M, we obtain the required equality. Further, we assume that (4.16) is true for some 1 < i < M and M < j ≤ M + N. Here we take into account that d_i = 1 for any i < M.
It follows from the first relation of (4.14) that, in a more explicit form, multiplying this equation from the left and from the right by E_ij and using the result in (4.22), we see that the statement of the proposition is always true.

Proof. Using proposition 4.3, we get the required relation.

For any ((i, j), (m, n)) ∈ C_VI the statement holds. Proof. The statement of the proposition is a direct consequence of the defining relation (3.3).

Proposition 4.9. For any ((i, j), (m, n)) ∈ C_II the corresponding relation holds. We prove equation (4.28) for i = k − 1, m = k, n = k + 1 and j = k + 2. Using the defining relation (3.3) and proposition 4.1, we obtain the result.

For any ((i, j), (m, n)) ∈ C_I, and for any ((i, j), (m, n)) ∈ C_III, the corresponding relations hold. Proof. We first prove equation (4.31) for j = i + 1. Using proposition 4.3 and then proposition 4.1, we come to the required equality. Using proposition 4.3 again, it follows from proposition 4.4, equation (4.35) and proposition 4.1 that equation (4.31) is always true. In the same way one can prove equations (4.32), (4.33) and (4.34).

Proposition 4.11. Proof. The statement of the proposition is certainly true for j = i + 1. Let us consider the case when j − i > 1. It follows from proposition 4.3 that, using these equations in (4.36), we obtain the statement. That was to be proved. Using (3.4), we see that the first equation of the proposition is true. The second equation can be proved similarly.

One can verify that propositions 4.1-4.12 allow us to reduce any monomial in the Poincaré-Birkhoff-Witt generators to the ordered form (4.3).

CONCLUSIONS

We have derived the commutation relations for the Poincaré-Birkhoff-Witt generators of the quantum algebra U_q(gl_(M|N)). Our results do not fully coincide with the results of the papers [18,19,20]. We plan to use the obtained relations for constructing q-oscillator representations of the positive Borel subalgebra of the quantum superalgebra U_q(gl_(M|N)).
2023-01-23T06:42:14.411Z
2023-01-20T00:00:00.000
{ "year": 2023, "sha1": "f8b85d1484b5befaf5edd1240d77745e5bfa5be1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f8b85d1484b5befaf5edd1240d77745e5bfa5be1", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
229389701
pes2o/s2orc
v3-fos-license
Linear System Identification and Vibration Control of End-Effector for Industrial Robots

This paper presents the discrete state space mathematical model of the end-effector in industrial robots and designs the linear-quadratic-Gaussian controller, called LQG controller for short, to solve the low-frequency vibration problem. By simplifying the end-effector as a cantilever beam, this paper uses the subspace identification method on the measured dynamic response data to establish the state space model. After experimentally comparing the influences of different input excitation signals, chirp sequences from 0 Hz to 100 Hz are used as the final estimation signal and the excitation signal. The LQG controller is designed and simulated to achieve low-frequency vibration suppression of the structure. The results show that the suppression system can effectively suppress vibration at and below the fundamental natural frequency of the end-effector. The vibration suppression percentage is 95%, and the vibration amplitude is successfully reduced from ±20 μm to ±1 μm. The present work provides an effective method to suppress the low-frequency vibration of the end-effector for industrial robots.

Introduction

The problem of vibration suppression has long been a research hotspot, and studies on vibration control are increasing year by year [1]. Due to their light weight, large output, fast response speed and high strain sensitivity, piezoelectric materials have been widely used for vibration control and other applications [2]. What is more, a piezoelectric patch can be easily attached to a cantilever structure, which is the most used mechanism in the area of industrial robots and the most easily simplified model for transport robots [3,4]. Therefore, using piezoelectric materials for vibration suppression of beam structures has been widely studied.

Piezoelectric material was introduced into beam vibration control by Crawley and de Luis [5] for the first time, and subsequently it was widely used in robot structures. Some scholars use the direct piezoelectric effect and the inverse effect to design sensors and actuators for robots, collecting the vibration signal through the sensor and applying a counteracting vibration through the actuator in order to suppress the mechanism's vibration, e.g., Tzou et al. [6,7], Shen et al. [8], Lin [9] and Lou [10]. However, the hysteresis characteristic of piezoelectric materials and the interference between electrical signals soon appeared. Leang [11] found the influence of the time-delay effect of piezoelectric materials on vibration suppression and started to study the hysteresis characteristics, as did Zhang [12], Chen [13] and others. In addition, some scholars focused on the separation of control signals.
Another research direction is using piezoelectric materials only as actuators. Dadfarnia [14] used a piezoelectric (PZT) patch as an actuator bonded on the surface of a flexible beam to suppress residual vibration. Qiu et al. [15] gave a vibration suppression method for two bending modes and two torsional modes. Against the background of this research direction, Chang et al. [16] discussed using an auxiliary piezoelectric actuator to control vibration for high-speed linear robots. Because of the minute output of piezoelectric material, it is often used in many pieces to achieve precise vibration suppression of robots, e.g., Jia et al. [17], Zehetner [18] and Douat et al. [19]. Yang et al. [20] studied accurate models of a flexible link with two surfaces bonded with piezoelectric patches, where the link and the piezoelectric patches were modelled using Euler-Bernoulli beam theory (EBT).

When using a PZT patch as an actuator to achieve vibration suppression, some researchers have studied model identification of the system. Narendra [21] carried out detailed research on system identification and real-time control of dynamic systems, and realized adaptive control of nonlinear systems. Others studied identification methods based on dynamic systems, e.g., Abd Jalil [22] and Sethi [23-25]. Elsley [26] of Rockwell International Science Center established a self-learning double-layer back-propagation (BP) neural network control system for systems with unknown characteristics and dynamic models, while Song [27] established neural network identification. Chen et al. [28,29] studied an adaptive method and gave the relationship between the proportion of low-frequency components and the modal order. Some identification models came from experiments, such as Takawa et al. [30].

The existing research on vibration control of beam structures lacks studies on industrial robots with low-frequency vibration. At the same time, under this working condition, most adaptive control systems or feedback compensation systems cannot provide the feedback signals needed by the system in real time. Therefore, a vibration controller based on linear system identification is proposed to solve the problem of low-frequency oscillation. The remainder of this paper is organized as follows. The linear system identification and LQG controller for the cantilever beam structure are described in Section II. In Section III, the linear identification system is designed. Furthermore, the LQG controller is presented in Section IV. Conclusions are given in Section V.

Materials and Methods - Structure Design of Cantilever Beam

Since the first generation of industrial robots came out in 1945, the application of industrial robots has gradually become more and more widespread [31]. More and more industrial robots replace humans in industrial tasks with unusual or difficult requirements, and the requirements for the quality of industrial robots keep rising. This paper mainly studies the transport robot for glass substrates shown in Figure 1. Due to the heavy weight, large area and fragility of the glass substrate, the position accuracy of the end-effector of the transport robot is required to be high. In practice, vibration of the end-effector is the leading cause of glass substrate damage. Therefore, based on the end-effector of a glass substrate handling robot, a cantilever structure is designed for system identification and vibration control.
The end execution structure of the existing glass substrate handling robot is illustrated in Figure 2. The overall structure is rod-shaped with uniform mass distribution, large span and relatively thin thickness. The upper surface is the contact surface of the glass substrate, which is connected by the glass substrate absorber. The end-effector is usually fixed to the arm by riveting or screw insertion into the arm mounting hole. Most substrates are composed of rubber, while the material of the end-effector is steel and carbon fiber. Therefore, the structure can be simplified as a cantilever beam model, as shown in Figure 3.

From Figure 3, it can be found that the absorber is omitted after comparing the masses, so that we can simplify the end-effector into a thin cantilever beam. According to the actual length, width, and height of the end-effector, we scaled down in equal proportion. The structural parameters of the cantilever beam designed and processed for system identification are 250 × 2 × 25 mm. Additionally, 30 mm are left on the left to fix the end, which is called the exposed core. The specific structure is shown in Figure 4. The parameters of the pasted piezoelectric ceramics are shown in Table 1.
Table 1. Parameters of the pasted piezoelectric ceramics (elastic constants):

Parameter               Symbol   Value (N/m²)
Elastic constant        C11      12.6 × 10^10
Elastic constant        C12      5.5 × 10^10
Elastic constant        C13      5.3 × 10^10
Elastic constant        C33      11.7 × 10^10
Elastic constant        C44      3.53 × 10^10

In this paper, the excitation signal of the end-effector comes from the end in contact with the arm of the robot. In order to reproduce the excitation of the end-effector as realistically as possible, we pasted a piezoelectric sheet as close to the exposed core as we could in order to excite the vibration. The maximum strain of a cantilever beam is at the clamp, so we pasted the other piezoelectric sheet near the exciting one to control the vibration. The point expressing the actual output vibration characteristics of the system, called the displacement acquisition point, is 10 mm from the free end. The response of the cantilever beam to a 0-300 Hz sinusoidal sweep excitation signal is analyzed by fast Fourier transform. As shown in Figure 5, the first-order natural frequency of the structure is about 32.8 Hz, and the second-order natural frequency is about 195 Hz. The two natural frequencies are far apart on the frequency axis, and their response amplitudes in the frequency domain differ greatly. Therefore, the first-order vibration mode of the structure is mainly considered in the identification process. Based on the structure of the cantilever beam, this paper designs the experiment shown in Figure 6.
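As a sketch of how the natural frequencies can be read off the sweep response, the following picks spectral peaks from the measured displacement; the sampling frequency fs and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def natural_frequencies(displacement, fs, n_peaks=2):
    """Estimate dominant natural frequencies from a sweep response.

    displacement: 1-D array of measured tip displacement samples
    fs:           sampling frequency in Hz (assumed, e.g. 1000 Hz)
    """
    x = displacement - np.mean(displacement)      # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # simple local-maximum peak picking on the magnitude spectrum
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_peaks])
```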
Theoretical Design of Linear System Identification

Based on the state space model of the system, the discrete state space model of a linear time-invariant system is assumed to be x_(k+1) = A x_k + B u_k, y_k = C x_k + D u_k, where x_k is the n-dimensional state vector of the system and y_k is the L-dimensional output observation vector. The parameter vector is defined as θ = [vec(A)^T, vec(B)^T, vec(C)^T, vec(D)^T]^T, where vec(·) is the straightening operation; the vector length d = n² + n(m + l) + lm is the total number of system parameters. The identification of the system state space can be expressed as the estimation of the parameter matrices A, B, C, D from a given input-output observation sequence (U_N, Y_N), so as to minimize the value of the objective function.

Based on the subspace identification method, the extended matrices of the structure are constructed as block Hankel matrices of the response data. According to the corresponding time, the input Hankel matrix is U_p = U_(1,s,T) in the past, U_c = U_(s,1,T) at present, and U_f = U_(s+1,s,T) in the future. Hankel matrices of the future measurement noise and input noise are denoted as M_f and N_f, and the state sequence of the system is defined accordingly. The input-output matrix equation of the system reads Y_f = Γ_s X_s + Φ_s U_f + Φ_s^w M_f + N_f. Removing M_f, N_f and the input Hankel matrix U_f by the projection method gives O_s = Γ_s X_s. Based on the estimated state sequence X_s and the extended observability matrix Γ_s, the parameter matrices A, B, C, D of the system state space model can be obtained by solving the regression equations with the least squares method of N4SID.

Linear System Identification Experiment

System identification can generally be divided into off-line identification and on-line identification. In this paper, the system model structure and system order have been determined in advance, and the application background is the industrial robot, so we used off-line identification, as shown in Figure 7.
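A minimal deterministic sketch in the spirit of the N4SID procedure outlined above follows: block Hankel matrices are formed, the future outputs are projected onto the past data along the future inputs, the SVD of the projection yields the state sequence, and A, B, C, D follow from least squares. It assumes noise-free data and illustrative block sizes; it is not the authors' implementation.

```python
import numpy as np

def block_hankel(x, s, T):
    """Stack s block rows of the signal x (shape: channels x samples)."""
    return np.vstack([x[:, i:i + T] for i in range(s)])

def subspace_id(u, y, s, n):
    """Deterministic N4SID-style fit of x_{k+1}=Ax_k+Bu_k, y_k=Cx_k+Du_k.

    u, y: input/output arrays of shape (m, T_total) and (l, T_total)
    s:    number of block rows; n: assumed model order (4 in the paper)
    """
    m, l = u.shape[0], y.shape[0]
    T = u.shape[1] - 2 * s + 1
    U, Y = block_hankel(u, 2 * s, T), block_hankel(y, 2 * s, T)
    Up, Uf = U[:s * m], U[s * m:]            # past / future inputs
    Yp, Yf = Y[:s * l], Y[s * l:]            # past / future outputs
    Wp = np.vstack([Up, Yp])
    # project out the future inputs, then onto the past data
    Pi = np.eye(T) - Uf.T @ np.linalg.pinv(Uf @ Uf.T) @ Uf
    O = (Yf @ Pi) @ np.linalg.pinv(Wp @ Pi) @ Wp     # O_s = Gamma_s X_s
    # SVD splits O into the observability matrix and the state sequence
    Usv, Ssv, Vsv = np.linalg.svd(O, full_matrices=False)
    X = np.diag(np.sqrt(Ssv[:n])) @ Vsv[:n]          # estimated states
    # least-squares regression for the state-space matrices
    k0 = s                                            # states start at time s
    Z = np.vstack([X[:, :-1], u[:, k0:k0 + T - 1]])
    Theta = np.vstack([X[:, 1:], y[:, k0:k0 + T - 1]]) @ np.linalg.pinv(Z)
    A, B = Theta[:n, :n], Theta[:n, n:]
    C, D = Theta[n:, :n], Theta[n:, n:]
    return A, B, C, D
```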
This paper mainly studies the low-frequency vibration of the cantilever beam, so the experimental platform (illustrated in Figure 8) was built according to the cantilever beam model designed in Section 2. By amplifying the voltage 30 times, the power amplifier excites the vibration of the cantilever structure. The laser displacement sensor gathers the displacement signal from the displacement acquisition point. In the process of the experiment, different excitation signals were used to excite the system vibration and the real-time signals were collected.

The order of the state space identification is assumed to be 4, and the identification results are presented in Table 2. The accuracy of the identification results is represented by the mean squared error (MSE) between the actual output value and the output value of the identification model. In the process of data acquisition, the laser displacement sensor is set in advance to carry out sliding average filtering with a length of 265 to reduce the influence of measurement noise. Because the output signal of a laser displacement sensor is the absolute value of the measured displacement, it is necessary to remove the mean and the trend of the output signal before identification. The identified system model is set as a discrete state space model.

From Table 2 it can be seen that differences in the excitation signal and sampling frequency have a significant impact on the identification results. In the above experiments, the optimal performance (i.e., minimum MSE) of the Gaussian white noise, Pseudo-Random Binary Sequence (abbreviated as PRBS) and chirp sweep signals are 2.9043, 2.6542 and 14.2563, respectively. The experiment should be repeated many times to avoid the interference of external factors.
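The accuracy criterion just described, comparing the measured displacement with the output of the identified discrete state-space model, can be sketched as follows with scipy. The fourth-order matrices, sampling period and signals here are illustrative placeholders, since the paper's identified matrices are not reproduced; the two modal frequencies are simply taken from the measured values.

```python
import numpy as np
from scipy import signal

def damped_mode(f_hz, zeta, dt):
    """2x2 discrete-time block realizing a damped oscillation at f_hz."""
    r = np.exp(-zeta * 2 * np.pi * f_hz * dt)
    th = 2 * np.pi * f_hz * dt
    return r * np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]])

dt = 1e-3  # assumed sampling period (the paper's rates are in Table 2)
# Placeholder 4th-order model with modes at the measured 32.8 Hz and 195 Hz.
A = np.zeros((4, 4))
A[:2, :2] = damped_mode(32.8, 0.02, dt)
A[2:, 2:] = damped_mode(195.0, 0.02, dt)
B = np.array([[0.0], [1.0], [0.0], [0.2]])
C = np.array([[1.0, 0.0, 0.5, 0.0]])
D = np.zeros((1, 1))

def model_mse(u, y_meas):
    """Simulate the identified model and score it against the measurement."""
    t = np.arange(len(u)) * dt
    out = signal.dlsim((A, B, C, D, dt), u[:, None], t=t)
    y_sim = out[1].ravel()
    return float(np.mean((y_meas - y_sim) ** 2))

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)                        # stand-in excitation record
y_meas = signal.detrend(rng.standard_normal(2000))   # de-mean/de-trend first,
print("MSE:", model_mse(u, y_meas))                  # as the paper advises
```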
By comparing the actual output and the output of the identified state space while three different kinds of signals excited the structure at different frequencies, the optimal situation under the different excitation signals is selected. The time-domain difference of the system is obtained by subtracting the output displacement of the identified state space from the actual output displacement of the system at the corresponding time, as shown in Figure 9. Figure 9 shows that the influence of the different excitation signals on the system identification can be directly represented by the time-domain difference curve. The experimental results show that, over the whole system identification process, the maximum output displacement differences of the Gaussian white noise, the PRBS excitation signal and the 1-100 Hz chirp signal are 4.86 mm, 4.92 mm and 19.1 mm, respectively. Although the values for Gaussian white noise and the PRBS excitation signal are smaller than that of the chirp signal, the identification results of these two have a similar time-varying identification difference deviation, whereas the curve of the 1-100 Hz chirp is stable around the zero line and its maximum difference is concentrated in a very short time. Therefore, considering the accuracy of the identification results, the coverage of the frequency bandwidth and the real-time requirements of the sampling frequency, this paper chooses the 1-100 Hz chirp signal for the final estimation of the mathematical model and as the excitation signal for the controller; the final parameter matrices of the identified model are obtained accordingly.

Design and Simulation of LQG Controller

For the discrete state space, the linear quadratic integral regulator, abbreviated as the LQI regulator, obtains the optimal control law by minimizing the linear quadratic cost function J(u) = Σ_{k=0}^{∞} (z_k^T Q z_k + u_k^T R u_k) without considering noise interference. Q and R are the weighting matrices of the cost function, chosen by the designer. The control strategy of the linear-quadratic-Gaussian controller, abbreviated as the LQG controller, is to minimize the quadratic error functional of a linear system disturbed by external noise. It does not need the system state to be fully observable. Therefore, it is more suitable for this control system, which is tuned by adjusting the values of the weights Q and R during the design. The state estimator gives the state equation as

x̂[k+1|k] = A x̂[k|k−1] + B u[k] + L(y[k] − C x̂[k|k−1] − D u[k])

According to the state estimation of the Kalman filter and the optimal control matrix K obtained by the LQI method, a 1-DOF position tracking controller is constructed. The structure diagram is shown in Figure 10.
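A minimal sketch of this design step with scipy: a state-feedback gain from the discrete Riccati equation, and the steady-state gain L of the one-step-ahead estimator quoted above. The weights Q, R, the noise covariances, and the placeholder plant matrices are illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete LQR/LQI-style gain: minimizes sum(z'Qz + u'Ru), u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def kalman_predictor_gain(A, C, Qn, Rn):
    """Steady-state gain L for the one-step-ahead estimator quoted above."""
    P = solve_discrete_are(A.T, C.T, Qn, Rn)
    return A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rn)

# Placeholder 4th-order plant (same assumed modal form as the earlier sketch).
dt = 1e-3
def mode(f, z):
    r = np.exp(-z * 2 * np.pi * f * dt)
    th = 2 * np.pi * f * dt
    return r * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

A = np.block([[mode(32.8, 0.02), np.zeros((2, 2))],
              [np.zeros((2, 2)), mode(195.0, 0.02)]])
B = np.array([[0.0], [1.0], [0.0], [0.2]])
C = np.array([[1.0, 0.0, 0.5, 0.0]])
D = np.zeros((1, 1))

# Illustrative weights and noise covariances, not the paper's values.
K = lqr_gain(A, B, Q=np.eye(4), R=np.array([[0.1]]))
L = kalman_predictor_gain(A, C, Qn=1e-4 * np.eye(4), Rn=np.array([[1e-6]]))

def lqg_step(xhat, y):
    """One controller update: state feedback, then predictor propagation."""
    u = -K @ xhat
    xhat_next = A @ xhat + B @ u + L @ (y - C @ xhat - D @ u)
    return u, xhat_next
```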
Figure 11 illustrates the comparison of the Bode diagrams of the identified system and the original system. The amplitude of the identified system is consistent with the amplitude of the Bode diagram of the actual input-output relationship between 0 Hz and 100 Hz, but there is a large deviation beyond this range. As for the phase, there is an understandable delay between 0 Hz and 100 Hz, but no regular pattern beyond this band.

Figures 12 and 13 show the Bode diagrams of the closed-loop system and the LQG controller. Generally, the Bode diagram amplitude of the closed-loop system is less than 1 and the phase is basically maintained at 360 degrees when the frequency is below 100 Hz after adding the LQG controller, which indicates that the closed-loop system can effectively suppress vibration below 38.7 Hz.

Figure 14 shows the results of the response of the open-loop system and the closed-loop system, respectively.
It can be seen that the amplitude of the closed-loop system shows a rapid reduction, which means the dynamic performance of the system is significantly improved after adding the controller.

The system controller designed above is simulated. The frequency of the sinusoidal interference signal is 35 Hz, which is near the natural frequency, and the vibration amplitude is ±20 µm. The vibration response of the structure is shown in Figure 15. It can be seen from Figure 15 that the vibration amplitude of the structure is successfully reduced from ±20 µm to ±1 µm by the effective suppression of the control system, indicating that the vibration suppression percentage of the system is as high as 95%, without high-frequency noise. When the vibration frequency of the excitation signal is 50 Hz and the vibration amplitude is ±20 µm, the response of the system reaches about ±40 µm. The results are shown in Figure 16, and the control system completely loses its vibration suppression effect. In conclusion, the LQG vibration suppression system has an obvious suppression effect on vibration interference below 38.7 Hz, but has no suppression effect on vibration signals above 38.7 Hz.
Conclusions

In this paper, the vibration control of a cantilever beam is discussed. In order to solve the low-frequency vibration problem of an industrial robot cantilever structure, a discrete state space mathematical model based on the subspace identification method is proposed. Based on the model, an LQG controller is designed. The results of the simulation demonstrate that the LQG control method has a fast response speed while its frequency range is limited, so it requires high accuracy of the identification model. When the excitation frequency is lower than 38.7 Hz, the LQG controller designed in this paper can effectively suppress the first-order vibration of the system, achieving a vibration suppression rate of 95%: the vibration amplitude of the structure is successfully reduced from ±20 µm to ±1 µm. This study provides a feasible method for the vibration control of the cantilever beam. The subspace identification method is used to identify the discrete state of the system, which is suitable for industrial transport robots with end-effectors. At the same time, the LQG controller designed based on the identification results can control the vibration of the end-effector of the robot without a real-time feedback signal of the system. This method can be implemented in the vibration control of industrial robots with an end-effector.

Author Contributions: All the authors have made great contributions to the design of the system. X.S. and J.F. supervised the experimental measurements and the writing of the manuscript. H.S. designed and processed the structure, performed the experiment, analyzed the data and wrote the paper. C.Z. and G.W. provided valuable suggestions and comments on the interpretation of results and on the paper. All authors have read and agreed to the published version of the manuscript.
2020-12-03T09:07:36.687Z
2020-11-29T00:00:00.000
{ "year": 2020, "sha1": "c1c18ac6b08ca9b43f64eb976d8b4cba80fe661c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/10/23/8537/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ab9b81fe8362d26bd31b8331654980918f73c9ab", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
257090420
pes2o/s2orc
v3-fos-license
Integrated management of plant-parasitic nematodes on guava and fig trees under tropical field conditions Two field experiments were carried out to study the efficacy of different biological control agents in controlling certain plant-parasitic nematode species including Meloidogyne javanica, Tylenchorhynchus mediterraneus, Hoplolaimus seinhorsti, Longidorus latocephalus, and Xiphinema elongatum on guava and fig trees under the tropical field conditions of Jazan region, south-west Saudi Arabia during two successive seasons from Feb. 15, 2016 to Jan. 15, 2017. The evaluated bio-agents were used in different integrated management combinations of certain fungal species (Trichoderma harzianum, Verticillium chlamydosporium, and Purpureocillium lilacinum), the bacterium Pasteuria penetrans, some organic amendments (cow manure, compost, and chicken manure), urea 46% as a nitrogenous fertilizer, and the nematicide carbofuran 10G for comparison. Results showed that all the tested treatments gradually decreased (P ≤ 0.05) the population densities of plant-parasitic nematodes on guava and fig trees over the study period. The highest reduction of nematode densities occurred at the end of the experiment. Carbofuran 10G was the most effective treatment in suppressing the nematode densities on guava and fig trees. The most effective management combinations, next to carbofuran 10G, in suppressing the nematode densities in the rhizosphere of guava trees were P. lilacinum + P. penetrans + urea 46%, P. lilacinum + P. penetrans + chicken manure, and T. harzianum + P. penetrans + chicken manure (66.54–69.22% nematode reductions). Corresponding combinations in the rhizosphere of fig trees were P. lilacinum + P. penetrans + cow manure, T. harzianum + P. penetrans + cow manure, P. lilacinum + P. penetrans + urea 46%, and V. chlamydosporium + P. penetrans + urea 46% (54.68–57.17% nematode reductions). On the other hand, nematode population densities continued to increase (P ≤ 0.05) in the rhizosphere of guava and fig trees in the absence of nematode management combinations. All the tested treatments significantly increased (P ≤ 0.05) the number of fruits/tree on guava and fig trees. Treatments which included the combinations of fungal and bacterial parasites along with chicken manure gave the highest numbers of fruits/tree, followed by the treatment with the nematicide carbofuran 10G. Regression analysis showed a significant negative linear relationship between the number of nematodes/kg soil and the number of guava and fig fruits/tree. Chemical nematicides have hazardous effects on human health and environment. So, alternative control measures should be adopted to replace those compounds. The concept of combining compatible tactics for controlling nematodes predates that of integrated pest management (IPM) (Barker 2013). The biological control agents of nematodes include many microorganisms, but the most important are fungi and bacteria. Purpureocillium lilacinum (Thom) Luangsa-ard, Houbraken, Hywel-Jones & Samson, Trichoderma harzianum Rifai, and Verticillium chlamydosporium Goddard have been reported to be among the most potent fungal parasites that can effectively control Meloidogyne spp. on many host plants (Rao 2007). Pasteuria penetrans Sayre & Starr, a mycelial endospore-forming bacterial parasite, represents another successful bio-control agent against root-knot nematodes (Chen et al. 1996). P. lilacinum offers a successful biological control against many pathogenic nematode species (Jatala 1986).
For example, it effectively controlled Tylenchulus semipenetrans Cobb on mandarin and rough lemon, and the results were best when the fungus was combined with oil-cakes (Le Roux et al. 2000). Also, when P. lilacinum and the bacteria Pseudomonas fluorescens Migula were combined to enrich the farm yard manure, which was added to the rhizosphere of papaya seedlings, the root populations of Rotylenchulus reniformis Linford & Oliveira and Meloidogyne incognita (Kofoid & White) Chitwood were reduced by 73% and 78%, respectively, and the papaya yield was increased by 26% (Rao 2010). As well, T. harzianum effectively suppressed the population of the root-knot nematode, M. enterolobii Yang & Eisenback, in both soil and roots of guava in Thailand (Jindapunnapat et al. 2013). When the nursery soil of papaya trees was treated with T. harzianum and the rhizobacteria P. fluorescens, either in single or combined applications, M. incognita was greatly controlled and the papaya yield was increased (Rao 2007). De Leij et al. (1992) reported the potential of some Verticillium chlamydosporium isolates against M. arenaria on tomato plants. In Saudi Arabia, Al-Hazmi et al. (2013) found a heavy colonization of the cysts of Heterodera avenae Woll. with the fungus Verticillium chlamydosporium. The bacterium P. penetrans has shown a great control potential against many plant-parasitic nematode species, especially Meloidogyne spp. (Chen and Dickson 1998). This bacterial parasite has been reported adhering to, or infesting, hundreds of nematode species from many countries worldwide (Sturhan 1988). Al-Rehiayani (2007) reported the potential of P. penetrans in controlling M. incognita on grape in Al-Qasim region, Saudi Arabia. Organic and inorganic nitrogenous amendments, which have usually been added to soil to improve soil fertility, have also offered good nematicidal effects against plant-parasitic nematodes (Oka 2010). Urea and ammonia were found to be effective in controlling the plant-parasitic nematodes at rates as low as 300-400 mg/kg soil (Rodriguez-Kabana 1986). Guava decline disease, a complex disease involving M. mayaguensis and Fusarium solani, has been greatly managed in a commercial guava plantation and major yield gains were obtained by the applications of cow manure and poultry compost (Gomes et al. 2010). This study aimed to evaluate different bio-control agents in integrated management combinations to manage the nematode problems on guava and fig trees under the tropical field conditions of Jazan region, southwest of Saudi Arabia. Materials and methods Two field experiments were carried out in 2-year-old guava and fig orchards located at Abu Areesh governorate, Jazan region, southwest of Saudi Arabia to study the efficacy of different integrated combinations of biological control agents in controlling certain plant-parasitic nematode (PPN) species. Nematode infestation and identification Soils of the guava and fig orchards were naturally infested with a group of plant-parasitic nematode (PPN) species including Meloidogyne javanica (Treub) Chitwood, Tylenchorhynchus mediterraneus Handoo, Hoplolaimus seinhorsti Luc, Longidorus latocephalus Lamberti, Choleva and Agostinelli, and Xiphinema elongatum Schuurmans Stekhoven & Teunissen. These nematode species were morphologically and molecularly characterized by Dawabah and Al-Yahya (2017), and their frequency of occurrence (FO %) in both guava and fig orchards at the beginning of the experiments was determined (Table 1).
The experiments were carried out during two successive seasons from Feb. 15, 2016 to Jan. 15, 2017. Guava and fig trees were spaced (5 × 5 m) and irrigated by sprinklers as needed. One week prior to the implementation of the two experiments on guava and fig trees (8th of Feb., 2016), rhizosphere soil samples were collected from under the trees of the two orchards, representing the experimental units for the different treatments, to extract and count the initial population densities (Pi) of PPN species in each experimental unit (tree). Accordingly, trees with approximately close numbers of PPNs were selected, labeled, and assigned randomly at four replicates (trees)/treatment, in a complete randomized design (CRD) (Siddiqi et al. 2007). Besides, a map was designed for each experiment. Identification and preparation of fungal and bacterial inocula T. harzianum and V. chlamydosporium were isolated from the egg masses of the root-knot nematode, M. javanica, collected from galled guava roots grown in Riyadh region, central Saudi Arabia. Egg masses were surface sterilized by 0.1% NaOCl for 30 s, washed three times in sterile distilled water, and then transferred to Petri dishes containing sterilized potato dextrose agar (PDA). Petri dishes were incubated at 25°C and observed for fungal growth. Pure isolates of T. harzianum and V. chlamydosporium, isolated and identified based on the cultural and spore morphological features, were also sent to the Plant Pathology Research Institute, Agricultural Research Center, Giza, Egypt to confirm the identification. An aggressive German isolate of P. lilacinum (DSMZ 14052) was obtained from Damascus University. The fungi T. harzianum, V. chlamydosporium, and P. lilacinum were continuously maintained on PDA for further use. To prepare fungal inocula, wheat grains were immersed in water overnight, and weights of 250 g of wetted grains were transferred to 500 ml conical flasks, which were autoclaved at 15 psi for 20 min twice. Each flask containing sterilized wheat grains was inoculated with a couple of 2-mm discs of the designated fungus and incubated at 25°C for 2 weeks. The flasks were shaken every 3-4 days to ensure uniform colonization by the fungus. For fungal infestation in the field soil, the basin of each tree was infested with 1 kg of fungal-infected wheat grains, distributed in the top 20 cm soil of the tree basin. A local isolate of the endospore-forming bacterium, P. penetrans, was isolated from M. javanica second-stage juveniles (J2) parasitizing olive roots at Al-Melaidah, Al-Qasim, Saudi Arabia. This local bacterial isolate was previously identified on a molecular basis by Al-Rehiayani and Motawei (2014). M. javanica J2s, with bacterial endospores attached, were obtained by centrifugation (Hewlett and Dickson 1993) and used to inoculate susceptible tomato plants cv. "Sulatana-7," growing in a steam-sterilized sandy loam soil, at 3000 J2s/plant/pot. Tomato plants were harvested 60 days after inoculation and uprooted, and the root systems were washed, then air dried on a lab bench. Aliquots of root materials (0.5 g/5 ml water) were homogenized using a pestle and a mortar. Homogenates were then passed through a 35 mesh (50 μm openings) sieve (Bird and Brisbane 1988). The number of released endospores in the suspension was determined using a hemocytometer, and adjusted to a concentration of 1 × 10^7/ml. The bacterial inoculum consisted of 20 ml of the bacterial endospore suspension/tree basin (Abd-Elgawad et al. 2010).
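As a small worked check of the dose implied by this protocol (20 ml of a 1 × 10^7 endospores/ml suspension per tree basin), the sketch below tallies the dilution of a counted stock and the endospores delivered per tree. The counted stock concentration is a made-up example, not a datum from the paper.

```python
# Hemocytometer-style dose check for the P. penetrans inoculum (sketch).
counted = 2.6e7            # endospores/ml measured in the stock (hypothetical)
target = 1.0e7             # target concentration from the protocol
dose_ml = 20.0             # volume applied per tree basin

dilution = counted / target
print(f"dilute stock 1:{dilution:.2f} (add {dilution - 1:.2f} volumes of water)")
print(f"endospores per tree basin: {target * dose_ml:.2e}")   # 2.00e+08
```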
Application of treatments The first application of the treatments took place on the 15th of Feb., 2016. Three months later (on the 15th of May, 2016), root and rhizosphere soil samples were collected from the trees of both experiments to determine the numbers of nematodes/tree and the mean number of nematodes/treatment. The treatments (T1-T14) included: T1: P. lilacinum + P. penetrans + chicken manure. The second application of the treatments (same as in the first one) took place on the 15th of Oct., 2016, and the number of nematodes per tree was again determined. Three months later (on the 15th of Jan., 2017), rhizosphere soil samples were collected from all the experimental units (trees) and the mean number of nematodes/treatment (final nematode populations = Pf) was determined in both experiments. Data analysis Data were statistically analyzed using SPSS (2016), and means were separated using Fisher's Protected LSD 0.05. Results and discussion All the tested treatments significantly reduced (P ≤ 0.05) the population densities of PPN in the rhizosphere of guava and fig trees in the two separate field experiments (Tables 2 and 3). Nematode populations (M. javanica, T. mediterraneus, H. seinhorsti, L. latocephalus, and X. elongatum) gradually decreased under all the tested treatments over the study period from Feb. 15, 2016 to Jan. 15, 2017. The highest reductions were achieved by the end of the experiments (final nematode populations) (Tables 2 and 3). In both experiments, carbofuran 10G was the most effective treatment in suppressing the nematode population densities in the rhizosphere of guava and fig trees (80.13 and 83.13%, respectively). Actually, chemical treatments mostly have the advantage of a quick and effective response in controlling plant-parasitic nematodes. Soltani et al. (2013) found that aldicarb, enzone, oxamyl, and cadusafos at 6 and 8 ppm concentrations were effective treatments in controlling the root-knot nematode, M. javanica, on 1-year-old olive seedlings in the greenhouse. On the other hand, nematode population densities gradually increased (P ≤ 0.05) in the rhizosphere of the untreated guava and fig trees (Tables 2 and 3). The ability of the different non-chemical combinations used in this study is greatly consistent with the results of previous studies. Many alternative control measures have recently been adopted to replace the chemical compounds due to their hazardous effects (Sahebani and Hadavi 2008; Jindapunnapat et al. 2013). P. lilacinum, T. harzianum, and V. chlamydosporium, as fungal parasites of nematode eggs and adults (Rao 2007), as well as P. penetrans as a bacterial parasite (Chen et al. 1996), were previously reported among the most potent measures used in this subject. In addition, urea and nitrogenous fertilizers are considered to be good nematicides when applied at levels as low as 300-400 mg/kg soil (Rodriguez-Kabana 1986; Alam 1992; Al-Hazmi and Dawabah 2014; Al-Hazmi et al. 2017). Likewise, many previous studies have shown that organic and inorganic nitrogen amendments had a nematicidal effect against plant-parasitic nematodes (Rodriguez-Kabana 1986; Akhtar and Malik 2000; Oka 2010). As shown in the present study, reductions of the nematode densities in the rhizosphere of guava and fig trees were much higher when chicken manure, cow manure, and urea 46% were added along with the fungal and bacterial parasites. These findings are in agreement with the results of previous studies which used the fungus P. lilacinum and the bacteria P.
fluorescens to enrich the farm yard manure, then applied the enriched manure to the rhizosphere of papaya seedlings to effectively control the reniform nematode, R. reniformis, and the root-knot nematode, M. incognita (Rao 2007 and 2010). Similarly, the fungus P. lilacinum was previously found suppressing the population densities of the citrus nematode, T. semipenetrans, on citrus seedlings in pot experiments, and the results were best when the fungus was combined with organic amendments (oil-cakes) (Le Roux et al. 2000). Gomes et al. (2010) also concluded that coupling the control treatments with organic soil amendments, particularly poultry compost and cow manure spread evenly under the guava canopy, gave a better control of the root-knot nematode, M. mayaguensis. In fact, organic soil amendments stimulate the activities of soil microorganisms that are antagonistic to plant-parasitic nematodes. The decomposition of organic matter results in the accumulation of specific compounds in the soil that may have nematicidal effects against nematodes (Akhtar and Malik 2000). In addition, long-term effects might include increases in the population densities of the nematode antagonists in the soil. Also, improved crop nutrition and plant growth following amendment use might lead to tolerance of plants against plant-parasitic nematodes (McSorley 2011). However, urea is readily converted to the toxic ammonia (NH3) by the urease enzyme, which is readily present in the soil (Rodriguez-Kabana 1986). The nematicidal properties of ammonia could be attributed to either its plasmolysing effect in the immediate vicinity of its application site in the soil, or the possibility that ammonia could exert a selective influence for microbial antagonists of nematodes, particularly fungi (Rodriguez-Kabana 1986; Chavarria-Carvajal and Rodriguez-Kabana 1998; Santana-Gomes et al. 2013). Both guava and fig have two fruiting periods a year in Jazan region, southwest of Saudi Arabia (March to May and November to January of the second year). In both periods during the present study, all the tested treatments significantly increased (P ≤ 0.05) the number of fruits/tree for both guava and fig (Tables 4 and 5). Treatments which included the combinations between fungal and bacterial parasites along with chicken manure had the highest numbers of guava fruits/tree, followed by the treatment with the nematicide carbofuran 10G (Table 4). However, treatments of fungal and bacterial parasites enriched with any of the chicken manure, urea 46%, cow manure, or compost also gave numbers of fig fruits/tree as high as, or even higher than, the nematicide carbofuran 10G (Table 5). These findings are greatly supported by some previous studies, which have repeatedly proved the effectiveness of carbamate and organophosphorus nematicides in controlling plant-parasitic nematodes and increasing the yield of some tropical and subtropical fruit trees in different countries (Queneherve et al. 1991). Similarly, other previous studies also proved the usefulness of some non-chemical treatments such as cow manure and poultry compost in managing the plant-parasitic nematodes attacking guava trees and increasing their fruit yields (Souza et al. 2006 and Gomes et al. 2010). Obtained results also showed that adding each of urea 46%, chicken manure, or cow manure to the tested fungal parasites and the bacterium P.
penetrans increased the yield of guava and fig trees, and these yield increments were sometimes higher than the increments gained by the use of carbofuran 10G. These results are consistent with the findings of Rao (2010), who reported that enriching the farm yard manure with the fungus P. lilacinum and the bacterium P. fluorescens fairly controlled the reniform nematode, R. reniformis, and the root-knot nematode, M. incognita, and increased the yield of the papaya crop by 26%. Regression analysis showed a significant negative linear relationship between the number of nematodes/kg soil and both the number of guava fruits/tree (y = − 0.03x + 54.54) (Fig. 1) and fig fruits/tree (y = − 0.124x + 112.03) (Fig. 2). This means that, in both relations, the number of fruits/tree gradually decreased as the number of nematodes/kg soil increased. Similar results were previously obtained by Ibrahim (2002) and Kim and Ferris (2002). Conclusion It is concluded from the present study that carbofuran 10G was the most effective treatment in suppressing the nematode densities in the rhizosphere of guava and fig trees, followed by the combinations of the fungal (P. lilacinum, T. harzianum, and V. chlamydosporium) and bacterial (P. penetrans) bio-agents applied along with chicken manure, cow manure, or urea 46%.
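The two fitted lines reported above can be used directly to gauge the expected fruit counts at a given soil infestation level. The sketch below simply evaluates them at a few illustrative nematode densities; the chosen densities are arbitrary examples, not data from the study.

```python
# Reported regressions: guava y = -0.030x + 54.54, fig y = -0.124x + 112.03,
# with x = nematodes/kg soil and y = fruits/tree.
def guava_fruits(x):
    return -0.030 * x + 54.54

def fig_fruits(x):
    return -0.124 * x + 112.03

for x in (100, 300, 600):   # illustrative nematode densities per kg soil
    print(x, round(guava_fruits(x), 1), round(fig_fruits(x), 1))
# Both predictions fall as density rises; the fig line is the steeper of the two.
```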
2023-02-23T15:19:46.209Z
2019-05-08T00:00:00.000
{ "year": 2019, "sha1": "163324d312b0bd94fa2174d97b55a360688aa748", "oa_license": "CCBY", "oa_url": "https://ejbpc.springeropen.com/track/pdf/10.1186/s41938-019-0133-9", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "163324d312b0bd94fa2174d97b55a360688aa748", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [] }
226976495
pes2o/s2orc
v3-fos-license
The challenge of paediatric epilepsy nursing: An interview with Mrs. Jenny O'Brien, paediatric epilepsy nursing specialist at the Wirral University Teaching Hospital, UK Epilepsy in childhood is one of the most common neurological disorders encountered in paediatric clinical practice. The current treatment of paediatric epilepsy aims to improve health outcomes, as well as to manage the educational, social and psychological issues that are involved in the quality of life of paediatric patients and their parents. In this direction, in several countries, a specialized, comprehensive, multidisciplinary service has been developed, including paediatric epilepsy nursing, which constitutes a key component of this service. According to Mrs. Jennifer O'Brien, one of the pioneering paediatric epilepsy nursing specialists in the UK with a significant contribution to the care of children with epilepsy in Merseyside, the mission of paediatric epilepsy nursing is to enable children with epilepsy and their families to live as normal a life as possible, to ensure that all those who care for the child are well-educated regarding the child's epilepsy and to promote the child's safety and integration into society. She notes that in the past, epilepsy was not considered a specialty and was looked after by all paediatricians; it is recognised now that it is an incredibly complex group of conditions, which deserves specialist management. She believes that although modern technology is crucial in informing and educating families, face-to-face education and advice is still the most important method of providing support. She highlights the recent advances in the genetics of paediatric neurology along with the drive for epilepsy specialists, both nursing and medical, while she estimates that over the following years, paediatric epilepsy nursing will have progressed beyond today's expectations. Introduction Since early antiquity, epileptic attacks have been described in several texts, and Hippocrates was the first to reject the superstitious beliefs that had been related to the aetiology of epilepsy (1). Epilepsy occurs in both adults and children, and children account for 25% of all new cases. Epilepsy in children is one of the most common neurological disorders encountered in paediatric clinical practice. It is related to learning, behavioural and psychological difficulties, while long-term follow-up studies have demonstrated that children with epilepsy have an increased mortality rate when compared with the general population (2). The selection of anti-epileptic medication (3,4) is performed by the paediatric neurology specialist, based on the diagnosis, the efficacy and effectiveness of different anti-epileptic drugs (AEDs) and the principle of prescribing the fewest types of AEDs at the lowest dose, in order to achieve the optimal seizure control possible with the fewest side-effects. Furthermore, the current treatment of epilepsy in children includes a number of nursing and other interventions aiming to improve health outcomes, as well as to manage the educational, social and psychological issues that are involved in the quality of life of paediatric patients and their parents. In this direction, in several countries, a specialized, comprehensive, multidisciplinary service has been developed, aiming at state-of-the-art medical care of children with epilepsy.
Paediatric epilepsy nursing constitutes a key component of this service, and includes care planning, facilitating appropriate participation, risk assessment, school and respite care liaison, rescue medication training and telephone advice (5)(6)(7)(8)(9). In the UK, paediatric epilepsy nursing specialists have a significant input in the management of children with epilepsy, which is well described by the National Institute for Health and Care Excellence (NICE) clinical guidelines for epilepsy (5). Their role is to support both epilepsy specialists and generalists, to ensure access to community and multi-agency services and to provide information, training and support to the child, families, carers and others involved in the child's education, welfare and wellbeing. Mrs. Jennifer O'Brien (Fig. 1) is one of the pioneering paediatric epilepsy nursing specialists in the UK with a significant contribution to the care of children with epilepsy in Merseyside. She has been working as a paediatric epilepsy nursing specialist since 1998. She was born on The Wirral in 1962 and has lived in Merseyside all her life. Growing up, she developed a love for children and, after leaving school, she worked voluntarily in a local school for children with disabilities. She qualified as an RGN/RSCN (Registered General Nurse/Registered Sick Children's Nurse) at 'Alder Hey' Children's Hospital (Liverpool) in 1984 and for some time worked as a staff nurse at both 'Alder Hey' Children's Hospital and 'Arrowe Park' Hospital; 'Arrowe Park' Hospital is one of the biggest and busiest acute NHS (National Health Service) hospitals in the North West region of the UK, which is located in Upton, Wirral, Merseyside, and managed by the Wirral University Teaching Hospital NHS Foundation Trust. After developing an interest in epilepsy during her time working in the special school, Mrs. O'Brien set up the paediatric epilepsy nursing specialist service at 'Arrowe Park' Hospital in March 1998; over the past 22 years this has developed and now has 2 full-time specialist nurses. She sees new referrals from general practitioners (GPs) and the accident and emergency (A&E) department, ensuring that children with suspected epilepsy are seen and assessed within 2 weeks, as per the NICE guidelines (5). She also provides advice and support to children with epilepsy and their families. Jenny is happily married with 2 children and 3 grandchildren. She plans to retire in 2022 to spend more time with her grandchildren and husband. On February 22nd, 2020, Mrs. Jennifer O'Brien participated in a teleconference with the Paediatric Virology Study Group (PVSG) on paediatric epilepsy nursing, which was organized by the newly founded Institute of Paediatric Virology (IPV). Questions and Answers Question: First of all, Mrs Jenny O'Brien, I feel great gratitude that I had the chance to work with you as a Senior House Officer (SHO) in Paediatrics and Neonatology at the Wirral University Teaching Hospital almost 12 years ago (10)(11)(12) and to receive your support and your hospitality at the local Paediatric Department and the Neonatal Intensive Care Unit (NICU). The level of specialty training by all the neonatal and paediatric team, directed at that time by Dr Adrian Hughes, Clinical Director and Consultant Neonatologist, and Dr Lil Breen, Consultant Paediatrician, was really outstanding.
I would also like to thank you for this interview, which will focus on the role of paediatric epilepsy nursing specialists in the UK and will try to present to our Paediatric Virology Study Group (PVSG) the significant clinical, educational and research contribution of paediatric epilepsy nursing in the modern management of children with epilepsy. So, what is your mission as a paediatric epilepsy nursing specialist? Answer: My mission is to enable children with epilepsy and their families to live as normal a life as possible. It is recognised that the psychological burden on these children and their families is significant, and having an experienced person to whom they can talk can ease some of the stress. I also aim to ensure that all those who care for the child are educated regarding the child's epilepsy, to ensure the child's safety and integration into society. Question: Could you describe to us examples from your clinical experience that highlight your significant supportive role for children with epilepsy and their parents? Could you give us examples that show how your input improves patient satisfaction as well as the level of the received medical care? Answer: It's difficult to choose some examples. I have developed good relationships with many of the families I have worked with. Many of them keep in touch, even though their children are now adults. I recently learned that one of my ex-patients, who was 21, was receiving care in the adult intensive therapy unit (ITU). He was a young man with physical and learning difficulties and I had known the family well. I was able to visit him and his family in the ITU. Sadly, he died a week later, but the family contacted me with the details of his funeral. Even following the death of a patient, we can continue to support the family. I have a treasured gift from a family whose 2 children I cared for. They both had learning disabilities and difficult epilepsy. I cared for the siblings for 18 years and when they were handed on to the adult clinic they gave me a framed plaque. This plaque is really lovely, showing how much my input meant to the family. Just 2 weeks ago, a child was referred to my nurse-led clinic following a first seizure. She had been seen at the A&E after the seizure and discharged. When I saw her, in addition to the history of one nocturnal seizure, her parents told me that she had become increasingly unsteady, had experienced several episodes of nocturnal vomiting and complained of frequent headaches. I was concerned about the history and an urgent magnetic resonance imaging (MRI) was arranged, which showed a large tumour pressing on the brain stem. She was transferred to 'Alder Hey' Children's Hospital for surgery and seems to have made an excellent recovery so far. Although she didn't have a diagnosis of epilepsy, having only had one seizure, I feel that I provided excellent care to the family. Question: One of your responsibilities as a paediatric epilepsy nursing specialist is to inform and educate children with epilepsy and their parents. How demanding is this for you? What educational tools do you use and how does modern technology facilitate your practice? Answer: Modern technology is crucial in informing and educating families. In the past, we would provide written information, but increasingly we are using the internet to access advice and support. There are a number of applications available from the epilepsy charities which can support young people to remember their medication, for example.
However, face-to-face education and advice is still the most important way of providing support. Education is provided either on a one-to-one basis or in a group setting. We have recently started to hold epilepsy tea parties. We invite newly diagnosed children and their families to meet each other, have some fun and learn something about epilepsy into the bargain. This is a low-tech way to improve their lives and has proved to be very successful. Question: Viral infections causing encephalitis are often associated with seizures and can increase the risk of children developing epilepsy. Moreover, viral infections can frequently trigger a convulsive episode in a child with epilepsy. What advice do you give for children with epilepsy who are affected by a viral infection? Answer: We advise all families whose children are affected by a viral infection to keep the child comfortable with analgesia/antipyretics, to ensure they are hydrated, and to encourage rest. If a child vomits within 1 h of taking their anti-epileptic medication, we advise giving the dose again. If the child is not tolerating fluids and has diarrhoea, we suggest that they contact the hospital for advice. Question: On the other hand, febrile convulsions frequently present in children with pyrexia due to a viral infection. At the Wirral University Teaching Hospital, a febrile convulsion pathway has been developed, according to which no admission is required in cases of children with non-complicated febrile convulsions (11). Based on your clinical practice, what should be the management of children with febrile convulsions? Answer: As long as the seizure was relatively short, accompanied by significant pyrexia and has no focal features, the child should be discharged home when it is safe to do so. Parents should be given advice about managing fever and any further seizures. We do not advise that children with febrile seizures are routinely followed up in outpatients. However, if a child has a history of multiple or complicated febrile seizures, we suggest they are referred to the epilepsy clinic for assessment. Question: How many cases do you evaluate every day or every week at your outpatient clinic? In what cases do you make a referral? How important is team working in your practice? Answer: I do a nurse-led first seizures clinic each week. In this clinic I see referrals from GPs and the A&E assessment ward. I see approximately 5 children each week in this clinic. Many of them will be referred back to the GP with other diagnoses. Approximately 1-2 per week will be referred for further investigations for possible epilepsy, e.g., electroencephalogram (EEG) or MRI. Team working is crucial. The epilepsy team now consists of 2 consultants and 2 specialist nurses. The work is allocated to the most appropriate member of the team. Advice is shared between all members of the team. In the past month we have appointed a clinical psychologist to work in the paediatric department. She will cover all specialties but we are hoping she will be able to provide support for those of our children who have psychological difficulties (including anxiety, depression and psychogenic non-epileptic seizures). Question: As a paediatric epilepsy nursing specialist you spend time both at the hospital and in the community. What are the different challenges in each environment? Which one do you enjoy most? Answer: As my role has evolved, I spend less time in the community.
At one time I used to visit new referrals at home and take a history to save a visit to the hospital. This was done at a convenient time for the family (usually after school). I would also do a follow-up visit after diagnosis for education and support purposes. Now, new referrals are allocated to my hospital clinic. The time spent in the community is now limited to school/nursery visits for training or meetings. I have a specialist nurse colleague, who has been working with me for the past 16 months. She has taken on many home visits to newly diagnosed children. She is able to provide support and education in the more relaxed environment of the child's home. Question: The treatment of epilepsy is definitely more than prescribing anti-epileptic medication (6). Have you, though, obtained the prescribing right? What were the reasons for your initial hesitance to accept this right into your practice? Answer: I have now been a nurse prescriber for 10 years. I don't remember my reasons for being hesitant, unless it was a lack of confidence on my part. I have found it invaluable being able to prescribe. It allows me to care for the child as a whole, including initiating treatment and altering drug doses. I am also able to advise other members of the team, both nursing and medical, on the best options for treatment. Question: What happens when children with epilepsy or their parents disagree with receiving the recommended treatment? What are the most common reasons for this and how do you manage these cases? Answer: It is rare that parents disagree with our recommendations. However, some do not feel comfortable with their child receiving a drug with potential side-effects. If the child has infrequent, short seizures we will often agree to keep monitoring the child in the clinic. If the seizure frequency increases or the seizures become more risky, we would then encourage the families to consider medication. We often find that with patience we can reassure them. Many of them have unfounded fears about the effects of anti-epileptics. We reassure them that if the medication is not tolerated, we would try an alternative drug. Interestingly, it is more common for parents to want to keep their child on treatment longer than is necessary. This is due to anxiety that the child will have further seizures. We try to manage this sensitively, while explaining that childhood is the best time to have a trial off medication. Once the child is at university/driving/working, it becomes more difficult to find a good time to withdraw the medication and there is a risk that they end up taking it potentially for life. Question: It is really important to help children with epilepsy and their parents and to maximize their quality of life. How would you define the quality of life of your little patients? Answer: Quality of life is highly subjective. In my work I see a range of children, all with different values, goals and abilities. Seizure freedom is not always possible, so we try to ensure that children are as seizure-free as possible, without causing intolerable side-effects from medications. It is better to have occasional seizures and to be well in between times, than to be seizure-free but so tired that you can't enjoy your life. Having the freedom to do as their friends do, without unnecessary restrictions, is also important. We try to ensure this by educating parents/carers and teachers.
Question: Your educational role is significant not only for children with epilepsy and their parents, but for your paediatric colleagues, too; to date, this role has been evaluated very positively. What are the most significant educational fields of your input to the paediatric specialty training (ST) programme? Answer: I don't provide a great deal of training for the paediatric trainees. I am part of the training rota and provide sessions several times a year. These sessions provide basic information about how we manage epilepsy and when to refer children to our service. I teach the ward nurses on a regular basis to ensure that they are up to date with current ideas. Question: You are also very actively involved in clinical research. Could you describe for us some of the results of your clinical research and your audits over the last years? Answer: As an epilepsy team, we have been involved with several audits. We were significant contributors to both the Standard and New Antiepileptic Drugs (SANAD) I (3) and SANAD II (4) studies, the United Kingdom Infantile Spasms Study (UKISS) (13) and the International Collaborative Infantile Spasms Study (ICISS) (14). We are contributing on an ongoing basis to the Epilepsy12 audit (15). This is a national rolling audit, which has 12 standards of good practice. All newly diagnosed children are added to the audit. The most recent study we are involved in is the CASTLE (Changing Agendas on Sleep, Treatment and Learning in Childhood Epilepsy) study (16). Question: What training did you receive in order to subspecialise in paediatric epilepsy nursing? Is there an official training programme in the UK that could currently be attended by a nurse who is interested in children's epilepsy? Answer: Twenty-two years ago, when I applied for this role, I only had my basic paediatric nurse qualification. After starting in post I undertook a distance learning diploma in epilepsy care. Since then I have attended paediatric epilepsy training (PET) 1, 2 and 3 training days. I have also undertaken my independent non-medical prescribing training. Question: Based on your experience, what are the most significant advances in your field during the last decade? What issues remain to be solved by future research in your field? Answer: I would say that advances in our understanding of genetics have been the most significant, along with the drive for epilepsy specialists, both nursing and medical. In the past, epilepsy was not considered a specialty and was looked after by all paediatricians. It is recognised now that it is an incredibly complicated group of conditions which deserves specialist management. We are still at the beginning of our understanding of the way genetics influences epilepsy; we have a long way to go. I think over the next 20 years we will have progressed beyond our expectations. Question: And my last question. You are considered one of the most successful and well-known paediatric epilepsy nursing specialists in the UK. Should the role of paediatric epilepsy nursing specialists be further promoted in the UK, and how? What is your advice to your paediatric colleagues in countries where your specialty has not been developed yet? Answer: Although I am now one of the longest-serving children's epilepsy nurses, I don't feel I am particularly well known. Many regions in the UK now have paediatric epilepsy nurses. Parent power is the most powerful way to persuade local health authorities to provide funding for specialist nurses.
Most paediatricians recognise their value, but the funding is often difficult to secure. NICE guidelines recommend that all children with epilepsy have a specialist nurse (5). The Epilepsy12 audit (15) may well be a useful tool to help those units that are struggling to obtain funding.

Question: Thank you for your very interesting answers. We hope that this interview will be really useful for junior paediatric trainees as well as for your paediatric nursing colleagues, especially in countries where paediatric epilepsy nursing has not been developed yet. Although subspecialisation in medicine can create more problems than it tries to solve, paediatric epilepsy nursing is a good paradigm of how significant it can be in paediatric clinical practice. And without any doubt your pioneering contribution to paediatric neurology nursing is an excellent model for our international paediatric medical and nursing community. We look forward to your participation in one of the forthcoming workshops organised by the IPV.
Development of Landscape Architecture through Geo-eco-tourism in Tropical Karst Area to Avoid Extractive Cement Industry for Dignified and Sustainable Environment and Life

Karst areas in Indonesia amount to 154,000 km2 and are a potential target for extractive cement and wall-paint industries. Exploitation of karst has caused serious problems for the environment, health and social culture of the local community. Yet karst regions, as natural and cultural world heritage, also offer environmental services such as water resources, carbon sinks, biodiversity, unique landscapes, natural caves, natural attractions, archaeological sites and mystic areas. Landscape architectural management within the concept of the blue revolution, through the empowerment of land resources (soil, water, minerals) and biological resources (plants, animals, humans), adds value not only economically but also to a dignified and sustainable environment and life through health, environmental, social, cultural, technological and management aspects. Geo-eco-tourism offers efficiency of investment, increased creative innovation, increased funding, job creation, social capital development and stimulation of socio-entrepreneurship in the community. Community-based geo-eco-tourism in Gunung Kidul, Yogyakarta, has grown rapidly lately since the local government banned the exploitation of karst. Landscape architecture at the caves, white-sand beaches and cliffs of karst areas, which are beautiful and artistic and feature the special, rare natural architecture of stalactites and stalagmites, has become a phenomenal new object of interest for geo-eco-tourism. Many hidden natural sites that had been deserted and eerie can now be visited by many local and foreign tourists. Landscape architectural management on hilltops, with their wide views of the universe, fresh air, sunsets and sunrises, and seas of clouds, offers sights that are rare for modern communities. Local cultural attractions, local culinary offerings and home stays with local communities are an added attraction, but the infrastructure and human resources should be developed. Traveller photographs that spread rapidly through social and mass media have become a great and effective form of promotion. With geo-eco-tourism, people can empower natural resources to achieve harmony among economic, environmental and socio-cultural aspects without destroying them. Introduction Indonesia is located on the equator and represents a string of equatorial emeralds with a huge potential of natural resources [1]. Overexploitation of natural resources deposited deep in the earth by open mining causes serious degradation of living things and the environment [2]-[5]. The karst area in Indonesia covers a huge natural landscape of about 154,000 km2. The karst area on Java Island is 11,000 km2, and it has become the target of extractive industries such as cement factories [6]. It has vital economic, environmental, social and cultural functions because it provides calcium carbonate, natural water reservoirs, climate change mitigation, and habitat for swallows, bats and other flora and fauna, as well as for human beings. Therefore, it is necessary to rearrange the green karst area by putting the emphasis on the harmonization of all of the existing aspects. Gunung Sewu, which was officially announced as a UNESCO Global Geopark in September 2015, stretches across several regencies (Gunungkidul, Wonogiri, and Pacitan) and provinces (Yogyakarta, Central Java, and East Java) [6].
The Gunung Sewu Geopark is a classic tropical karst landscape in the south central part of Java Island that is dominated by limestone and is well known in the world. Tectonic activity still occurs in the region because it is situated in front of an active subduction zone between the Indian Ocean, Australian and Eurasian plates. Active uplifting has been taking place for 1.8 million years and results in the emergence of river terraces at the Sadeng dry valley and also coastal terraces along the southern coast of the Global Geopark [6], [7]. Approximately 805,000 people inhabit the area, and the economy of the local people is driven by the agriculture and service sectors [6]. In addition to its aesthetic and recreational values, the area is also rich in biodiversity, archaeology, history and culture. It is necessary to develop the Gunung Sewu Geopark in the effort of venerating earth legacy for the purpose of the prosperity of the local people. A geopark represents a geographic area in which geological legacy sites are part of a concept of conservation, education and sustainable development that puts three elements in harmony: geodiversity, biodiversity and cultural diversity [7]-[9]. The synergy among this natural diversity should be managed in an integrated and sustainable manner by empowering local people in a sustainable development that adds economic, environmental, social and cultural value [9]. Materials and methods The study uses a descriptive method and a qualitative approach with a naturalistic paradigm. Data are collected using in-depth interviews, observation and documentation. The objective of the triangulation data collection technique is to examine the validity of the data using other objects in comparing the results of the interviews about the object of the study. The data are analyzed inductively and qualitatively, and the results of the study put more emphasis on meaning than on generalization. The concept of the integrated bio-cycles system is implemented through holistic and sustainable landscape ecological management in the development of the geopark, based on education for sustainable development. New paradigm in karst management The massive transformation necessary in the management of karst is to change an extractive economy that causes environmental degradation into productive conservation activity that improves the prosperity of the local people in a holistic and comprehensive development. The karst area should be managed in a harmonious, holistic, integrated and sustainable manner for quality (i) biomass production, (ii) living environment, (iii) biological habitat, (iv) infrastructure, (v) mineral resources, and (vi) aesthetics and culture [3], [5]. Each of the elements of the landscape should not compete for its own sectoral interest; rather, they should be in a mutual and harmonious supporting relation. The output and the outcome of the system are given more emphasis than the output of each constituent element [1], [2], [3], [5]. The management program of the karst area should be implemented on the basis of three main pillars of personality, community and institutional empowerment. It is expected to foster empathy, care, multidisciplinary cooperation, personality, contribution to local/national competitiveness, and a learning community/society.
The empowerment program should also be implemented through co-creation and co-finance, and within a sustainable and flexible ABCG (academician, business, community, government) network [4]. The empowerment program of 6M (men, money, material, machine, method, and management) should be implemented in a synergetic and optimal manner so that all of the existing stakeholders have the ability, the willingness, the opportunity and the authority to really contribute and to gain optimal merit [5]. The Gunung Sewu geopark has been successful in improving the prosperity of the local people, though further effort is still necessary to gain maximal results. The lack of cooperation among the stakeholders and the lack of regional zoning are the main obstacles in the development of the area. It is necessary to divide the geopark into three zones: a main zone, a supporting zone, and a service delivery zone [9], [10]. The main zone (Blue) is managed by putting the emphasis on the development of the core tourist attraction of geo-tourist objects of flora and fauna. The supporting zone (Green) is managed by putting the emphasis on the construction and the maintenance of supporting facilities that prevent the tourists from doing any harm to nature and its beauty and that set the threshold of the carrying capacity of the environment. The service delivery zone (Yellow) consists of the areas with the facilities necessary for the tourists, which are developed by considering the requirements of the geo-tourism. It is necessary to formulate different strategies in developing the geopark area that are specific for each location, considering that there are many emerging tourist destinations. A people participation-based development strategy for the geo-tourism area should include economic, social, customary and cultural, environmental and political aspects. The managers of the geo-tourism may organize themselves in an association that enables them to improve their managerial capability and their capability in delivering tourist services, in addition to the development of integrated facilities in the geopark area. Gunung Sewu contributed only about IDR 800 million to local original income from tourism in 2011. However, a year after the official announcement of the UNESCO Global Geopark (UGG), its contribution increased to IDR 22.5 billion [8]. The Gunung Sewu Geopark is undergoing a tourism development phase that involves local people, and even the managers of some geo-tourism objects that have reached overcapacity are beginning to explore other areas with huge potential for geo-tourism objects, in the hope that they could be as successful as the prior locations [7], [8]. Therefore, it is necessary to accelerate the consolidation phase to open access for external investors while keeping the strong roots of the local potentials, which must be managed in an integrated and sustainable manner. According to the United Nations [10], the development of sustainable nature tourism should follow a number of established principles. The development of the cultural landscape represents the reflection of the relationship of human beings and their culture with the natural environment in a wide and integrated temporal and spatial unity. The natural environment may include mountains, forests, deserts and rivers, while the culture may include the outcomes of reason, emotion and intention, and human works such as traditions, beliefs, ways of life, and so on.
The results of the engineering of the physical landscape by human beings include, among others, settlements, roads, houses, rice fields and non-irrigated land that are formed on the basis of geomorphological conditions and their ecological values. The Gunung Sewu Geopark in Indonesia has many outstanding cultural landscape heritages with strong historical values, heirloom resources, typical geographical conditions, a natural system and a biogeophysical transformation process that continuously occurs. Considering these unique principles and the locality, Indonesian tourism should be based on the national philosophy of life, which is the concept of harmonious and balanced life, meaning that there must be a well-balanced relationship between humans and God and also between humans and the natural environment. The concept teaches us to uphold noble religious values and to actualize those values, to respect humanity, tolerance, equality, togetherness and brotherhood, and the importance of maintaining the natural environment. It also stimulates awareness of the balance between material and spiritual needs, and the balance between the use of resources and their conservation, in order to prevent greedy behavior. Integrated bio-cycles system The integrated bio-cycle system (IBS) manages land resources (including land, water, air, temperature, etc.), biological resources (flora, fauna, humans and other living things) and environmental resources (the relationships among living things, etc.) in an optimal manner [3], [4], [5]. The program pays special attention to increases in economic value, environmental conservation, social justice and culture in a synergetic and optimal manner, so that the regional unit can produce food, fodder, shelter, fertilizer, water, herbal medicine, tourism and so on. The IBS program in the karst area will be very useful in improving the quality of the environment and of life through the development of the living environment and livelihoods for the households of the local people. It is expected that the program would continuously increase the income of the people. The continuously increasing income will in turn improve the prosperity of the people, and finally poverty would decrease. It will help people lead independent lives and manage local natural resources in a wise manner. The IBS enables the local people to earn daily, monthly, annual and decadal incomes in the short, medium and long terms. It is useful for those with small, medium and big capital and very prospective in continuously establishing a sustainable economy, environment, and social life and culture that subsequently serve as leverage and a locomotive of the prosperity of the local people [4], [5]. It is necessary for the managers of the natural resources to manage them in an integrated and sustainable manner with the 9R principles (reuse, reduce, recycle, refill, replace, repair, replant, and reward) to make optimal use of the natural resources for all living things and our living environment. The integration of the upstream and downstream of the land should be established from input, process, output and outcome with 9A (agro-production, -technology, -business, -industry, -infrastructure, -marketing, -management, -structure and infrastructure, and -tourism) [3], [5].
The agro-production intends to produce multiple products in an entity of land, and the products represent the real "golds" that have been ignored and undervalued, including "brown gold" (timber), "yellow gold" (grains rich in carbohydrates necessary for human life), and "black gold" (organic fertilizer, compost, etc.), in addition to "blue gold" (biomass and biogas energies), "green gold" (green vegetables, fodder, environment, temperature, and humidity), "white gold" (milk, fish, food), "red gold" (animal protein of cattle meat, pork, chicken, duck, etc.), "transparent gold" (water for life and oxygen) and the "colorful gold" of herbal medicine that plays a very important role in maintaining human health and a dignified human life [5]. Geo-eco-tourism Caves in the karst area that have long been considered desolate and spooky places actually have a special architecture of stalactites and stalagmites that are rare, beautiful, artistic and phenomenal, formed naturally by drops of water and calcareous solution. All of the caves may be managed as very interesting natural geo-eco-tourism objects that are very attractive for both domestic and international tourists. White-sand beaches and limestone ravines represent beautiful and virgin natural architecture with huge attraction for the tourists [5]. The tops of high-rising hills, with huge, beautiful and fresh natural sceneries, are very rare for modern people who are beginning to be bored with modernity. New spectacular and interesting icons with wooden viewing platforms, tree bridges, tree houses or other artistic and typical constructions, such as wooden canoes and bamboo rafts with artistic ornamentation of butterflies, birds or wild animals and gigantic sculptures as backgrounds for selfie photography, will provide the tourists with beautiful and rare places to take memorable pictures [5]. Even when the weather is foggy and cloudy, the tops of the hills still provide the tourists with amazingly beautiful sceneries that give them a sensation of flying somewhere above the clouds. Beautiful sunrise and sunset sceneries have become the most sought after, with the premium beauty of natural phenomena providing the tourists with the most beautiful moments to take photographs. The selfie photographs of the tourists, rapidly distributed in social media and becoming viral, are a special attraction for people to visit the emerging natural tourist objects. Selfie photographs in geo-eco-tourism have become a must in the present lifestyle; people feel a very strong eagerness to experience nature and take their own selfie photographs, and they consider the experience as proof that they have conquered nature, with no need to cause any harm to it and no need to unearth anything from it. The dramatic increase in the number of tourists in the karst area at weekends in Gunung Kidul gives the local people many blessings that have a positive impact on the economy of the people [5]. However, it seems that the infrastructure and the human resources have not been ready to meet the increasing demand of the tourists for the tourist objects. There is not yet a sufficient network of wide and smoothly paved roads to the tourist objects, and traffic jams still represent a serious problem, in addition to the lack of parking areas and good lodging houses with good facilities such as clean and decent bathrooms.
Additionally, the local people seem to react to the increasing demand for the tourist objects in the wrong manner, by drastically increasing the rental rates of the basic facilities for the tourists. The more demand for the tourist objects, the higher they set the rental rates and the prices of the items necessary for the tourists during their visits. Consequently, many tourists get disappointed because of the crowded situation during certain visiting seasons, despite their strong eagerness to experience it themselves [5]. Even seriously damaged ex-limestone mining areas may be managed for geo-eco-tourism by establishing buildings with artistic carvings, such as the breccia ravine in Yogyakarta and Garuda Wisnu Kencana (GWK) in Bali, which have become natural tourist attractions. Geo-eco-tourism provides humans with the chance to enjoy nature without any need to cause harm to it. The attraction of local cultural events, local culinary offerings and home stays with the families of the local people plays an important role in the implementation of the concept of geo-eco-tourism. It is expected that integrated land management based on the local people and environment would be able to establish a harmonious relationship among economic value, environment and social aspects. The Blue Earth revolution, a revolution in the paradigm of karst area management, may serve as a national and international reference in improving the quality of the environment and the prosperity of the local people so that they may lead dignified and sustainable lives [5].
MS4A2-rs573790 Is Associated With Aspirin-Exacerbated Respiratory Disease: Replicative Study Using a Candidate Gene Strategy

Aspirin exacerbated respiratory disease (AERD) is a set of diseases of the unified airway, and its physiopathology is related to disruption of the metabolism of arachidonic acid (AA). Genetic association studies in AERD have explored single nucleotide polymorphisms (SNPs) in several genes related to many mechanisms (AA metabolism, inflammation, drug metabolism, etc.), but most lack validation stages in second populations. Our aim was to evaluate whether SNPs reported to contribute to susceptibility in other populations are associated with AERD in Mexican Mestizo patients. We developed a replicative study in two stages. In the first, 381 SNPs selected by fine mapping of associated genes (previously reported in the literature) were integrated into a microarray and tested in three groups (AERD, asthma and healthy controls, HC) using the GoldenGate array. Results associated with risk based on genetic models [comparing AERD vs. HC (comparison 1, C1), AERD vs. asthma (C2), and asthma vs. HC (C3)] were validated in the second stage in other population groups using qPCR. In the first stage, we identified 11 SNPs associated with risk in C1. The top SNPs were ACE-rs4309C (p = 0.0001) and MS4A2-rs573790C (p = 0.0002). In C2, we detected 14 SNPs, including ACE-rs4309C (p = 0.0001). In C3, we found MS4A2-rs573790C (p = 0.001). Using genetic models, in C1 MS4A2-rs573790 CC (p = 0.001) and ACE-rs4309 CC (p = 0.002) showed associations. In C2, ACE-rs4309 CC (p = 0.0001), and in C3, MS4A2-rs573790 CC (p = 0.001) were also associated with risk. In the second stage, only MS4A2-rs573790 CC reached significance in C1 and C3 (p = 0.008 and p = 0.03). We concluded that rs573790 in the MS4A2 gene is the only SNP that supports an association with AERD in Mexican Mestizo patients in both stages of the study. INTRODUCTION Aspirin exacerbated respiratory disease (AERD) is an illness characterized by chronic rhinosinusitis with nasal polyps, asthma and hypersensitivity to non-steroidal anti-inflammatory drugs (NSAIDs) such as acetylsalicylic acid (ASA) (Lee and Stevenson, 2010). Its prevalence depends on the reference consulted, ranging from 7% using specific questionnaires to 21% when provocation tests are used (Jenkins et al., 2004; Rajan et al., 2015). The physiopathological mechanism is not yet understood. The principal hypothesis is the disruption of arachidonic acid (AA) metabolism by the pharmacologic action of ASA or NSAIDs: the blockade of cyclooxygenase (COX) type 2 shunts AA from the COX pathway to the lipoxygenase pathway, with a subsequent increase in the synthesis of leukotrienes (LTC4, LTD4 and LTE4), the immunological agents responsible for the histopathologic changes and the severity of the characteristic symptoms of AERD (Laidlaw and Boyce, 2013; Thompson et al., 2016). Recently, new mechanisms have been integrated, such as epithelial damage mediated by thymic stromal lymphopoietin with activation of the innate type 2 immune system (Buchheit et al., 2016; Laidlaw and Boyce, 2016) and the involvement of the IL1β-IL1 axis in macrophages and eosinophils, increasing pro-inflammatory effects (Machado-Carvalho et al., 2016).
AERD treatment consists of avoiding NSAIDs and controlling the pathologies that are integrated with it, together with nasal and inhaled steroids plus an antagonist of leukotriene receptors (montelukast), including surgery for nasal polyps and desensitization with ASA for specific conditions (i.e., asthma control, recurrent polyps, and ASA for cardiovascular prevention) (Fokkens et al., 2012; GINA, 2016). At first, genetic association studies in AERD were performed on genes related to the metabolism of AA, first through direct association with a risk allele, and then with genetic models (co-dominant, recessive and dominant) and clinical markers (methacholine or ASA hyperbronchial reactivity and eosinophilia). Additionally, this methodological strategy was used in studies of other types of genes implicated in inflammation, tissue damage, intracellular signaling, drug metabolism and antigen presentation. These investigations have been performed primarily in Korean populations (Pavón-Romero et al., 2017). Recently, new methods have evaluated the whole genome with techniques such as GWAS (genome-wide association studies) to identify new candidate genes and/or SNPs (single nucleotide polymorphisms) for screening populations susceptible to this entity, or predictive markers of therapeutic efficacy (Kim et al., 2014). It is unknown whether this genetic background applies to other populations, such as Latinos, specifically the Mexican Mestizo. Study Design We developed a replicative study in two stages. In the first stage, we evaluated SNPs selected by fine mapping of genes positively associated with AERD in three groups (AERD, asthma and healthy controls, HC) using the GoldenGate array (Illumina, Inc., San Diego, CA, USA), and only the positive results were validated in the second stage in another population of subjects meeting the same inclusion criteria, using real-time PCR (Figure 1). Subjects The first stage of the study included 478 subjects in three groups: 120 patients with AERD, 179 with asthma and 179 healthy controls, enrolled in asthma screening campaigns at the Immunogenetic & Allergy Department of the Instituto Nacional de Enfermedades Respiratorias Ismael Cosio Villegas (INER) in Mexico City. All subjects were Mexican Mestizo, defined as being born in Mexico, having Mexican ancestry (at least 2 previous generations) and not belonging to any particular ethnic group. AERD was defined as the presence of nasal polyps or an antecedent of polyp surgery, with intolerance to NSAIDs or ASA (nasal challenge with lysine-aspirin or an antecedent of two severe reactions after the intake of NSAIDs or ASA, i.e., bronchospasm, documented in medical records), plus asthma. Asthma was established as persistent typical symptoms (shortness of breath, wheezing, chest tightness, and cough) plus a ≥12% or 200 ml increase in forced expiratory volume in the first second (FEV1) on post-bronchodilator spirometry (MasterScreen, Jaeger, Germany); a computational sketch of this reversibility criterion follows below. If they had no clinical symptoms or positive tests, the subjects were classified as HC. Allergy sensitization was evaluated with a skin prick test comprising 40 allergens (Alk-Abelló; Massachusetts, USA), and the levels of total IgE (Architect i2000, Roche, Germany) and eosinophil counts in the blood were measured by hematic cytometry (Beckman Coulter LH750, USA). The second stage included 104 patients with AERD, 105 asthma patients, and 132 HC unrelated to the first stage, classified with the same criteria. All subjects were residents of the urban metropolitan area of Mexico City.
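As an illustrative aid (not part of the original protocol), the bronchodilator reversibility rule described above can be expressed as a small function; the FEV1 values in the example are hypothetical, not study data.

```python
# Minimal sketch of the reversibility criterion described above:
# asthma requires a >=12% or >=200 ml increase in FEV1 after bronchodilator.
# Example values are hypothetical, not study data.

def reversible(fev1_pre_ml: float, fev1_post_ml: float) -> bool:
    """Return True if post-bronchodilator FEV1 rises by >=200 ml or >=12%."""
    delta = fev1_post_ml - fev1_pre_ml
    return delta >= 200 or (fev1_pre_ml > 0 and delta / fev1_pre_ml >= 0.12)

print(reversible(2000, 2300))  # True: +300 ml (+15%)
print(reversible(2000, 2100))  # False: +100 ml (+5%)
```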
DNA Isolation All subjects donated eight milliliters of peripheral blood by venipuncture, collected in a tube with EDTA as anticoagulant. Subsequent DNA extraction was performed using a BDtract DNA Isolation Kit (Maxim Biotech; San Francisco, California, USA). The DNA was quantified by ultraviolet absorption at a 260-nm wavelength using a Nanodrop instrument (Thermo Scientific; DE, USA). All samples were adjusted to 50 ng/µl for subsequent genotyping. SNP Selection and GoldenGate Genotyping An Illumina 384-SNP custom GoldenGate array was employed (Illumina Inc.; San Diego, CA, USA). The SNPs were selected according to a search of the US National Library of Medicine with the keywords SNP and AERD, ASA hypersensitivity and SNP, between 1997 and 2014. The array included 384 SNPs from 53 candidate genomic regions spanning 19 chromosomes, of which 63 SNPs were associated with AERD, 299 were tag SNPs, and 22 SNPs were ancestry informative markers (AIMs), which must show a difference of 30% with respect to the Caucasian (CEU) group to be considered AIMs. The selection criteria for the SNPs were based on a minor allele frequency (MAF) >10% in the Mexican Mestizo population (data obtained from the Mexican Genome Diversity Project, MGDP) and Hardy-Weinberg equilibrium p > 0.05 (Figure 2); a computational sketch of these filters appears at the end of this section. Genotyping and Quality Control Genotyping was conducted using the protocol designed by Illumina for the GoldenGate platform (Illumina, Inc.; San Diego, CA, USA) using a Tecan robotic automatic liquid dispenser (Tecan Trading AG, Switzerland), which operates under the Illumina protocol. The microarrays were read on the BeadArray Reader scanner (Illumina, Inc.; San Diego, CA, USA). Genotype acquisition and generation of documentation (ped and map files) were conducted using the GenomeStudio 2011 v1.0 software (Illumina, Inc., San Diego, CA, USA). Subjects who did not comply with the call rate criterion (>95%) were excluded. TaqMan Allelic Discrimination Genotyping in the second stage was performed using TaqMan allelic discrimination real-time PCR with predesigned probes in a 7300 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). This stage was performed using independent samples with the same characteristics as those used in the first stage. Genotype assignment was performed based on allelic discrimination and confirmed by absolute quantitation. In addition, three non-template controls (contamination controls) were included for each genotyping plate, and 1% of the samples included in the study were genotyped in duplicate as a control for allele assignment. Data interpretation was conducted using the Sequence Detection Software (SDS v. 1.4, Applied Biosystems). VIC and FAM fluorophores were used for alleles A and B, respectively. In silico Analysis After validating the results in the second stage, we explored the theoretical role of the main SNPs in different biochemical processes, such as alterations in splicing, using splice-site analysis with NetGene2 (http://www.cbs.dtu.dk/services/NetGene2). This program can be used to assess the presence of new binding sites for transcription factors and/or the creation or disruption of alternative splicing sites in the gene. To predict potential microRNAs (miRNAs) including the associated SNP and their potential target genes, the miRDB program (Wong and Wang, 2015) was used (http://www.mirbase.org/).
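The MAF and Hardy-Weinberg filters named in the SNP selection above can be illustrated with a short script. This is a minimal sketch, not the authors' pipeline; the genotype counts are hypothetical, and a simple chi-square test stands in for whatever exact test the MGDP data were screened with.

```python
# Minimal sketch of the SNP selection filters described above:
# keep a SNP if MAF > 10% and HWE chi-square p > 0.05.
# Genotype counts below are hypothetical placeholders.
from scipy.stats import chi2

def maf_and_hwe(n_aa: int, n_ab: int, n_bb: int):
    """Return (MAF, HWE p-value) from genotype counts AA/AB/BB."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)              # frequency of allele A
    maf = min(p, 1 - p)
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
    return maf, chi2.sf(stat, df=1)              # 1 degree of freedom

maf, p_hwe = maf_and_hwe(60, 80, 40)             # hypothetical SNP
print(f"MAF={maf:.3f}, HWE p={p_hwe:.3f}, keep={maf > 0.10 and p_hwe > 0.05}")
```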
Statistical Analysis In both stages, we analyzed clinical quantitative variables with non-parametric statistics using SPSS software v.21 (SPSS software, IBM, New York, USA), and frequency analysis was performed with Epi-Info software v.7.0. Generation of the fixation index (Fst) was performed using EIGENSOFT v4.2 software (Shringarpure and Xing, 2014). For the genetic association study, the software PLINK v1.07 (Purcell et al., 2007) was used: a logistic regression model (1 degree of freedom) was created that included covariables such as age and sex. The genetic analysis of minor allele frequency was performed with PLINK software and was subsequently reanalyzed according to genetic models; the co-dominant and recessive models were evaluated using the Epidat v.3.1 and Epi-Info v.7.2 software, respectively. These last two methods were also applied in the analysis of the second stage. In all analyses, we considered significance at p < 0.05. Ethics This study was reviewed and approved by the Bioethics and Science Committee in Research, with protocol number B14-12, and the Institutional Review Board at the Instituto Nacional de Enfermedades Respiratorias Ismael Cosio Villegas (INER). The participants were invited to join the study and were informed about its objective. They then signed an informed consent letter and were provided with an assurance-of-personal-data document. Each participant was assigned an alphanumeric key for the purpose of assuring confidentiality. Demographic and Clinical Data In both stages, we conducted three comparisons: AERD vs. HC, AERD vs. asthma, and asthma vs. HC. In the first stage, we enrolled 478 subjects. The control group was younger than the patients (AERD or asthma, p < 0.01), and the female gender prevailed in the three groups (∼60%). Eosinophil cell counts were higher in the AERD group than in the asthma and HC groups (p < 0.001). Serum total IgE had higher values in asthma than in AERD patients and controls, while positive allergy sensitivity was very similar in the three groups. A reversibility test at enrollment was positive in the asthma group, but not in the AERD group, and it was negative in controls. A decrease in total nasal flow after nasal lysine-aspirin challenge occurred only in the AERD group compared with the asthma and healthy control groups (p < 0.0001) (Table 1). Ancestry The three groups had a similar proportion of genetic ancestry according to the two principal population groups (CEU, Caucasian, and AME, Amerindian) that make up the Mexican Mestizo population. AERD had 52% AME and 48% CEU; asthma had 56% AME and 44% CEU; and HC had 41% CEU and 58% AME. The F ST test did not identify any significant difference among the three groups, but there was a difference when the groups were compared with CEU and AME ancestry markers (p = 0.005) (Figure 3 and Supplementary Table 1). Genetic Models All SNPs associated with risk were evaluated using the co-dominant (CM) and recessive (RM) models. We show the positive results associated with risk in the three different comparisons. Demographic and Clinical Data In the second stage, 343 subjects were included: 104 patients with AERD, 107 with asthma, and 132 healthy controls. Demographic data were similar to the first stage. The HC group was younger than the AERD and asthma groups (p < 0.05), and the prevalence of the female gender was approximately 70% in the three groups. Counts of serum eosinophils were greater in AERD patients vs. the asthma and HC groups (p = 0.03, p = 0.001).
Total IgE levels were higher in the asthma group compared with AERD and controls (p = 0.001), and allergic sensitivity was a principal characteristic of asthma patients vs. the other two groups (p < 0.01). For the lung function test, the reversibility was statistically significant in the asthma group only (Table 1). Allelic and Genetic Models In the second stage, we investigated only those SNPs associated with risk in the first stage. For the SNP rs573790 (MS4A2), the minor allele is C, and its proportion was 33.17% in AERD, 32.24% in asthma and 32.57% in healthy controls (p > 0.05). In the co-dominant model, we found statistical significance when evaluating the CC genotype vs. (TT or CT) in AERD vs. HC, p = 0.008, OR = 2.27, and this was similar in the asthma group vs. HC, p = 0.03, OR = 1.93 (Figure 4). Statistical significance, as well as the risk association, is maintained only in the comparison of AERD vs. HC in the recessive model [CC vs. (TT+CT)], p = 0.03, OR = 3, but not in asthma patients vs. controls (p = 0.09). For TBXAS1 rs757760, MS4A2 rs502581, IL10 rs3024498, and ACE (rs4293 and rs4309), we did not detect any statistical association comparing AERD vs. asthma/HC or asthma vs. HC in the allelic, recessive and co-dominant models (p > 0.05) (Table 6). There was no relationship/association with the splicing process or microRNA generation based on the respective analysis software. DISCUSSION In the present study, we analyzed 53 candidate genomic regions, spanning 19 chromosomes, previously associated with AERD in the literature between 1997 and 2014, using a tag SNP strategy in Mexican Mestizo patients with this disease. The research was developed in two stages. First, we identified six SNPs in four genes by GoldenGate analysis, followed by qPCR validation in other independent groups. MS4A2 rs573790 was the only SNP that supported the association with risk in the two stages. AERD is considered a particular phenotype of asthma and chronic rhinosinusitis (CRS). It occurs in 15% of patients with severe asthma (Kennedy et al., 2016) and 16% of those with CRS (Stevens et al., 2017). This low prevalence could result in underdiagnosed patients with AERD worldwide. All our patients met the three characteristics of AERD, not only asthma with NSAID intolerance. We had a predominance of the female gender, similar to Europeans (69%), at an average age of 40 years (Szczeklik et al., 2000; Bavbek et al., 2012). Allergy sensitivity was approximately 50% in AERD patients. Other reports state that this characteristic can be as high as 85% (Stevens et al., 2017). Most patients had acceptable lung function, similar to other recent findings (Bochenek et al., 2015). To demonstrate aspirin hypersensitivity in the first stage, we included patients with a positive challenge. Because 85% of patients with an antecedent of lung reaction have a positive challenge (Nizankowska-Mogilnicka et al., 2007), we only enrolled patients who had had a reaction to NSAIDs or ASA treated in an emergency room in the last 12 months. Using new methods, such as GWAS technology, new candidate genes in AERD have been identified, such as HLA-DPB1. There are few studies of candidate genes in AERD, and most were conducted in Asian populations. However, some findings are conflicting, and the majority of reported associations lack replication (Dahlin and Weiss, 2016). One study in a different population was conducted among Spanish people.
They re-analyzed genes in the AA pathway, and new SNPs of ALOX15 and PTGS-1 were found to be associated with AERD risk in comparison with asthma and healthy subjects (Ayuso et al., 2015). The need to validate results found in other populations, for use as genetic markers capable of predicting this disease, has led to multistage studies in populations with different genetic backgrounds (Boezen, 2009). Thus, we selected all genes previously reported to be associated with AERD (n = 384 SNPs) and assessed their genotypes in an Illumina 384-SNP custom GoldenGate array. The SNP rs573790 in the MS4A2 gene was at the top of the variants associated with AERD in both stages of our study. MS4A is a large gene family, clustered on chromosome 11q12. MS4A2 encodes the β subunit of the high-affinity immunoglobulin E receptor (FcεRIβ), considered a maturation marker for eosinophils and mast cells (MC). Studies have reported that the MS4A2 gene is expressed as multiple splice variants that are predicted to encode different protein isoforms, and some polymorphisms (I181L, V183L, and E237G) were associated with atopy and other diseases (Ma et al., 2015; Eon Kuek et al., 2016). Eosinophils and MCs play a role in the pathogenesis of AERD. Their mediators, eosinophil cationic protein and major basic protein, are linked to the exacerbation and pathogenesis of AERD (Rodríguez-Jiménez et al., 2018). Other studies have reported that activated MCs are more numerous in CRS with NP in AERD compared with those from ATA (Varga et al., 1999) and contribute to the production of leukotrienes (Choi et al., 2013). The role of MS4A2 in AERD is not fully understood. Its protein, FcεRIβ, is an essential component of the heterotetramer that comprises the IgE receptor FcεRIα, FcεRIβ and FcεRIγ (αβγ2) in eosinophils and MC (Potaczek and Kabesch, 2012). It participates in intracellular signaling, amplifying FcεRIγ-mediated signaling. In mice, FcεRIβ also amplifies FcεRI signaling by promoting the assembly, stabilization, and trafficking of the receptor complex to the cell surface (Cruse et al., 2013). In terms of genetic epidemiology, a meta-analysis of MS4A2 polymorphisms and their association with asthma in Asian subjects did not find any association of E237G with the disease or its atopic phenotype; however, −109C/T in asthma has a significantly decreased risk of disease based on the allele (C vs. T), whereas there is no evidence of association in genetic models (Yao et al., 2015). Kim and coworkers found that the −109T>C polymorphism (TT vs. TC+CC) is associated with risk in patients with AERD (with Staphylococcus B enterotoxin) vs. ATA and the control group in a Korean population (Kim et al., 2006). MS4A2 was evaluated in Latino asthma patients (Puerto Ricans and Mexicans, both from their native countries and residents in the USA) as part of a replicative genetic study on asthma. Galanter and collaborators showed that this gene was associated with asthma in Mexicans, but not in Puerto Rican patients (Galanter et al., 2011). The SNP rs573790 is localized in the 5′UTR region of the MS4A2 gene. This type of polymorphism, localized in a noncoding region, is usually related to alteration in the function or structure of RNA (Sadee, 2009); however, in our study, we did not detect this using software tools. The C allele (minor allele) is increased in AERD cases, and this frequency reaches 43% in Mexican residents in Los Angeles, USA (Auton et al., 2015). Interestingly, in our Mexican population this frequency is lower, at approximately 32% in controls.
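To make the genetic-model comparisons reported above concrete, the following is a minimal sketch of a recessive-model test, CC vs. (CT + TT), of the kind run for rs573790. It is not the Epidat/Epi-Info output, and the genotype counts are hypothetical.

```python
# Minimal sketch of a recessive-model association test as described above:
# CC ("exposed") vs. CT + TT, cases vs. controls. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

def recessive_test(cases: dict, controls: dict):
    """cases/controls map genotypes to counts, e.g. {'CC': 18, 'CT': 44, 'TT': 42}."""
    a, b = cases['CC'], cases['CT'] + cases['TT']
    c, d = controls['CC'], controls['CT'] + controls['TT']
    odds_ratio = (a * d) / (b * c)
    _, p_value, _, _ = chi2_contingency(np.array([[a, b], [c, d]]))
    return odds_ratio, p_value

odds_ratio, p = recessive_test({'CC': 18, 'CT': 44, 'TT': 42},
                               {'CC': 11, 'CT': 56, 'TT': 65})
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```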
In addition, our data showed that this SNP deviates from Hardy-Weinberg equilibrium, which may be due to the young genetic structure (admixture of Caucasian and Amerindian) of the Mexican Mestizo population (Pérez-Rubio et al., 2016). This is the first time that rs573790 (CC genotype) has been associated with a human disease of any type (Zerbino et al., 2017). The angiotensin-converting enzyme (ACE), a key enzyme of the renin-angiotensin system, is mainly expressed in the lung and plays an important role in the pathogenesis of asthma (Lee et al., 2000). Its function consists of inactivating a wide range of inflammatory peptides, such as kinins and substance P (Christiansen et al., 1987). Polymorphisms in the ACE gene have been implicated in the risk of asthma and AERD (Kim et al., 2008; Liu et al., 2013). The SNP rs4309 in ACE was associated with risk in the first stage of our study, but not in the second. An analysis of the surrounding regions shows that this synonymous polymorphism (C) lies within a region rich in CpG islands (Li and Dahiya, 2002). This type of DNA region is strong, resistant to denaturation, and makes it difficult to hybridize primers in conventional qPCR; therefore, this may not be the ideal technique for validating this finding (Flores-Juárez et al., 2016). Genetic studies in AERD have explored genes, single nucleotide polymorphisms, variable number tandem repeats, HLA alleles and exomes using diverse techniques; however, most of them were developed in Asian and Caucasian populations. It is necessary to validate their positive results in a second population, particularly in those with different genetic backgrounds, to strengthen the role of genetic susceptibility in AERD physiopathology and to provide a framework for personalized medicine. Our current research presents for the first time a replicative two-stage genetic association study in AERD in a population including Amerindian and Caucasian ancestral contributions. We think that this approach strengthens our main findings. In our study, rs573790 in MS4A2 was the only polymorphism associated with AERD risk. Additional studies spanning the MS4A2 gene region, employing sequencing techniques, could help to identify other SNPs related to AERD pathogenesis. AUTHOR CONTRIBUTIONS GFP-R enrolled patients, review of literature, DNA isolation, development of molecular biology techniques, bioinformatics analysis, manuscript redaction. GP-R bioinformatics and statistical analysis. EA-O development of molecular biology techniques, bioinformatics analysis. FR-J, EB-O, NA-F, KEX-R, EH-J, and BAF-G enrolled patients. AEC enrolled patients, development of molecular biology techniques. LMT development of molecular biology techniques, manuscript redaction. RF-V development of molecular biology techniques, bioinformatics analysis, manuscript redaction.
A model of price discrimination under loss aversion and state-contingent reference points

We study optimal price discrimination when a monopolist faces a continuum of consumers with reference-dependent preferences. A consumer's valuation for product quality consists of an intrinsic valuation affected by a private state signal (type) and a gain-loss valuation that depends on deviations of purchased quality from a reference point. Following Kőszegi and Rabin (2006), we consider loss-averse buyers who evaluate gains and losses in terms of changes in the consumption valuation, but in our model each buyer evaluates consumption outcomes relative to his own state-contingent reference quality level. We capture the process by which reference qualities are formed via a reference consumption plan, and use a generalization of the Mirrlees representation of the indirect utility to fully characterize optimal contracts for loss-averse consumers. We find that, depending on the reference plan, optimal price discrimination may exhibit (i) downward distortions beyond the standard downward distortions due to screening, (ii) efficiency gains relative to second-best contracts without loss aversion, and (iii) upward distortions above first-best quality levels without loss aversion. We consider ex ante and ex post consistent contracts in which quality offers by the firm coincide, in expectations or at every state realization, respectively, with the reference quality levels. We find the firm's unique preferred ex ante and ex post consistent contract menu and specify conditions under which, for the second case, it also constitutes the consumers' preferred menu. Introduction The purpose of this paper is to study monopoly price discrimination in situations where buyers care about comparisons between consumption outcomes and subjective beliefs about these outcomes, which act as reference points. In our model, there is a one-dimensional state parameter θ that affects demand for quality and expectations of future consumption. More precisely, a consumer's utility is determined by his intrinsic quality taste and by comparisons between the offered quality and a reference quality level, and both are affected by θ. The way we allow the state parameter to determine willingness to pay is standard (for instance, single crossing is satisfied). The interaction between θ and the reference point is captured by a reference consumption plan: after observing θ, a buyer anticipates a certain state-contingent reference quality level and experiences gains or losses according to whether his purchased quality exceeds or falls short of his reference point. Reference plans are assumed to be nondecreasing. Thus, a higher intrinsic valuation for quality is associated with a higher anticipated quality outcome. Following Tversky and Kahneman (1991), we assume that consumers are loss-averse. Price discrimination takes reference dependence into account and reflects the interaction between loss aversion and the traditional rent extraction and incentive compatibility trade-off. We are able to derive the optimal contract menu for any monotone reference plan. Our approach enables comparative statics analysis of the offered product line and profits arising from changes in the reference plan, for example, due to targeted advertising. In Section 3 we analyze the benchmark model under complete information. We show how loss aversion invites upward distortions in the offered quality relative to the loss-neutral case.
This occurs, in particular, when buyers enter the market with high reference quality levels: the first-best quality would fall short of the reference level, and the loss-averse consumer is willing to pay a premium to reduce the associated loss. The firm exploits this by increasing both quality and price until either these marginal gains are exhausted or the offered quality hits the reference level, shutting down any further gains. In particular, over a wide range of states, profit maximization implies matching reference qualities exactly. A similar logic drives the comparative statics results. If the monopolist is able to inflate consumers' reference levels, the effect magnifies upward distortions and increases profits. While an empirical demonstration of inefficiently high quality offers is challenging, our predictions could be used to indirectly test for loss aversion in the laboratory: all else equal, loss-averse consumers respond to higher reference consumption points differently than loss-neutral consumers. In Section 4 we turn to the incomplete information case. A contract menu is now feasible if and only if it satisfies the incentive compatibility and participation constraints. Two new effects emerge compared to the complete information benchmark. First, the marginal profitability of increased quality is reduced due to incentive issues familiar from traditional screening models. Higher quality levels are more attractive to a θ consumer with low willingness to pay, but also to higher type consumers to whom the monopolist was hoping to sell an even higher quality product. The increase in revenues from the θ consumer is offset by information rents ceded to these consumers. Reference dependence modifies this standard trade-off because, for any given quality and reference levels, the higher type consumer experiences a larger loss (or smaller gain) than the θ consumer. Thus, it is possible for the firm to increase profits by expanding its product line to both the high and low ends of the market in response to consumers' high expectations. This loss aversion effect can account for three-part tariffs and other complex contract schemes that have become increasingly popular among mobile phone operators, Internet providers, and other subscription services when, for instance, low-type consumers overestimate usage prior to choosing a contract. The second effect is a novel distortion with no counterpart in a loss-neutral screening model. Consider a monopolist contemplating increasing the quality q offered to a given θ consumer from just below his reference level to just above it. Such a change has a discrete effect on the attractiveness of q to consumers who have higher willingness to pay but whose reference levels are relatively similar, because they would switch from the loss to the gains domain. As a result, the firm incurs a discrete drop in profits. We quantify this lump-sum incentive cost and show its effects on the optimal contract design. It implies an additional downward distortion in quality levels for consumers who would otherwise be offered products that surpass their reference points, making the interaction between reference quality levels and offered qualities quite complex. The fact that the reference plan determines qualitative features of optimal contracts leads to the question of belief manipulation. This issue is explored in Section 5, where we impose ex ante and ex post consistency requirements on beliefs about future consumption outcomes.
A constant reference plan is ex ante consistent if it coincides, in expectations, with actual purchased quality levels. There are many ex ante consistent reference plans. We find that the firm has a unique preferred ex ante consistent contract menu that is generated by the largest ex ante consistent reference plan. A monotone reference plan is ex post consistent if it coincides pointwise (i.e., for every state realization) with actual quality consumption. As in the previous case, there are many possible ex post consistent reference plans. Here we also show that the firm has a unique preferred ex post consistent contract menu. Intuitively, a higher (ex ante) ex post consistent reference plan increases the net total willingness to pay, as it makes the outside option less desirable. In both cases, preferred consistent contracts exclude fewer consumers from the market and have quality levels distorted upward from second-best levels under loss neutrality. While these upward quality distortions improve allocative efficiency for low- and intermediate-type consumers, under ex post consistent contracts buyers with high state parameters end up purchasing overly sophisticated goods. In practice, firms can manipulate reference points through advertising and other marketing practices. Marketing efforts will be credible as long as they promote optimal consistent contracts. A higher ex ante or ex post consistent reference plan could improve or hurt consumer surplus; note that we are comparing how different consistent reference plans affect loss-averse consumers. This is because in either case a higher reference plan implies higher quality offers for (potentially more) active consumers, which translates into more information rents to buyers with positive consumption levels, but it also implies a lower value of the outside option, which increases the net willingness to pay of active consumers. (We thank an anonymous referee for pointing out this effect, which was previously omitted from our analysis.) We are able to specify conditions under which the positive information rents effect associated with a higher ex post consistent reference plan dominates the negative participation effect. In this case, the firm's preferred self-confirming contract menu is also the consumers' preferred contract menu. Section 6 offers some concluding remarks and a review of the related literature. To facilitate exposition, we have gathered all proofs in Section 7. Price discrimination under reference-dependent preferences Our model builds on Mussa and Rosen (1978) and Maskin and Riley (1984), but in our framework consumers derive utility from consumption and from comparisons between consumption and a state-contingent reference point. Following Tversky and Kahneman (1991), we consider loss-averse consumers. The firm A profit-maximizing monopolist produces a good with different characteristics captured by the parameter q ≥ 0. This can be interpreted as either a one-dimensional measure of quality (exclusive features in a luxury product line) or quantity (the amount of data offered by a mobile operator). Paying tribute to Mussa and Rosen (1978), we maintain the first interpretation. The cost of producing one unit of the good with quality q is c(q) ≥ 0. We assume that the cost function c(·) defined on R+ is (F1) increasing, with c(0) = 0, (F2) twice continuously differentiable, and (F3) strongly convex, i.e., there exists ε > 0 such that c″(q) ≥ ε for all q > 0. The firm's problem is to design an optimal menu of posted quality-price pairs for potential buyers with differentiated demands.
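For concreteness, a simple example (ours, not the paper's) of a cost function satisfying (F1)-(F3) is the quadratic below, which is strongly convex with ε = 1:

```latex
% Hypothetical example of a cost function satisfying (F1)-(F3):
% increasing with c(0)=0, twice continuously differentiable,
% and strongly convex with \epsilon = 1.
c(q) = \tfrac{1}{2}\, q^{2}, \qquad c'(q) = q \ge 0, \qquad
c''(q) = 1 \ge \epsilon = 1 \quad \text{for all } q > 0.
```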
Consumers There is a continuum of consumers with quasi-linear preferences and unit demands for the good offered by the firm. Preference heterogeneity depends on the private type parameter θ ∈ Θ = [θ_L, θ_H], where 0 ≤ θ_L < θ_H < +∞. The firm only knows the distribution F(·), with full support on Θ and positive density f(·). We assume that the inverse hazard rate h(·), defined by h(θ) = (1 − F(θ))/f(θ), is nonincreasing and continuously differentiable. Loss aversion We step outside standard monopoly pricing theory to consider buyers who exhibit reference-dependent preferences for the product attribute. Specifically, a θ consumer derives additional utility from comparing q to a type-contingent reference quality level r(θ) ≥ 0. A reference level may reflect (in)correct subjective expectations of future consumption, may be determined by past experiences, or by current aspirational considerations. At this stage, it is convenient to study a general reference formation process captured by the reference consumption plan r : Θ → R+. We assume that r(·) is (R1) increasing, (R2) continuous and piecewise continuously differentiable, and (R3) admits bounded left and right derivatives everywhere on Θ. Following Kőszegi and Rabin (2006), comparisons between consumption outcomes and reference points are evaluated in terms of the consumption valuation. Note that we depart from their work in assuming that each buyer assesses q relative to his own state-contingent reference level. The θ consumer has a gain-loss valuation given by μ × [m(q, θ) − m(r(θ), θ)], where μ = η if q > r(θ) and μ = ηλ if q ≤ r(θ). The parameter η > 0 is the weight attached to the gain-loss valuation, and λ ≥ 1 is the loss aversion coefficient. Loss neutrality (λ = 1) is treated as the baseline scenario. The total valuation for the θ consumer is then m(q, θ) + μ × [m(q, θ) − m(r(θ), θ)]. (One can accommodate Maskin and Riley's (1984) model by changing (C2) and (F3) to impose strong concavity on m(·, θ) and convexity on c(·).) A buyer's outside option consists of purchasing a substitute good of minimal quality in a secondary market. For simplicity, we let both the quality and the price of the substitute good be equal to zero, so that the θ consumer's reservation utility equals −ηλ m(r(θ), θ). Note this means buying the outside option feels like a loss. His net total valuation is then v(q, θ) = m(q, θ) + μ × [m(q, θ) − m(r(θ), θ)] + ηλ m(r(θ), θ). Presented with a contract to buy quality q at price p, a θ consumer chooses to do so as long as his net total utility v(q, θ) − p is nonnegative. This constitutes the (endogenous) participation constraint. Comment We provide the following interpretation of our framework. There is a mass of ex ante identical consumers interacting with the firm over a given time period. Prior to entering the market, consumers share a common reference plan based on (correct or incorrect) subjective beliefs about future consumption outcomes. Each consumer then receives a state signal that affects his willingness to pay and fixes his reference level according to the common reference plan. The firm only knows the distribution of signals. Thus, the firm designs a menu of individually rational and incentive compatible contracts {q(θ), p(θ)}_{θ∈Θ} to maximize expected profits. Once contracts are posted, each buyer evaluates the quality offer of any contract relative to his type-contingent reference level and makes purchasing decisions accordingly.
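To visualize the kink that loss aversion introduces at q = r(θ), here is a minimal numerical sketch of the net total valuation reconstructed above. The linear specification m(q, θ) = θq is our own illustrative assumption (a standard Mussa-Rosen form), not imposed by the paper.

```python
# Minimal sketch of the net total valuation v(q, theta) defined above,
# under the illustrative assumption m(q, theta) = theta * q.
# The kink at q = r(theta) comes from the gain-loss coefficient mu switching
# between eta (gains, q > r) and eta*lambda (losses, q <= r).

ETA, LAM = 0.5, 2.0          # gain-loss weight and loss aversion coefficient

def m(q: float, theta: float) -> float:
    return theta * q          # illustrative consumption valuation

def v(q: float, theta: float, r: float) -> float:
    mu = ETA if q > r else ETA * LAM
    return m(q, theta) + mu * (m(q, theta) - m(r, theta)) + ETA * LAM * m(r, theta)

theta, r = 1.0, 1.0
for q in (0.9, 0.999, 1.001, 1.1):
    print(f"q={q:5.3f}  v={v(q, theta, r):.4f}")
# The slope in q jumps from (1 + ETA*LAM) = 2.0 below r to (1 + ETA) = 1.5 above r.
```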
The consumption valuation, the gain-loss valuation, and the reference plan are common knowledge; in other words, the firm is fully aware of the consumers' behavioral bias. We take a partial equilibrium approach and ignore any budgetary restriction on consumers' behavior. Insofar as the total willingness to pay of loss-averse consumers is influenced by the reference plan, we focus on allocative efficiency alone when discussing the welfare implications of loss aversion, considering loss neutrality as the baseline scenario.

The solution of the firm's problem when λ > 1 is obtained by noticing that TS(q(θ), θ) coincides with S(q(θ), θ, ηλ) when q(θ) ≤ r(θ) and with a constant-shifted S(q(θ), θ, η) when q(θ) > r(θ). Since S(·, θ, η) has a strictly smaller slope than S(·, θ, ηλ) for all θ > θ_L, the total surplus function exhibits a kink at q = r(θ). The quality level that maximizes profits is determined by the location of the kink relative to the two maximizers q̂(θ, η) and q̂(θ, ηλ). Figure 1 provides an illustration. If r(θ) is below q̂(θ, η), profits are increasing at the kink point and the firm chooses q̂(θ, η) (Figure 1).

Proposition 1. For reference plan r(·) and λ > 1, the complete information optimal contract menu {q_fb(θ), p_fb(θ)}_{θ∈Θ} is given by

q_fb(θ) = q̂(θ, η) if r(θ) < q̂(θ, η); q_fb(θ) = r(θ) if q̂(θ, η) ≤ r(θ) ≤ q̂(θ, ηλ); q_fb(θ) = q̂(θ, ηλ) if r(θ) > q̂(θ, ηλ),

with p_fb(θ) = v(q_fb(θ), θ).

Proposition 1 shows the effects of loss aversion on price discrimination in the absence of screening issues. For a large reference quality level, the optimal quality is in the domain of losses and the firm exploits the consumer's loss aversion by increasing its offer from the classic efficient level to q̂(θ, ηλ). For a low reference level, the firm's optimal quality is in the domain of gains and therefore coincides with the loss-neutral case. The reference plan entirely determines the shape of first-best quality offers in the intermediate range. Since the reference consumption plan may in principle be very general, first-best contracts can take various shapes. In particular, a constant reference plan generates pooling in the intermediate range of the first-best contracts. To understand these results better, we spell out the comparative statics effects of a change in the reference level.

Proposition 2. The following statements hold under complete information for λ > 1. (i) Optimal quality offers are weakly greater than the loss-neutral efficient levels, and strictly greater when r(θ) > q̂(θ, η) and θ > θ_L. (ii) An increase in the reference level weakly increases the firm's profits. The effect is strict whenever q̂(θ, ηλ) > r(θ) and θ > θ_L.

The key observation behind Proposition 2 is that a change in the reference level affects how loss-averse consumers evaluate not only the contracted quality, but also the outside option. When r(θ) < q̂(θ, η) = q_fb(θ), the θ consumer compares his outside option, in the loss domain, with the firm's offer, in the gains domain. An increase in r(θ) has countervailing effects: it increases the loss associated with the outside option but reduces the gain associated with the contract. The net effect expands the relative attractiveness of the contract: the quality offer is unchanged, but the consumer's willingness to pay, and hence the optimal price, increases. As soon as the reference quality exceeds the loss-neutral efficient level, both the latter and the outside option are in the loss domain, and any further increase in the reference point reduces the value of both equally. However, since total surplus in the loss domain rises more steeply, there are larger surplus gains from quality.
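Under the illustrative primitives above, q̂(θ, μ) maximizes (1 + μ)m(q, θ) − c(q) and equals (1 + μ)(θ − θ_L), so Proposition 1 amounts to clipping the reference level into the interval [q̂(θ, η), q̂(θ, ηλ)]. A sketch, again using our assumed functional forms:

THL = 0.0
ETA, LAM = 0.5, 2.0

def q_hat(th, mu):
    # argmax of (1+mu)*m(q, th) - c(q) for m = (th-THL)*q and c = q^2/2
    return (1.0 + mu) * (th - THL)

def q_fb(th, r_th):
    # Proposition 1: first-best quality clips r(theta) into
    # [q_hat(theta, eta), q_hat(theta, eta*lambda)]
    lo, hi = q_hat(th, ETA), q_hat(th, ETA * LAM)
    return min(max(r_th, lo), hi)

for r in (0.5, 1.5, 3.0):     # low, intermediate, and high reference levels
    print(r, q_fb(0.8, r))    # -> 1.2, 1.5, 1.6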
The firm captures these surplus gains from quality by increasing quality up to the reference level, which raises profits. Once the reference quality exceeds q̂(θ, ηλ), all gains from loss aversion have been exhausted and the firm's offered quality and price are unaffected by further increases in reference levels. Note that none of these results holds for λ = 1. In the loss-neutrality case, the optimal contract for the θ consumer consists of quality q̂(θ, η) at price (1 + η)m(q̂(θ, η), θ), independently of r(θ).

Price discrimination under incomplete information

We now study optimal contract design when the realization of the state signal is private information. For the remainder of this section, we fix a reference consumption plan r(·) and derive the optimal contract menu induced by it. In Section 5 we focus on consistent reference plans.

The design problem

Given r(·), the firm's problem is to choose a menu of quality-price schedules {q(θ), p(θ)}_{θ∈Θ} that maximizes expected profits subject to the incentive compatibility constraints (3) and the individual rationality constraints. A contract menu {q(θ), p(θ)}_{θ∈Θ} that satisfies both constraints is said to be incentive feasible. When there is no risk of confusion, we denote by U(·) the indirect utility from an incentive feasible contract generated by r(·). From (1), observe that the value the gain-loss coefficient μ takes on each side of (3) may differ, as it depends on the comparison of q(θ) with r(θ) on the left-hand side and the comparison of the alternative offer with r(θ) on the right-hand side. Let r(θ) = q(θ) and assume r(·) is strictly increasing around θ. Then μ changes depending on whether q(θ) is evaluated by a lower-type consumer, who experiences a gain relative to his lower reference point, or by a higher-type consumer, who experiences a loss relative to her higher reference point. This sudden variation in the valuation complicates the application of standard contract-theoretic techniques, based on the integral representation of incentive compatibility, to characterize incentive feasible contracts.

Figure 2 illustrates the source of the problem. Given an incentive feasible menu, U(θ) represents the maximum utility the θ consumer can obtain among all of the available options. Therefore, when we consider any particular bundle (q(θ′), p(θ′)) and plot the mapping v(q(θ′), ·) − p(θ′), the indirect utility U(·) must lie everywhere above it and coincide with it at θ = θ′. When, as in Figure 2(A), v(q(θ′), ·) has a partial derivative at θ′, this pins down the derivative of the indirect utility. If this is true almost everywhere, then by the envelope theorem these partial derivatives can be integrated to recover U(·). However, when q(θ′) = r(θ′), the mapping v(q(θ′), ·) exhibits a kink at the point θ = θ′ and this can lead to an indeterminacy, as in Figure 2(B). Alternatively, v(q, ·) has bounded left and right partial derivatives at each θ ∈ Θ, denoted, respectively, by ∂v−(q, θ)/∂θ and ∂v+(q, θ)/∂θ. For each q ≥ 0, we define the correspondence ϕ(q, ·) on Θ as the closed interval spanned by these one-sided partial derivatives. When r(θ) does not coincide with q, the partial derivative ∂v(q, θ)/∂θ exists; hence ϕ(q, θ) is single-valued and given by ϕ(q, θ) = {∂v(q, θ)/∂θ}. When r(θ) = q and r(·) is strictly increasing at θ, ϕ(q, θ) is a closed, bounded interval with endpoints ∂v−(q, θ)/∂θ and ∂v+(q, θ)/∂θ (see Section 7 for details). Because product quality is a choice variable, profit maximization may dictate setting q(θ) = r(θ) for a subset of consumers of positive measure; this, for instance, happens in the case of the ex post consistent reference plans analyzed in Section 5.2. It follows that in equilibrium ϕ(q(θ), θ) may be multivalued and given by (5) on a nonnegligible subset of Θ.
We therefore characterize incentive feasible contracts based on an integral monotonicity condition and a generalization of the Mirrlees representation of the indirect utility. Given a (measurable) quality schedule q : Θ → R_+, its associated correspondence ϕ(q(·), ·) is non-empty-valued, closed-valued, bounded, and measurable. Thus, it admits integrable selections, which we denote by δ(q(·), ·).

Proposition 3. The menu {q(θ), p(θ)}_{θ∈Θ} with associated indirect utility U(·) is incentive feasible if and only if there exists an integrable selection δ(q(·), ·) of the correspondence ϕ(q(·), ·) for which the following conditions are satisfied: (a) an integral monotonicity condition relating valuation differences to integrals of δ(q(·), ·); (b) the generalized Mirrlees representation U(θ) = U(θ_L) + ∫_{θ_L}^{θ} δ(q(s), s) ds; and (c) the participation condition U(θ_L) ≥ 0.

We employ Proposition 3 to reformulate the firm's objective function in terms of a generalized virtual surplus. Ignoring momentarily the restrictions imposed by integral monotonicity, from the participation constraint one has U(θ_L) = 0 in equilibrium. The generalized Mirrlees equation then yields expression (6) for the indirect utility. Denote by μ(θ) the value of the gain-loss coefficient when the θ consumer selects the bundle (q(θ), p(θ)). From (5) and (6) it is clear that the firm uses the smallest possible selection, namely the one with

μ(θ) = η if q(θ) > r(θ) and μ(θ) = ηλ otherwise. (7)

Using (7) in (6), replacing the resulting equation in the expression for expected profits, and integrating by parts, we obtain expected profits in terms of the virtual surplus. The first line in the integrand of the resulting equation is the virtual total surplus from the θ consumer and is denoted accordingly by TS*(q(θ), θ). It expresses the trade-off between marginal and inframarginal revenues that the monopolist faces when increasing the quality allocated to this buyer. The second line, which we denote by LS(q(θ), θ), captures a novel effect in optimal contract design due to loss aversion. Writing the firm's objective function as in (8), the next step of the analysis is to understand the trade-offs that stem from the interaction between the two components of the firm's profits.

The optimal contract menu

The monopolist's problem is to find a quality schedule q(·) that maximizes expected profits in (8), subject to the integral monotonicity condition. (Verifying part (a) of Proposition 3 is complicated by the fact that the optimal selection changes depending on whether the quality offer is greater or less than the reference level; we defer this step entirely to Section 7.) We solve it in a way that parallels the complete information case, to illuminate the new aspects arising from loss aversion. Define, for θ ∈ Θ and μ ∈ {η, ηλ}, the virtual surplus S*(q, θ, μ) and its maximizer q*(θ, μ). Our assumptions ensure that S*(·, θ, μ) is strongly concave. Moreover, q*(·, μ) is continuously differentiable (except possibly at a type at which it turns from zero to positive) and strictly increasing where it attains positive values. Also, q*(θ, ηλ) ≥ q*(θ, η) for all θ ∈ Θ, with the inequality strict for all types for which q*(θ, ηλ) is strictly positive. Analogously to the complete information setting (cf. Figure 1), TS*(q, θ) coincides with S*(q, θ, ηλ) when q ≤ r(θ) and with an appropriate shift of S*(q, θ, η) when q > r(θ). In particular, for a fixed θ ∈ Θ, TS*(·, θ) is continuous in q but kinked at q = r(θ), and achieves its maximum at one of three points depending on the position of r(θ) relative to q*(θ, η) and q*(θ, ηλ). Therefore, a maximization based on the surplus component of the firm's objective function develops similarly to the complete information case, with the virtual total surplus accounting for screening-based incentive effects on top of the loss aversion effects.
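For a concrete picture of q*(·, μ), the sketch below assumes a Mussa-Rosen-style virtual surplus, S*(q, θ, μ) = (1 + μ)[m(q, θ) − h(θ)∂m(q, θ)/∂θ] − c(q); this functional form is our assumption, made only to generate numbers, since the paper's own display is not reproduced here. With uniform types and the primitives used above:

THL, THH = 0.0, 1.0
ETA, LAM = 0.5, 2.0

def h(th):
    # inverse hazard rate of the uniform distribution on [THL, THH]
    return THH - th

def q_star(th, mu):
    # maximizer of the assumed virtual surplus; types with negative
    # virtual valuation are excluded (q* = 0)
    return max(0.0, (1.0 + mu) * (th - THL - h(th)))

for th in (0.4, 0.6, 0.8, 1.0):
    print(th, q_star(th, ETA), q_star(th, ETA * LAM))

The output illustrates the two properties used in the text: q*(θ, ηλ) ≥ q*(θ, η), strictly so whenever q*(θ, ηλ) > 0, and both schedules increasing where positive.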
To gain insight into the second component of the firm's objective, notice that LS(q, θ) vanishes whenever q ≤ r(θ). Thus, LS(q, θ) represents a lump-sum cost incurred whenever the firm's offer exceeds the consumer's reference point. Increasing q(θ) above r(θ) moves θ's valuation from the loss domain to the gains domain, and this creates additional costs because a θ̃ consumer, whose reference level r(θ̃) is above r(θ) but below q(θ), now views the offer q(θ) as a gain instead of a loss. It follows that the value this consumer attaches to quality offer q(θ) undergoes a discrete change. Thus, in equilibrium, LS(q(θ), θ) represents the extra rents ceded to higher-type consumers to discourage them from choosing q(θ) when this offer appears in the gains domain for these consumers. The combined effect of TS*(q, θ) and LS(q, θ) in the objective function implies that there is now, in addition to the kink at r(θ), a discontinuous jump downward (see Figure 3). We sketch the general solution to the firm's design problem below, leaving formal arguments for Section 7.

Case 1. If r(θ) > q*(θ, ηλ), then the latter constitutes the optimal offer. This is because q*(θ, ηλ) is now the unique maximizer of TS*(q, θ) and the lump-sum cost is zero. See Figure 3(A).

Case 2. If q*(θ, ηλ) ≥ r(θ) ≥ q*(θ, η), the optimal offer is r(θ). In this case TS*(q, θ) is strictly decreasing for qualities above the reference point and strictly increasing for qualities below the reference point, and the lump-sum cost is zero. See Figure 3(B).

Case 3. If q*(θ, η) > r(θ), the optimal offer is either q*(θ, η) or r(θ). The unique maximizer of TS*(q, θ) is q*(θ, η), but now LS(q*(θ, η), θ) is active. Thus, there is a trade-off between choosing q*(θ, η) to capture efficiency gains and the associated marginal revenues, or eschewing these and relaxing incentive constraints by offering r(θ) to ensure that this quality offer is viewed as a loss by higher types. When r(θ) is below but near q*(θ, η), efficiency gains are small, hence the firm is more likely to offer r(θ). For larger differences between q*(θ, η) and r(θ), the optimal choice may go the other way. This is illustrated in Figure 3(C-D).

Incentive compatibility implies that the quality schedule must be monotone. Thus, for any subinterval Θ_c ⊂ Θ with q*(θ, η) > r(θ), the optimal offer corresponds to one of the following possibilities: either it assigns q*(θ, η) for all θ ∈ Θ_c, or it assigns r(θ) for all θ ∈ Θ_c, or there exists a cutoff θ_c ∈ Θ_c such that the firm offers r(θ) to each θ consumer below θ_c and offers q*(θ, η) to each θ consumer above θ_c. Which option is chosen by the firm depends on the details of the model.

Proposition 4. Fix a reference plan r(·) and λ > 1. The optimal incentive feasible menu {q_sb(θ), p_sb(θ)}_{θ∈Θ} is given by

q_sb(θ) = q*(θ, ηλ) if r(θ) > q*(θ, ηλ); q_sb(θ) = r(θ) if q*(θ, ηλ) ≥ r(θ) ≥ q*(θ, η); q_sb(θ) = r(θ) if q*(θ, η) > r(θ) and θ ≤ θ_c; q_sb(θ) = q*(θ, η) if q*(θ, η) > r(θ) and θ > θ_c, (10)

with θ_c ∈ cl Θ_c for any subinterval Θ_c ⊂ Θ for which q*(θ, η) > r(θ), and where the optimal selection in the price schedule is as in (7).

We point out that in the above result there may be finitely many subintervals Θ_c ⊂ Θ for which q*(θ, η) > r(θ), and each of these can be partitioned by a cutoff type θ_c as described in (10). See the proof for details. The specifics of price discrimination under loss aversion exhibit novel elements compared to the loss-neutral case. High reference plans for low-type consumers generate allocative efficiency gains as optimal offers get closer to the efficient qualities. In particular, there may be an increase in market coverage.
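The case analysis of Proposition 4 translates directly into code. In the sketch below (same assumed primitives as the earlier sketches), Cases 1 and 2 are resolved pointwise; Case 3 is only flagged, since resolving it requires the profit comparison and the cutoff θ_c described in the text.

ETA, LAM = 0.5, 2.0
THL, THH = 0.0, 1.0

def q_star(th, mu):
    # as in the previous sketch (uniform types, assumed virtual surplus)
    return max(0.0, (1.0 + mu) * (th - THL - (THH - th)))

def q_sb_pointwise(th, r_th):
    lo, hi = q_star(th, ETA), q_star(th, ETA * LAM)
    if r_th > hi:
        return hi                    # Case 1: offer q*(theta, eta*lambda)
    if r_th >= lo:
        return r_th                  # Case 2: offer the reference level
    return ("case 3", r_th, lo)      # Case 3: r(theta) vs q*(theta, eta)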
Alternatively, high reference plans for high-type consumers generate quality distortions above and beyond the efficient levels. Moreover, it is possible that for a nonnegligible subset of buyers the optimal quality schedule is determined entirely by the reference plan. This implies that the optimal contract menu may exhibit a degree of complexity (pooling for some mid-range consumers, preceded and followed by separating contracts, and discontinuities in the optimal quality schedule) that responds entirely to the reference consumption plan and not to special features of the cost function or the distribution of types. As in the complete information case, we stress that none of these results is obtained under loss neutrality. When λ = 1, the optimal quality schedule is q_sb(·) = q*(·, η), independently of the reference plan.

Consistent reference plans

The analysis of Section 4 allows for differences between optimal offers and the reference levels expected by consumers. In this section we focus on correct belief formation, ruling out inconsistencies between expectation-based reference qualities and purchased qualities. We consider ex ante and ex post consistent reference plans and study their effects on profits and contracts. It will be clear from the exposition below that in both cases the lump-sum cost vanishes in equilibrium, which simplifies the construction of optimal contracts. However, ex post consistent reference plans require the generalized envelope techniques developed above, as in this case the quality purchased in every state equals the reference level; hence the correspondence ϕ(q(·), ·) is multivalued on a set of types of positive measure.

Ex ante consistent reference plans

When the reference plan r(·) is a constant function, we slightly abuse notation and write r(·) = r. From Proposition 4, r generates an optimal menu {q_sb(θ), p_sb(θ)}_{θ∈Θ} in which the lump-sum cost vanishes; this follows because all consumers share the same reference point (cf. (9)). A reference plan r is said to be ex ante consistent if it generates an optimal quality schedule q_sb(·) that satisfies

r = E[q_sb(θ)]. (12)

We claim that the set of ex ante consistent reference plans is nonempty. The null reference plan r_0 = 0 generates the optimal schedule q*(·, η). Any constant reference plan r ≥ q*(θ_H, ηλ) induces the optimal schedule q*(·, ηλ). Thus, without loss of generality, we restrict our analysis to constant reference plans lying between 0 and q*(θ_H, ηλ). Fix such a constant reference plan r. Since q*(θ, ηλ) ≥ q*(θ, η) for all θ and both q*(·, ηλ) and q*(·, η) are increasing functions, it is immediate to see that r determines two cutoff types τ_1(r) and τ_2(r), with θ_L ≤ τ_1(r) ≤ τ_2(r) ≤ θ_H, via the relations

q*(τ_1(r), ηλ) − r = 0 and q*(τ_2(r), η) − r = 0.

We can express the optimal quality schedule generated by r, which, to enable us to perform comparative statics, we now denote by q_sb(·; r), as

q_sb(θ; r) = q*(θ, ηλ) for θ < τ_1(r); q_sb(θ; r) = r for τ_1(r) ≤ θ ≤ τ_2(r); q_sb(θ; r) = q*(θ, η) for τ_2(r) < θ.

A standard application of the implicit function theorem permits us to deduce the continuity of the mappings τ_1(·) and τ_2(·) on the interval [0, q*(θ_H, ηλ)]. We express the expected quality schedule as a function of r by

E[q_sb(θ; r)] = ∫_{θ_L}^{θ_H} q_sb(θ; r) dF(θ). (13)

By continuity of E[q_sb(θ; ·)], there must exist a fixed point 0 < r < q*(θ_H, ηλ) that solves (12). This argument shows that the set of ex ante consistent references is nonempty.
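For a constant plan the lump-sum cost is zero, so q_sb(θ; r) is simply r clipped pointwise into [q*(θ, η), q*(θ, ηλ)], and an ex ante consistent level can be found by iterating the map r ↦ E[q_sb(θ; r)]. A sketch under the same assumed primitives as above (convergence here follows because the iterated map happens to be continuous with small slope; it is not guaranteed in general):

import numpy as np

ETA, LAM = 0.5, 2.0
THL, THH = 0.0, 1.0
GRID = np.linspace(THL, THH, 2001)   # uniform F: grid average approximates E[.]

def q_star(th, mu):
    return np.maximum(0.0, (1.0 + mu) * (th - THL - (THH - th)))

def expected_q(r):
    lo, hi = q_star(GRID, ETA), q_star(GRID, ETA * LAM)
    return float(np.mean(np.clip(r, lo, hi)))   # E[q_sb(theta; r)]

r = 1.0
for _ in range(100):                  # iterate r -> E[q_sb(theta; r)]
    r = expected_q(r)
print("an ex ante consistent level:", round(r, 3))   # ~0.38 with these numbers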
Let r be an ex ante consistent reference plan. From (13), we obtain that r can be interpreted as a weighted average between the expected quality schedule that maximizes the virtual surplus S*(q, θ, ηλ) for types below τ_1(r) and the expected quality schedule that maximizes S*(q, θ, η) for types above τ_2(r). A higher ex ante consistent plan increases the weight assigned to the former, thus generating efficiency gains for the firm, which increase expected profits. Because the set of ex ante consistent plans is compact (see Section 7), we obtain the following proposition.

Proposition 5. Under incomplete information and for λ > 1, the unique preferred ex ante consistent quality schedule for the firm is generated by the largest ex ante consistent reference plan.

The effect of a higher r on consumer welfare depends on whether or not a higher reference plan leads to changes in optimal offers. For intermediate types, it may be possible that a higher reference plan yields higher quality consumption and, when the effects of single crossing are constant across consumption levels, this will increase consumer welfare. Things are different for low-type and high-type consumers. In particular, given two ex ante consistent plans r̃ and r satisfying r̃ > r, all consumers with types below τ_1(r) and above τ_2(r̃) maintain the same purchased quality, namely q*(θ, ηλ) in the first case and q*(θ, η) in the second, under either of the reference plans. However, the utility of these consumers net of the value of the outside option is lower under r̃ (see the left-hand side of (14)), so these consumers are worse off.

Ex post consistent reference plans

A reference plan r(·) is said to be ex post consistent if the optimal quality schedule it generates satisfies q_sb(θ) = r(θ) for all θ ∈ Θ. In this case, buyers correctly anticipate their future consumption outcomes and take those expectations as their reference points. We refer to both an ex post consistent plan and its associated optimal quality schedule by r(·). The set of ex post consistent reference plans is clearly nonempty: from Proposition 4, it contains every plan r̃(·) for which q*(θ, η) ≤ r̃(θ) ≤ q*(θ, ηλ) for all θ ∈ Θ. Given this multiplicity, we ask which is the monopolist's preferred ex post consistent reference plan. From (8), given an ex post consistent plan r(·), per-customer profits equal the virtual total surplus TS*(r(θ), θ) = S*(r(θ), θ, ηλ) evaluated at the reference quality, the lump-sum cost being zero. This expression is strictly increasing in r(θ) for all 0 ≤ r(θ) ≤ q*(θ, ηλ) and attains a unique maximum at q*(θ, ηλ). It follows that the reference plan r*(·) = q*(·, ηλ) constitutes the unique preferred ex post consistent plan for the monopolist.

A higher ex post consistent reference plan generates two opposite effects on consumer welfare. First, an increase in the reference point (and the consequent higher offer) increases the informational rents that the monopolist transfers to active buyers. Second, an increase in the reference point lowers the value of the outside option, which means that active buyers are worse off (in an ex post consistent contract menu, nonactive buyers expect to be excluded from the market). To analyze these countervailing forces, notice that because μ(θ) = ηλ holds in every state, the indirect utility of every θ consumer after discounting the value of the outside option can be written as the sum of two integrals, as in (14); see (1) and (7) and condition (b) of Proposition 3, and recall that m(·, θ_L) is everywhere zero by (C2).
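The profit monotonicity claimed above (per-customer profit rising in r(θ) up to q*(θ, ηλ)) can be checked numerically; the virtual surplus form is again our illustrative assumption:

import numpy as np

ETA, LAM = 0.5, 2.0
THL, THH = 0.0, 1.0

def s_star(q, th, mu):
    # assumed virtual surplus evaluated at the (ex post consistent) offer q
    return (1.0 + mu) * (th - THL - (THH - th)) * q - 0.5 * q ** 2

th = 0.9
q_top = (1.0 + ETA * LAM) * (2.0 * th - 1.0)    # q*(theta, eta*lambda)
profits = [s_star(q, th, ETA * LAM) for q in np.linspace(0.0, q_top, 9)]
assert all(a < b for a, b in zip(profits, profits[1:]))   # strictly increasing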
The first integral on the right-hand side of (14) captures the standard informational rents resulting from the screening process. It is positive for consumers buying a positive quality offer and, because of single crossing, its value increases with a higher reference plan. The second integral captures the value of the informational rents vis-à-vis the participation rents that the consumer concedes to the firm to avoid the outside option. Overall, the impact of a higher reference plan depends on the interaction of these two terms. However, when the effects of single crossing are independent of consumption levels, active consumers are also better off with a higher reference plan.

Proposition 6. The following statements hold under incomplete information and λ > 1. (i) The unique preferred ex post consistent menu for the firm is q*(·, ηλ). (ii) If ∂²m(q, θ)/∂q∂θ is constant in q for all θ ∈ Θ, then the unique preferred ex post consistent menu for consumers is q*(·, ηλ).

This result merits some comments. First, with the preferred contract menu, there are allocative efficiency gains for all θ consumers for whom q*(θ, ηλ) lies strictly above q*(θ, η), the optimal quality offered to loss-neutral consumers, but below q̂(θ, η), the efficient quality for loss-neutral consumers. The θ consumers with q*(θ, ηλ) > q̂(θ, η) alternatively end up purchasing excessive quality levels. Second, notice that the firm exploits consumers' loss aversion in two different, albeit related, ways. A higher reference plan reduces the value of the outside option, thus driving up overall net (virtual) consumer surplus. But also, by offering a quality level equal to the consumer's reference point, the firm takes advantage of the higher marginal willingness to pay for each additional unit of quality, which is captured in the choice of the selection used in the Mirrlees representation of the indirect utility to construct the optimal price schedule in (11). Third, the fact that consumers also prefer q*(·, ηλ) is somewhat counterintuitive. A higher reference point diminishes the attractiveness of the outside option, which increases the willingness to pay for quality in the primary market served by the firm (we are ignoring budgetary restrictions on the part of the consumers). Under incomplete information, the firm has to pass some of the extra surplus on to consumers in the form of information rents, which are increasing in quality. When the effects of the single crossing condition are constant, the information rents ceded by the firm to active consumers exceed the extra participation rents extracted from those consumers when the value of the outside option worsens. Thus, active consumers are better off with a higher ex post consistent reference plan.

Concluding remarks

In this paper we study optimal contract design by a revenue-maximizing monopolist who faces loss-averse consumers. We find that while general insights from standard price discrimination models carry over, the reference consumption plan exerts considerable influence on the specifics of the optimal contracts. This is due to the appearance of new effects generated by loss aversion under complete and incomplete information. Thus, depending on how potential buyers form their expectations of quality consumption, optimal contract menus may exhibit various distinct features: pooling for intermediate consumers, discontinuities, efficiency gains, upward distortions from efficient levels, etc. The expanded range of the optimal contracts is consistent with stylized observations in some industries (e.g., mobile communication, consumer electronics, luxury goods, etc.).
Most of the older empirical literature testing reference-dependent price and quality effects considers memory-based models of the reference point formation process, e.g., Hardie et al. (1993) and Briesch et al. (1997). There is, however, recent evidence of expectation-based reference points in effort provision, both in the field, e.g., Crawford and Meng (2011) and Pope and Schweitzer (2011), and in the laboratory, e.g., Abeler et al. (2011) and Gill and Prowse (2012). In our monopoly pricing model with state-contingent reference qualities, there is a multiplicity of expectation-based, consistent reference consumption plans, both in the ex ante and in the ex post sense, many of which do not rule out marked complexities in optimal contracts. Alternatively, the firm's preferred ex ante and ex post consistent contracts exhibit (allocative) efficiency gains and increased coverage at the low end of the market and, in the ex post case, exhibit an excess supply of quality compared to the efficient quality levels at the high end of the market. There are various ways in which the firm may induce consumers to adopt its preferred reference plan, for instance, by announcing salient characteristics of a product line prior to actual market introduction (with no mention of prices). This seems to accord with marketing practices spread across certain industries, where both product announcements and advertising campaigns tend to precede actual market introduction and stress quality attributes over prices. Thus, it is important to understand how, in practice, consumers' (correct) expectations of future consumption are influenced by these marketing campaigns, and by fashion and trend cycles, peer pressure, etc. This is especially important in settings with short product cycles due to innovation, or in environments of oligopolistic competition where more than one product attribute dimension can be used as a tool to enter the market. We leave these questions for future research.

Related literature

Our work adds to the literature that investigates how profit-maximizing firms operate in a market where consumers exhibit systematic deviations from traditional preferences; see DellaVigna and Malmendier (2004), Eliaz and Spiegler (2006), Heidhues and Kőszegi (2008), Galperti (2014), and Grubb (2009), among others. Following Kőszegi and Rabin (2006), we model the gain-loss valuation in terms of differences in the consumption valuation, but we differ in how comparisons take place. In some contexts it is reasonable to assume, as do Kőszegi and Rabin (2006) and Heidhues and Kőszegi (2014), that all buyers share an ex ante stochastic reference point and evaluate each realization of stochastic consumption against each realization of the reference point. However, in other situations it is more appropriate to let each buyer assess his quality consumption relative to his state-contingent reference quality level, and this is the approach we follow here. Our work in this respect is closer to Sugden (2003); see also De Giorgi and Post (2011). Recent papers that follow Kőszegi and Rabin's (2006) approach include Rosato (2014), who studies how bait-and-switch tactics manipulate reference points and raise profits even when consumers rationally expect the bait and switch; Hahn et al.
(2014), who study nonlinear pricing when consumers form reference points at an ex ante stage, before learning their valuation but anticipating the eventual type-dependent consumption; and Eisenhuth (2012), who looks at optimal auctions for bidders with expectation-based reference points; see also Lange and Ratan (2010). Our work is also related to Orhun (2009), who considers a two-type model in which the reference point of the high-type consumer is influenced by the quality offered to the low-type consumer and vice versa. We consider contract design in response to an arbitrary reference plan, which enables us to study the monopolist's incentives to manipulate reference points, perhaps via advertising. Karle (2014) studies advertising to loss-averse consumers using a model of expectation-based reference point formation à la Kőszegi and Rabin (2006). In his model, advertising creates uncertainty about future consumption and this impacts reference point formation, whereas the logic of our model indicates that the firm will try to drive up each consumer's reference quality level unambiguously. Throughout this paper we model reference points in terms of quality levels, departing from recent work in the area, such as Herweg and Mierendorff (2013) and Spiegler (2012), that specifies reference points in terms of prices. Despite the evidence supporting the existence of reference price effects on consumer behavior, empirical work from the marketing literature suggests that loss aversion on product quality is at least as important as, if not more important than, loss aversion in prices. This point is also suggested by experimental data reported by Fogel et al. (2004), who confirm the existence of loss aversion for quality in a laboratory setting, and Novemsky and Kahneman (2005), who stress that there is no loss aversion for monetary transactions that are expected to occur and thus are accounted for.

Proof of Proposition 3. The equivalence between incentive compatibility of the menu {q(θ), p(θ)}_{θ∈Θ} and parts (a) and (b) follows from Theorem 1 in Carbajal and Ely (2013). Condition (c) clearly holds when contracts are individually rational. Suppose now that (c) is also in place. Using condition (b), we express U(θ) = U(θ_L) + ∫_{θ_L}^{θ} δ(q(s), s) ds for any θ consumer. From (4) and (5), any integrable selection satisfies δ(q(·), ·) ≥ 0 everywhere, so U(θ) ≥ 0 follows readily.

Case 3. Suppose that q*(θ, η) > r(θ). As before, the unique maximizer of the integrand of the profit function in (8) with μ(θ) = η is q*(θ, η). A deviation to a quality level q̃ > r(θ) does not change the value of μ(θ) to ηλ and thus will only decrease profits. Among deviations from q*(θ, η) to quality levels q̃ ≤ r(θ) that change the parameter μ(θ) to ηλ, thus avoiding the lump-sum cost, the one generating the highest profits is q̃ = r(θ). The sign of the difference between profits at r(θ) with associated μ(θ) = ηλ and profits at q*(θ, η) with associated μ(θ) = η depends on the difference between the gains associated with offering the quality level r(θ) and avoiding the lump-sum transfer to higher-type consumers, and the efficiency gains in virtual total surplus at μ(θ) = η derived from shifting quality from r(θ) to q*(θ, η). Let θ′ > θ″ be two types for whom q*(θ′, η) ≥ q*(θ″, η) > r(θ′) > r(θ″). The monopolist either offers each of them his respective reference quality level, or offers q*(θ′, η) and q*(θ″, η) to each of them, respectively, or offers the θ″ consumer his reference quality level and the quality level q*(θ′, η) to the θ′ consumer.
The remaining possibility, that q_sb(θ″) = q*(θ″, η) and q_sb(θ′) = r(θ′), is not incentive compatible. From Proposition 3, it suffices to exhibit a violation of integral monotonicity: since θ′ > θ″ and q*(θ″, η) > r(θ′), single crossing implies that the monotonicity condition indeed fails for this pair. From the assumptions in Section 2 it follows that q*(·, μ) is everywhere continuous and continuously differentiable, except possibly at a type where q*(·, μ) turns from zero to positive. Therefore the function f_μ defined on Θ by f_μ(θ) = q*(θ, μ) − r(θ) is continuous and piecewise continuously differentiable, with bounded left and right derivatives everywhere on Θ. Let A ⊂ Θ be the set of types at which f_μ changes sign. Since f_μ is continuous, each θ ∈ A is an isolated point and thus A is a discrete subset of a compact set; hence it is finite. It follows that there are finitely many subintervals Θ_c ⊂ Θ for which q*(θ, η) ≥ r(θ). The construction of the optimal quality schedule q_sb in (10) follows from these arguments.

It remains to show that the informational constraints, expressed as conditions (a)-(c) of Proposition 3, are in place. One immediately sees from the expression for the incentive prices in (11) that both (b) and (c) hold. To show that condition (a), integral monotonicity, is satisfied, let θ′, θ″ ∈ Θ be two consumer types such that θ″ < θ′, and suppose that q_sb(θ″) < r(θ″) ≤ r(θ′) < q_sb(θ′) holds; all remaining cases are proven similarly. By construction of the optimal quality offers, there exists a type θ_c, with θ″ ≤ θ_c < θ′, for which one has r(θ″) ≤ r(θ_c) = q_sb(θ_c) ≤ r(θ′). Moreover, we can choose θ_c so that q_sb(θ) ≤ r(θ) for all θ″ ≤ θ ≤ θ_c and q_sb(θ) > r(θ) for all θ_c < θ ≤ θ′. We first write the valuation differences in a suitable form, yielding the first inequality of the integral monotonicity condition of Proposition 3. To obtain the second inequality, we decompose v(q_sb(θ′), θ_c) − v(q_sb(θ′), θ″) and, similarly, v(q_sb(θ_c), θ′) − v(q_sb(θ_c), θ_c). Combining expressions (24) and (25) with (21), we obtain the desired inequality. Thus, integral monotonicity is satisfied.

Proof of Proposition 5. In the main text we showed that the set of ex ante consistent reference plans, or equivalently the set of solutions to the fixed-point equation (12), is nonempty. Because E[q_sb(θ; ·)] is a continuous function on the compact interval [0, q*(θ_H, ηλ)], it follows that the set of ex ante consistent reference plans is also closed. To see this more explicitly, take a sequence of fixed points {r_n} and assume that it converges to r̂ ∈ [0, q*(θ_H, ηλ)]. By continuity, we have that E[q_sb(θ; r_n)] → E[q_sb(θ; r̂)]. Since r_n = E[q_sb(θ; r_n)] and r_n → r̂, it follows that r̂ = E[q_sb(θ; r̂)], as desired. Under a constant reference plan r, the lump-sum cost is always zero. Thus, using (8), we express the expected profits of the firm as a function of r by

Π_sb(r) = ∫_{θ_L}^{τ_1(r)} S*(q*(θ, ηλ), θ, ηλ) dF(θ) + ∫_{τ_1(r)}^{τ_2(r)} S*(r, θ, ηλ) dF(θ) + ∫_{τ_2(r)}^{θ_H} [S*(q*(θ, η), θ, η) + (ηλ − η) m*(r, θ)] dF(θ).

Expected profits are continuous in r and, because the set of ex ante consistent reference plans is a closed subset of [0, q*(θ_H, ηλ)], it follows that there exists a preferred ex ante consistent reference plan. To show that the unique preferred ex ante consistent reference plan for the firm is the largest among all ex ante consistent plans, we argue that profits from every θ consumer are strictly increasing in r; the four cases involved exhaust all relevant comparisons.
(ii) For any given q and θ, single crossing gives ∂²m(q, θ)/∂q∂θ > 0. Thus, an increase in the reference point for the θ consumer to a new ex post consistent level increases the value of the first integral on the right-hand side of (14). Alternatively, when the function ∂²m(q, θ)/∂q∂θ is constant in q for all θ, the integrand in the second integral of (14) vanishes. Note also that m(q, θ_L) = 0 for all q ≥ 0, so this term does not affect our conclusion.
Supercurrent in the quantum Hall regime

A promising new route for creating topological states and excitations is to combine superconductivity and the quantum Hall (QH) effect. Despite this potential, signatures of superconductivity in the quantum Hall regime remain scarce, and a superconducting current through a QH weak link has so far eluded experimental observation. Here we demonstrate the existence of a new type of supercurrent-carrying state in a QH region at magnetic fields as high as 2 tesla. The observation of supercurrent in the quantum Hall regime marks an important step in the quest for exotic topological excitations such as Majorana fermions and parafermions, which may find applications in fault-tolerant quantum computations.

The interplay of the quantum Hall effect with superconductivity is expected to result in novel excitations with non-trivial braiding statistics, such as Majorana fermions and non-abelian Majorana anyons [1][2][3][4]. When a quantum Hall region is contacted by two superconducting electrodes, the gapped QH bulk prevents the flow of a supercurrent. However, it was predicted more than 20 years ago that the supercurrent may still be mediated by QH edge states [12]. Due to its chiral nature, a single edge can only conduct charge carriers in one direction, so both edges have to be involved in establishing a supercurrent between the two contacts. This situation is fundamentally different from Josephson junctions made of two-dimensional topological insulators, where each edge can support its own supercurrent [13][14][15][16]. Indeed, contrary to the case of topological insulators, the magnetic field in the QH regime breaks time-reversal symmetry, which is essential for the s-wave pairing of conventional superconductors. Nonetheless, we observe a robust supercurrent in the quantum Hall regime, which we attribute to an unconventional form of Andreev bound states circulating along the perimeter of the QH region and involving electron and hole trajectories separated by several micrometers.

We performed transport measurements on four Josephson junctions (J_1-4) made of graphene encapsulated in boron nitride and contacted by electrodes made of a molybdenum-rhenium alloy [Fig. 1a] [11], a type II superconductor with a high upper critical field of H_c2 = 8 T. The high quality of these heterostructures allowed us to observe Fabry-Perot oscillations of the junctions' resistance and critical current, indicating that the transmission of charge carriers between the contacts is ballistic [17]. The supercurrent is uniformly distributed along the width of the contacts, as evidenced by the regular Fraunhofer pattern [18] measured at small magnetic fields [17]. All junctions demonstrate supercurrent in the QH regime; for consistency, we choose to present data measured on sample J_1, which has a distance between contacts L = 0.3 µm and a contact width W = 2.4 µm (see Figure 1b).

A recent preprint reported the observation of supercurrent through encapsulated graphene in moderate magnetic fields, when the diameter of the cyclotron orbit is larger than but comparable to the length of the junction, 2r_C ≥ L [19]. (Here, r_C = ℏk_F/eB is the cyclotron radius.) This supercurrent has been attributed to Andreev bound states made of closed trajectories connected by several elastic and Andreev reflections, which yield pockets of superconductivity at random values of density and field. We further explore this regime in the Supplementary Information.
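The quoted cyclotron radii can be reproduced from the filling factor and the field alone, since ν fixes the carrier density through n = νeB/h. A short sketch (standard constants; graphene dispersion with k_F = √(πn)):

import numpy as np

HBAR = 1.054571817e-34   # J*s
E    = 1.602176634e-19   # C
H    = 6.62607015e-34    # J*s

def r_c(nu, B):
    # cyclotron radius r_C = hbar*k_F/(e*B) at filling factor nu and field B
    n = nu * E * B / H           # carrier density set by nu and B
    k_f = np.sqrt(np.pi * n)
    return HBAR * k_f / (E * B)

print(round(r_c(2, 1.0) * 1e9, 1), "nm")   # ~25 nm at nu = 2 and B = 1 T

The ~25 nm value matches the estimate quoted below for the superconducting pocket at B = 1 T.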
In the main text, we demonstrate that a completely new regime emerges at even larger magnetic fields, when r_C is much smaller than the device dimensions and the mean free path. In this regime, the bulk of the junction is gapped by Landau quantization, so that current may only flow along the edges. Figures 1c and 1d show the differential resistance of the sample, R ≡ dV/dI, plotted vs. back gate voltage, V_G, and magnetic field, B. The resistance is measured in a four-terminal configuration where four MoRe electrodes merge into two contacts on each side of the junction (Figure 1b). The map in Figure 1c is measured with an AC excitation current I_AC = 50 pA applied on top of a large DC current of I_DC = 6 nA, which suppresses the supercurrent and highlights the QH features. As B increases, a fan diagram characteristic of the quantum Hall effect in graphene emerges: resistance plateaus follow contours of constant filling factor ν ≡ nh/eB = ±2, 6, 10, . . . [20]. This quantization becomes visible as soon as B exceeds the red parabolic contour 2r_C = L, because the device dimensions prevent the development of the quantum Hall effect at lower fields. The dark region under the parabola (2r_C > L) indicates a vanishing differential resistance, as a supercurrent of tens of nA may flow in this semiclassical regime [19].

[Figure 1 caption, panels c-g: (c-d) Fan diagrams of the differential resistance dV/dI plotted vs. gate voltage V_G and magnetic field B. Panel (c) is measured at a finite current bias of I_DC = 6 nA, which suppresses superconductivity in the QH regime and reveals the quantized plateaus. Panel (d) is measured at zero DC current and shows superconducting pockets extending beyond the semiclassical parabolic region of 2r_C ≥ L. (e) I-V curves measured in a superconducting pocket at B = 1 T and a filling factor of ν ≈ 2. The supercurrent branch is clearly visible at the lowest temperature (40 mK) for I < 0.5 nA. (f) The temperature dependence of the corresponding differential resistance, dV/dI. The resistance reaches a maximum at I_S = 0.5 nA, at which point the junction switches from the superconducting to the normal branch. (g) dV/dI measured as a function of temperature at I = 0, showing gradual suppression of superconductivity at elevated temperatures due to phase diffusion.]

Figure 1d shows R(V_G, B) measured simultaneously with Figure 1c using exactly the same AC excitation of 50 pA, but without applying a DC current. Strikingly, pockets of supercurrent extend far into the quantum Hall regime. They are visible as dark spots of vanishing resistance above the parabolic contour. These pockets occur at somewhat random values of V_G, but are highly reproducible as the gate voltage is swept back and forth. To check that these regions do indeed correspond to a supercurrent, in Figure 1e we show the I-V curves measured in one of the superconducting pockets at B = 1 T and V_G = −4.7 V. The curves demonstrate a clear supercurrent branch, which extends up to I < 0.5 nA at the lowest temperature of 40 mK. We stress that at that particular point, r_C ≈ 25 nm ≪ L/2 = 150 nm. The corresponding differential resistance (dV/dI) vanishes in the same range of currents (Figure 1f). The maximum of resistance is reached at I_S = 0.5 nA, at which point the sample switches from the superconducting to the normal branch.
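The parabolic boundary 2r_C = L drawn on the fan diagrams can be estimated directly. Since B on the boundary scales as √n, and n is roughly proportional to V_G, the boundary is parabolic in the (V_G, B) plane. A sketch (junction length L = 0.3 µm as for J_1; the densities are illustrative):

import numpy as np

HBAR = 1.054571817e-34   # J*s
E    = 1.602176634e-19   # C

def b_boundary(n_cm2, L=0.3e-6):
    # field at which 2*r_C = L for carrier density n (in cm^-2)
    k_f = np.sqrt(np.pi * n_cm2 * 1e4)   # convert cm^-2 to m^-2
    return 2.0 * HBAR * k_f / (E * L)

for n in (2e11, 5e11, 1e12):
    print(f"n = {n:.0e} cm^-2 -> B = {b_boundary(n):.2f} T")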
The curves in Figures 1e,f are extremely sensitive to temperature, the supercurrent being washed out by T ∼ 500 mK. This energy scale is orders of magnitude below the critical temperature of MoRe (≈ 10 K) and the energy splitting of the lowest Landau levels in graphene (> 100 K). It is, however, close to the Josephson energy, E_J = ℏI_C/2e, which is tens of mK for critical currents of a few nA. For temperatures comparable to the Josephson energy, the apparent switching current, I_S, is expected to be suppressed by thermal fluctuations with respect to the true critical current I_C. This explains the observed I_S of only 0.5 nA in Figure 1f. The thermal fluctuations also result in phase diffusion [18], which yields a finite junction resistance even at zero DC current (Figure 1g).

To further illustrate the coexistence of the QH effect and the superconducting pockets, we show the differential resistance of the same junction measured as a function of V_G and the current bias I at B = 1.4 T (Figure 2a). The QH plateaus are visible in Figure 2a as vertical stripes of different color. Pockets of superconductivity appear around zero bias as dark minima of dV/dI. (At this field, the cyclotron radius r_C ≈ 15√ν nm is much smaller than the device dimensions throughout the map.) The solid black line in Figure 2b shows the cross-section of the dV/dI map taken with a finite current bias of I_DC = 3 nA, high enough to suppress any superconducting features. Plateaus are clearly visible close to the quantized values of R = h/(νe²), with ν = 2, 6, 10, . . . The deviations from perfect quantization are common in two-probe measurements [21]. The gray line corresponds to the cross-section measured at zero DC current, which clearly shows regions of suppressed differential resistance formed on top of the plateaus due to superconductivity.

Figures 3a-c show the differential resistance measured at three superconducting pockets as a function of the bias current and magnetic field, which is varied in steps of 0.1 mT around B = 1 T. The critical current exhibits a robust interference pattern as the magnetic field is varied, with a period of 0.5 mT. Remarkably, this value is close to the period of the Fraunhofer pattern measured on the same junction at very low fields (B < 10 mT), when the current distribution along the width of the junction is uniform (see Figure S3). However, the current distribution becomes spatially inhomogeneous at intermediate magnetic fields of tens of mT and beyond, resulting in a very irregular pattern of the supercurrent vs. magnetic field (see Ref. [19] and Figure S9). Therefore, the periodicity recovered at high field must be attributed to a very different mechanism.

Since at 1 T the bulk of graphene is clearly gapped, the periodic supercurrent must be mediated by the edge states. However, the edge states with opposite momenta are located on the opposite sides of the sample, separated by 2.5 to 4.5 µm in our junctions. This scale greatly exceeds the coherence length of the MoRe electrodes (a few nanometers), which prevents the direct coupling of the edges through a simple Andreev reflection. Below, we discuss a mechanism that couples the edge states through hybrid electron-hole modes, which are formed at the interfaces between the superconducting contacts and the QH region [22,23]. Due to the pairing gap, single particles cannot enter the superconducting electrodes and have to form edge states along the superconductor-QH interfaces (Figure 3d). The electron and hole states propagate in the same direction and are hybridized by the superconducting proximity, resulting in chiral hybrid modes.
Quasiclassically, one can picture the hybrid electron-hole mode as a skipping orbit in which an electron and a hole are converted into one another on each bounce from the superconductor [23] (Figure 3d). Depending on the transparency of the interface, such a mode can have various degrees of mixing between its electron and hole components. In particular, for perfectly transparent interfaces, it becomes a neutral mixture of the two carrier types, similar to the Majorana modes. The hybrid modes provide a coherent reservoir of correlated electrons and holes, which is spread over the length of the superconducting electrode and thus can couple the edge states on the opposite sides of the sample [24] (Figure 3d). Specifically, an electron approaching the top contact along the right edge of graphene must be converted to the hybrid electron-hole mode, which then propagates along the graphene-superconductor interface to the left. Here, it has a finite amplitude of coupling to the left QH edge state as a hole, which then flows to the bottom contact. The loop is completed by the hole's conversion into the hybrid mode at that contact, and by its subsequent coupling to the original electronic state at the bottom right corner [24]. This mechanism shuttles one Cooper pair between the contacts, coupling in the process the single-particle edge states separated by microns. Note that the process of an electron entering and a hole leaving the hybrid mode may be viewed as a perfect crossed Andreev reflection over distances of several microns.

To substantiate this scenario, we study the dependence of the superconducting features on the magnetic field and the gate voltage (Figure 3e,f). Here, panel (f) is measured at I_DC = 3 nA, exceeding the supercurrent, while panel (e) is measured at zero DC current and shows suppressed resistance when supercurrent flows between the contacts. Clearly, the normal features in Figure 3f are almost field-insensitive, while the superconducting features in Figure 3e exhibit the same magnetic field periodicity as in Figures 3a-c. Remarkably, the phase of these features depends on V_G. Indeed, the quantized phase of the Andreev bound states in panel (e) is made of two contributions: the Aharonov-Bohm phase due to the magnetic flux through the junction, and the phase accumulated by carriers completing the loop trajectory. The first term yields the magnetic field periodicity of Figures 3a-c and 3e. The second term is determined by the carrier momentum and therefore depends on V_G. To keep the total phase constant, the changes in the two contributions have to cancel, resulting in the diagonal contours of constant phase in Figure 3e. Note that Figure 3e rules out a hypothetical alternative scenario in which each edge would support a separate superconducting path, as in the case of 2D topological insulators. Indeed, in that case the interference phase would only depend on B and not on V_G, decoupling the gate voltage and magnetic field dependencies. This would give rise to vertical strips in Figure 3e, contrary to our observations.

The nature of the hybrid mode propagating along the superconducting interface likely explains the extreme sensitivity of the supercurrent to V_G. Indeed, mesoscopic details of the superconducting interface (such as contact roughness, fluctuations in the interface transparency, or the presence of disorder) should strongly affect the relative phase and amplitudes of the electron and hole components of the hybrid mode.
These in turn determine the coupling of the hybrid mode to the QH edge states at the corners of the sample (Figure 3d), likely resulting in the mesoscopic fluctuations of the superconducting current as V_G is varied. In conclusion, we have measured supercurrent through a quantum Hall region, mediated by Andreev bound states encompassing the edge channels on the opposite sides of the sample. These states are decidedly noninvariant under time reversal, and observing them makes an important step toward the realization of artificial superconducting hybrids in the quest for Majorana fermions and other exotic topological excitations proposed in Refs. [1][2][3]. Fractional quantum Hall states in graphene [25,26], which emerge in fields as small as 5 T [27], should bring these proposals into the realm of possibility. We also anticipate that control of these excitations will be greatly facilitated in 2D graphene nanostructures, where edge channels can be easily manipulated, split, and combined by the application of gate voltages.

Supplementary information for "Supercurrent in the quantum Hall regime"

1 Device fabrication

Graphene and boron nitride flakes are exfoliated on separate silicon wafer pieces with a 300 nm-thick thermally grown oxide, without prior oxygen plasma treatment. A 2×2 mm piece of polydimethylsiloxane (PDMS) is then adhered to a glass slide and treated in oxygen plasma for 1 min (60 W, 250 mTorr). A 1 µm-thick film of polypropylene carbonate (PPC) is spin-coated on a bare piece of a silicon wafer and baked at 80 °C for 10 minutes. The PPC film is then mechanically peeled from the silicon, which is facilitated by thicker PPC edge beads near the substrate edges, forming a relatively rigid frame. The PPC film is then carefully deposited on the PDMS stamp and baked for 10 min at 80 °C. In order to pick up flakes from their original substrate, the PDMS stamp is brought into contact with the flake as slowly as possible and baked at 50 °C; the flake is then usually picked up by the stamp as it is lifted. In order to deposit the assembled stack on the final substrate, the stamp is baked at 90 to 110 °C and peeled off very slowly, so that the stack stays on the substrate. The resulting stacks usually have smooth, defect-free terraces over several microns, separated by bubbles of trapped adsorbates [S1]. When the distribution of bubbles is not suitable for the fabrication of a defect-free device, we found that mild heating of the stacks at temperatures ranging from 200 to 250 °C often allows bubbles to migrate and rearrange into a different, more favorable distribution as the BN/graphene/BN terraces "self-clean". It is critical that the final mesa of the device is positioned in the middle of a terrace free of defects, as evidenced by Raman spectroscopy [Fig. S1b].

Electrical contacts to the flakes are patterned using e-beam lithography. The superconducting contacts are patterned with relatively thick PMMA (450 nm), then reactive-ion etched in a CHF_3/O_2 mixture (flow rates 40/6 sccm) at 1 Pa and 60 W power. The etch time varies between 90 and 210 seconds depending on the initial thickness of the stack. Superconducting contacts are then directly deposited using the same PMMA mask in a DC magnetron sputterer, which results in self-aligned quasi-one-dimensional contacts [S2]. The target consists of a molybdenum-rhenium alloy (50/50 wt%) with 99.9% purity. The chamber pressure during sputtering reaches 2 mTorr, with a power of 160 W and a deposition rate of approximately 50 nm/min.
A schematic of the final device is shown in Figure S1a. Table 1 lists the devices included in this study with their geometric parameters. Here, L is the distance between the superconducting contacts, and W is their width.

2 Evidence for ballistic transport at zero field

Figure S1c shows dV/dI(V_G) for junction J_3 measured at 6 K, above the critical temperature of the device. A narrow Dirac peak is visible at V_G ≈ −2 V, separating the electron- and hole-doped regimes. The electron-hole asymmetry is typical for two-terminal resistance measurements and comes from the work function mismatch between the metal contacts and graphene; in our case, it yields n-doping of graphene at the contacts. Figure S2a shows the voltage V across the junction as a function of current bias I and gate voltage V_G, measured at 1.4 K. Current flows without dissipation below the switching current I_S (dark blue region indicating a vanishing voltage V across the junction), beyond which point the junction switches to the normal state and a finite voltage appears. As expected, the switching current is minimal at the charge neutrality point. Furthermore, it is significantly smaller in the p-doped regime than in the n-doped regime. The suppression is a result of the PN junctions formed close to the contacts. The partial reflection from the PN junctions also induces a Fabry-Perot interference pattern, which can be observed in oscillations of the critical current (Figure S2b). Oscillations are expected whenever the phase accumulated across the junction, k_F L, is a multiple of π. We observe a periodicity ∆k_F ≈ 6.4×10^6 m^-1, to be compared with the expected 4.8×10^6 m^-1 for a 650 nm-wide cavity. The measured ∆k_F corresponds to an effective cavity length of 490 nm. This length corresponds to the distance between the PN junctions induced by the contacts in the hole-doped regime and is therefore expected to be shorter than the actual length of the junction. The existence of these Fabry-Perot oscillations suggests that electrons contributing to the supercurrent travel ballistically, as observed recently in short Josephson junctions [S2-S5]. The resonant transmission of charge carriers through the cavity is also observed in the bias dependence of dV/dI, similar to Ref. S4 [Fig. S2c].

3 Homogeneity of the current distribution

As a small magnetic field is applied perpendicular to the plane of the Josephson junction, the conventional Fraunhofer-like interference pattern appears in I_C(B), indicating that the supercurrent is uniformly distributed across the width of the junction (Figure S3) [S6]. The critical current vanishes whenever an integer multiple of magnetic flux quanta Φ_0 ≡ h/2e is threaded through the junction. The flux through graphene is enhanced due to flux focusing by the superconducting leads [S7], which in our case are wider than the graphene region. We can estimate the effective flux focusing area as the area of the MoRe region that is closer to the graphene interface than to any other edge. Depending on the width of the two contacts C_1,2, this area is on the order of W²/2 or W × (C_1/2 + C_2/2), whichever is smaller. Once flux focusing is included, the expected periodicity is close to our observations, summarized in Table 1.

[Figure S4 caption fragment: the marked regions indicate where the data of Figure 3d and e display magnetic interference patterns. Green squares show the locations used to determine the temperature dependence of superconductivity. The yellow rectangle is the area used for Figure 2 in the main text.]
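The two periodicities discussed in this section can be checked with one-line estimates: Fabry-Perot resonances recur when k_F L changes by π, and the Fraunhofer period corresponds to one flux quantum through the effective junction area. A sketch (the 650 nm cavity and the J_1 geometry are taken from the text; using the bare L×W area, before flux focusing, is our simplifying assumption):

import numpy as np

PHI0 = 6.62607015e-34 / (2 * 1.602176634e-19)   # flux quantum h/2e in Wb

def fp_period(L_cavity):
    # Fabry-Perot period in k_F for a cavity of length L_cavity
    return np.pi / L_cavity

print(f"{fp_period(650e-9):.2e} 1/m")     # ~4.8e6 1/m, the expected period
print(f"{np.pi / 6.4e6 * 1e9:.0f} nm")    # ~490 nm effective cavity length

area = 0.3e-6 * 2.4e-6                    # bare L*W area of junction J_1
print(f"{PHI0 / area * 1e3:.1f} mT")      # ~2.9 mT before flux focusing

Flux focusing by the wide MoRe leads enlarges the effective area and shrinks the Fraunhofer period toward the observed sub-mT values.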
Figure S5 shows an alternative representation of the coexistence of superconductivity and the quantum Hall effect for junctions J1-3 (panels a, b and c, respectively). Instead of showing two different maps at zero and finite DC bias, we plot alternating lines measured at zero DC bias and finite DC bias on the same fan diagram. Even lines are measured at finite bias and show the conventional quantum Hall effect. Odd lines are measured at zero bias and often show a lower differential resistance (darker), which vanishes in superconducting pockets. We cannot extract a clear dependence on junction geometry yet, but signatures of superconductivity are stronger in the shortest junction, J1, and weaker in the longest, J2.

5 Additional temperature dependence of the supercurrent

We determined the temperature dependence of the differential resistance for several superconducting pockets in the quantum Hall regime. The main purpose of these measurements is to verify that the switching current and the Josephson energy scale directly extracted from the I-V curves are meaningful. Indeed, although very small switching currents are sometimes taken to represent the true critical current (which determines the Josephson energy), it is clear that nA-scale currents, with Josephson energies in the range of tens of mK, will be strongly affected by thermal fluctuations. In this regime, thermal excitation of the phase causes phase diffusion; measuring its temperature dependence allows for an independent verification of the Josephson energy. Figure S6a shows the temperature dependence of dV/dI vs. I, measured on junction J1 at V_G = 4 V and B = 1 T from 60 to 500 mK. An Arrhenius fit to the minimum of dV/dI vs. T yields a Josephson energy scale of approximately 70 mK. This corresponds to a critical current of I_C ∼ 3 nA, compared to the measured switching current of I_S = 1 nA (see the sanity check below). Indeed, at a temperature of 40 mK, so that kT/E_J ≈ 0.5, thermal fluctuations are expected to make the measured switching current several times smaller than the true critical current. Additional data measured on a different pocket are shown in Figure S6c.

6 Bias and gate dependence of the supercurrent

Figure S7a shows dV/dI(V_G, I) measured at 1 T and 65 mK for device J4. In that regime, r_C is much smaller than the dimensions of the device and the bulk of the graphene sheet is in the quantum Hall regime. Vertical strips of constant resistance in Figure S7a correspond to the quantum Hall plateaus. When applying a DC current of 2 nA, the supercurrent is suppressed and plateaus of dV/dI(V_G) are observed [Fig. S7b]. Plateaus are better defined in the electron-doped regime as a result of the better transmission at the contacts. Regions of suppressed resistance are visible around zero current. In magnetic field, these pockets demonstrate the same types of interference patterns as shown in Figure 3 of the main text. We discuss them in some detail in Figure S8 below.

7 Additional interference patterns in the quantum Hall regime

Figure S8 demonstrates the oscillations of the critical current in magnetic field, similar to Figure 3 of the main text, here measured on J4. Figures S8a,b show the supercurrent oscillations in a superconducting bubble at V_G = 0 V and V_G = 55 mV around 1 T. Figure S8(c,d) shows another example of the interference pattern as a function of V_G and B, also measured on J4 with an AC excitation of 50 pA at zero bias (c) and a finite bias of 1.5 nA (d).
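The relation between the Josephson energy and the critical current, E_J = ħI_C/2e, connects the numbers quoted above; a minimal numeric check (again a sketch, not part of the original analysis):

```python
import scipy.constants as c

E_J = c.k * 70e-3                  # Josephson energy from the Arrhenius fit (~70 mK)
I_C = 2 * c.e * E_J / c.hbar       # invert E_J = hbar * I_C / (2e)
print(f"I_C ~ {I_C * 1e9:.1f} nA") # ~2.9 nA, vs the measured switching current of 1 nA

T = 40e-3                          # base temperature (K)
print(f"kT/E_J ~ {c.k * T / E_J:.2f}")  # ~0.57: deep in the phase-diffusion regime
```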
While no field dependence is noticeable in the normal state, an interference pattern with a period of 0.65 mT is visible at zero bias. Similar to the data shown in the main paper, constant-phase contours depend on both B and V_G.

8 Transport in the semiclassical regime

The primary focus of our work was the quantized regime, where the cyclotron radius r_C = ħk_F/eB is much smaller than both the mean free path and the device dimensions. Here we discuss magnetotransport in the semiclassical regime, 2r_C ≳ L, where the deflection of electron and hole trajectories is weaker, yet sufficient for the supercurrent not to be uniform in space. Figure S9a shows the Landau fan diagram measured on J3 with a large excitation current of 5 nA. The excitation current is sufficient to destroy superconductivity in the quantum Hall regime, but a larger supercurrent can persist in the semiclassical regime, under the parabola corresponding to the condition 2r_C = L. This supercurrent has been attributed to Andreev bound states made of closed trajectories connected by multiple elastic and Andreev reflections [S4], as schematically shown in Fig. S9b. Figure S9c demonstrates supercurrent in both the semiclassical and the QH regimes. Although the magnitude of the supercurrent changes between the two regimes (notice the change of the vertical scale between the left and the right parts of the map), the transition between the two regimes is gradual, and both parts of the map demonstrate mesoscopic variations of the supercurrent as a function of the gate voltage. To further illustrate the semiclassical behavior, we show the voltage across the junction, V(B, I), measured in small steps around 100 mT at V_G = 8.4 V (Figure S9d). In this regime, r_C ≈ 920 nm > L/2 = 320 nm (a rough estimate of r_C from the gate voltage is sketched below). Regions of superconductivity are clearly visible, corresponding to V = 0, with a switching current ranging from 5 nA to 40 nA. These superconducting regions are much less periodic in field and vary greatly in amplitude, in contrast with the QH regime. We illustrate the dependence of these superconducting pockets on B and V_G by measuring them using the same AC excitation current of 1 nA at a) zero DC current, b) a medium DC current of 6 nA, and c) a large DC current of 100 nA [Fig. S10(a-c)]. Strikingly, at zero bias the sample remains in the superconducting regime throughout the map, with very rare spots of finite resistance. At I_DC = 6 nA, we observe a random patchwork of superconducting regions similar to Ref. [S4]. This indicates that superconducting regions mostly close and reopen at random fields and densities, but a superconducting current of at least hundreds of pA remains throughout most of this region. At large bias (100 nA), the junction is in the normal state over the entire map, which becomes mostly flat, as expected.
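The quoted cyclotron radius can be roughly reproduced from the gate voltage; the sketch below assumes a simple parallel-plate model for the 300 nm SiO2 back gate (relative permittivity ~3.9) and a Dirac point near zero gate voltage, neither of which is stated explicitly in the text:

```python
import numpy as np
import scipy.constants as c

V_G, B = 8.4, 0.1                                 # gate voltage (V) and field (T)
n = 3.9 * c.epsilon_0 * V_G / (300e-9 * c.e)      # carrier density from C_ox * V_G / e
k_F = np.sqrt(np.pi * n)                          # Fermi wavevector in graphene
r_C = c.hbar * k_F / (c.e * B)                    # cyclotron radius r_C = hbar*k_F/(e*B)
print(f"n ~ {n:.1e} m^-2, r_C ~ {r_C*1e9:.0f} nm")
# -> ~6e15 m^-2 and ~900 nm, consistent with the ~920 nm quoted above
```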
2015-12-30T19:52:49.000Z
2015-12-30T00:00:00.000
{ "year": 2015, "sha1": "f41b0c34e201ba5c50f4c61983a483dd92a4d386", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1512.09083", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f41b0c34e201ba5c50f4c61983a483dd92a4d386", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
238418824
pes2o/s2orc
v3-fos-license
Variation in influenza vaccine assessment, receipt, and refusal by the concentration of Medicare Advantage enrollees in U.S. nursing homes

Background: More older adults enrolled in Medicare Advantage (MA) are entering nursing homes (NHs), and MA concentration could affect vaccination rates through shifts in resident characteristics and/or payer-related influences on preventive services use. We investigated whether rates of influenza vaccination and refusal differ across NHs with varying concentrations of MA-enrolled residents. Methods: We analyzed 2014–2015 Medicare enrollment data and Minimum Data Set clinical assessments linked to NH-level characteristics, star ratings, and county-level MA penetration rates. The independent variable was the percentage of residents enrolled in MA at admission, categorized into three equally-sized groups. We examined three NH-level outcomes: the percentages of residents assessed and appropriately considered for influenza vaccination, who received influenza vaccination, and who refused influenza vaccination. Results: There were 936,513 long-stay residents in 12,384 NHs. Categories for the prevalence of MA enrollment in NHs were low (0% to 3.3%; n = 4131 NHs), moderate (3.4% to 18.6%; n = 4127 NHs) and high (>18.6%; n = 4126 NHs). Overall, 81.3% of long-stay residents received influenza vaccination and 14.3% refused the vaccine when offered. Adjusting for covariates, influenza vaccination rates among long-stay residents were higher in NHs with moderate (1.70 percentage points [pp], 95% confidence limits [CL]: 1.15 pp, 2.24 pp) or high (3.05 pp, 95% CL: 2.45 pp, 3.66 pp) MA versus the lowest prevalence of MA. Influenza vaccine refusal was lower in NHs with moderate (−3.10 pp, 95% CL: −3.53 pp, −2.68 pp) or high (−4.63 pp, 95% CL: −5.11 pp, −4.15 pp) MA compared with NHs with the lowest prevalence of MA. Conclusion: A higher concentration of long-stay NH residents enrolled in MA was associated with greater influenza vaccine receipt and lower vaccine refusal. As MA becomes a larger share of the Medicare program, and more MA beneficiaries enter NHs, decision-makers need to consider how managed care can be leveraged to improve the delivery of preventive services like influenza vaccinations in NH settings.

Background

Despite Medicare coverage with no out-of-pocket cost to the beneficiary, and CMS vaccination requirements for nursing homes (NHs), influenza vaccination coverage in NHs remains suboptimal [1,2]. At 73.1% during the 2018-2019 influenza season, vaccination coverage among adults living in NHs fell short of the Healthy People 2020 goal of 90% [3]. The coronavirus disease 2019 (COVID-19) pandemic has not only placed a spotlight on the vulnerability of NH residents to morbidity and mortality from respiratory infections, but it has also emphasized racial and socioeconomic inequities in patient care and outcomes [4,5]. Influenza vaccination can reduce influenza severity and prevent hospitalizations, helping to avoid the compounded effects of influenza and COVID-19 co-circulation on the healthcare system and society [6,7]. The widespread provision and acceptance of influenza vaccines is central to any effective strategy to mitigate the spread of influenza infection in NHs. Thus, efforts to promote vaccine uptake in NHs require careful consideration of several factors, including resident profiles, facility attributes, and type of health insurance coverage.
Enrollment in Medicare Advantage (MA) is growing and is projected to increase to 51% of all Medicare beneficiaries in 2030 from 34% in 2019 [8]. Simultaneously, there is rising enrollment of racial and ethnic minorities in MA, and the health profiles of MA and Traditional Medicare (TM) enrollees are increasingly similar over time [9][10][11][12][13][14]. The historical selection of healthier beneficiaries into MA has diminished (if not reversed) because Medicare implemented changes reducing incentives for MA plans to select enrollees with more favorable risk profiles [15]. MA is distinct from TM in that the Centers for Medicare and Medicaid Services (CMS) pay private health insurance plans a fixed capitated fee to provide health benefits for Medicare beneficiaries. Features of MA plans such as their payment models, coordinated care, and outreach programs urging high-risk members to get vaccinated may encourage screening and preventive care use to prevent costly medical services [16]. Research among community-dwelling beneficiaries has found higher rates of preventive services use (e.g., mammography screening, annual influenza vaccinations, and cholesterol testing) in MA compared with TM [17][18][19]. However, these studies did not consider NH populations and rely on data from more than a decade ago. MA plans and the characteristics of their enrollees have changed over time, as has the population of individuals receiving care in NHs [20,21]. The composition of MA beneficiaries in a NH and its relationship with the proportion of residents vaccinated has not been characterized. Yet, this is an important lens through which to examine and address gaps in influenza vaccination coverage in NHs, especially as MA plans may be selectively contracting with NHs, such as those that are larger and part of a chain [22]. Furthermore, the direction of the relationship between MA concentration and NH vaccination rates is uncertain. The shifts in the composition of MA enrollees and the payment model that incentivizes preventive services present opposing possibilities for vaccination rates in NHs. A 'financial incentives' hypothesis suggests that an increased proportion of NH residents enrolled in MA produces higher influenza vaccination rates; participating plans that promote the health of their enrollees in NHs may emphasize preventive services such as influenza vaccinations [23,24]. Another hypothesis, informed by previous research, proposes that the racial composition of NHs, based on the percentage of Black residents, contributes to individual- and facility-level variation in vaccination coverage [2,25,26]. Under this hypothesis, the proportion of non-White beneficiaries would increase in NHs with more MA enrollees, lowering vaccination rates owing to disparities in care. In this context, we aimed to determine how measures of influenza vaccination offer, receipt and refusal differ among NHs with varying concentrations of residents enrolled in MA.

Study design and data sources

We conducted a national retrospective cohort study of 100% of older adult Medicare beneficiaries residing in NHs during the 2014-2015 influenza season (October 1, 2014 to March 31, 2015). We selected this period to identify the study population because it overlaps with the period over which influenza vaccination is entered on the Minimum Data Set (MDS) when received from October 1 to March 31. To maximize generalizability, we included all free-standing NHs in the 50 U.S.
states, District of Columbia, and Puerto Rico, excluding hospital-based facilities because of significant case-mix and structural differences [27]. We analyzed long-stay (≥100 days) NH residents who were ≥65 years of age. The 100-day cutoff is informed by Medicare reimbursement policy covering up to 100 days of post-acute skilled nursing facility (SNF) care during each benefit period [28]. We used Medicare enrollment data combined with MDS version 3.0 clinical assessments and facility-level data. We obtained NH organizational and aggregate resident characteristics from Certification and Survey Provider Enhanced Reports (CASPER) and LTCFocus.org (LTCFocus: Long-Term Care: Facts on Care in the US) data. We used CMS's Nursing Home Compare for overall and domain-specific (staffing, quality, inspections) star rating data.

NH MA concentration

We determined a beneficiary's status of MA coverage at the time of NH admission using Medicare enrollment data. We calculated the percentage of residents in each NH who were enrolled in MA. We used the rank procedure to create a dummy variable categorizing NHs into tertiles (low, moderate, high) based on their percentage of MA enrollees (illustrated in the sketch below).

Outcomes

We used the MDS to ascertain influenza vaccination status and reasons for vaccine nonreceipt. Although the study population included residents in a NH between October 1 and March 31, MDS assessments reporting influenza vaccination received during those dates can be submitted through June 30. Therefore, in line with the Nursing Home Compare influenza vaccination quality measures, we assessed all MDS assessments for eligible residents from October 1, 2014 to June 30, 2015 [29]. To reduce misclassification of vaccination status, we counted beneficiaries who received an influenza vaccine outside the NH during the current influenza season as vaccinated. This consideration is particularly relevant for short-stay residents, who are more likely to be vaccinated in the hospital or elsewhere compared with long-stay NH residents. We examined three vaccination measures at the NH level: 1) the percentage of residents assessed and appropriately provided the seasonal influenza vaccine; 2) the percentage who received the influenza vaccine; and 3) the percentage who refused the influenza vaccine. The percent of residents appropriately assessed and provided influenza vaccination was defined as the sum of the percent vaccinated (i.e., appropriately provided), the percent offered and refused, and the percent not eligible/contraindicated (i.e., appropriately assessed). Other possible reasons for non-vaccination that we do not report due to small cell sizes are "inability to obtain influenza vaccine due to a declared shortage" if vaccine is unavailable at the NH and "none of the above."

Covariates

Our analysis adjusted for NH-level variables that capture the demographic (age, sex, race/ethnicity) composition of residents and their physical and clinical attributes (e.g., acuity index, activities of daily living scale, cognitive function scale, and comorbidities including serious mental illness and heart failure), as well as facility structural (e.g., for-profit ownership, bed count, occupancy rates, rurality, payer mix) and quality (overall star rating) characteristics. These were selected based on prior literature and substantive knowledge. The overall star rating is a composite score (ranging from 1 to 5) that takes into account a NH's performance on staffing, health inspections, and care quality measures.
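As a concrete illustration of the rank-based tertile construction described under "NH MA concentration", here is a minimal pandas sketch; all column names and values are hypothetical and are not taken from the study data:

```python
import pandas as pd

# Hypothetical facility-level frame: one row per NH, with the share of
# residents enrolled in MA at admission (illustrative numbers only).
nh = pd.DataFrame({"nh_id": range(1, 7),
                   "pct_ma": [0.0, 2.1, 5.7, 15.0, 22.4, 60.0]})

# Rank first, then cut the ranks into three equally sized groups, mirroring
# the paper's low / moderate / high MA-concentration categories.
nh["ma_tertile"] = pd.qcut(nh["pct_ma"].rank(method="first"), 3,
                           labels=["low", "moderate", "high"])
print(nh)
```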
We included the Herfindahl-Hirschman index, which measures the concentration of NH beds in a county, as a covariate to account for variation in NH availability. Additionally, we controlled for the county-level MA penetration rate, since MA markets vary substantially. MA penetration is defined as the share of Medicare beneficiaries enrolled in MA plans per county. We used MA penetration data from September 2014, the month prior to the start of our observation period [30]. We imputed the state average MA penetration rate for counties with missing or suppressed penetration values due to small sample sizes.

Statistical analysis

We compared the characteristics of NHs with different concentrations of residents with MA coverage. To assess the relationship between MA concentration and influenza vaccination rates, we conducted a linear regression analysis specified to account for clustering of residents within facilities and facilities within counties, using the Huber-White sandwich estimator via generalized estimating equations (see the sketch below). We specified an unstructured working correlation structure. In the model we included the dummy variable for the NH's MA concentration and the above-described covariates. Variables with a p-value < 0.05 were considered associated with the outcome.

Stability analysis

We carried out a stability analysis to determine the robustness of the results by applying an alternate and stricter definition of MA enrollment that required MA coverage during the entire observation period instead of only at admission. This analysis provides information on the extent to which switching from MA to FFS after admission affects the results.

Analysis of Short-Stays

We additionally analyzed short-stay (<100 days) residents, as they can be co-located with long-stay residents in NHs and account for an increasing share of NH residents [31]. Therefore, influenza vaccination for short-stay residents has implications for NH-wide efforts to prevent and control the spread of influenza infection. Several reasons warrant the separate analysis of long-stay and short-stay residents. First, prior research found differences in risk factors for influenza infection and outcomes between these groups [32][33][34]. Second, there are distinct care goals for short- and long-stay residents [35,36]. Short-stay residents receive recuperative and rehabilitative skilled nursing care immediately following hospitalization, prior to returning home, whereas long-stay residents predominantly receive custodial care, including assistance with activities of daily living. Finally, there are potential differences in how reliably influenza vaccination status is captured in the MDS depending on the duration of a resident's NH stay. A short-stay resident is more likely to be vaccinated outside the NH, and there is a possibility of undercounting if influenza vaccination status is not communicated to the NH upon admission. We present the short-stay results in the online supplementary materials.

Software, data use agreement, and ethics approval

Data preparation and analyses were conducted using SAS version 9.4 (SAS Institute, Inc., Cary, NC). The Brown University Institutional Review Board approved this study.

Long-stay cohort

From a national total cohort of 1,690,642 Medicare beneficiaries ≥65 years of age, we identified 936,513 long-stay residents living in 12,384 unique Medicare-certified NHs between October 1, 2014 and March 31, 2015 (Table 1).
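The clustered linear model described under "Statistical analysis" can be sketched with the GEE implementation in statsmodels. This is an illustrative approximation only: the study used SAS, all variable names and data below are invented, GEE here handles a single clustering level (county), and an exchangeable working correlation is used as a simple stand-in for the paper's unstructured specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy facility-level data; names and values are hypothetical.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "vacc_rate": rng.normal(81, 5, 300),                 # % vaccinated per NH
    "ma_tertile": rng.choice(["low", "mod", "high"], 300),
    "for_profit": rng.integers(0, 2, 300),
    "county": rng.integers(0, 40, 300),                  # clustering unit
})

# Linear model with robust (Huber-White) standard errors via GEE,
# clustering facilities within counties; 'low' MA is the reference group.
model = smf.gee("vacc_rate ~ C(ma_tertile, Treatment('low')) + for_profit",
                groups="county", data=df,
                family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

The coefficients on the tertile dummies would then be read as percentage-point differences relative to the low-MA group, analogous to the estimates reported in the abstract.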
At the resident level, the overall prevalence of MA enrollment at the time of NH admission was 21.4% among long-stay residents. When NHs were classified into three groups by their prevalence of residents enrolled in MA, the groups were defined by the following thresholds: low MA concentration (0% to 3.3%), moderate MA concentration (3.4% to 18.6%), and high MA concentration (>18.6%). The range of MA prevalence in NHs within the highest MA concentration category was 18.64% to 100%. There were 13 NHs, representing 0.3% of the 4126 NHs in the highest concentration category, with an MA prevalence of more than 90%.

Resident and NH characteristics by MA concentration

Resident and facility characteristics varied by the prevalence of MA beneficiaries in NHs. As the prevalence of MA-enrolled residents increased, the beneficiaries tended to be older and more racially and ethnically diverse. NHs with the highest prevalence of MA enrollees were more often larger, part of a chain system, and located in urban settings. The resident acuity index varied minimally across categories of MA prevalence. However, NHs with increasing MA prevalence had residents with more limitations in activities of daily living and greater cognitive impairment, but lower levels of serious mental illness, than NHs with lower MA prevalence. The majority (89.2%) of NHs with low MA prevalence had a high overall star rating of 4 or 5, compared with about half of NHs in the other MA categories that met the same ratings. The prevalence of MA in NHs was higher for facilities located in counties with greater MA penetration rates.

Influenza vaccination rates by MA concentration

On average, 96.9% of long-stay residents in NHs with a low prevalence of MA-enrolled residents were assessed and appropriately considered for influenza vaccination, compared with 94.7% of residents in NHs with the highest prevalence of MA. While the unadjusted rates of influenza vaccine receipt were similar, vaccine refusal decreased as the prevalence of MA enrollees in a NH increased (Fig. 1). NH variables that were positively associated with higher rates of appropriate assessment and provision and influenza vaccination included mean age, occupancy rate, high NH quality star rating, percent with serious mental illness, percent paying with Medicaid, and the Herfindahl-Hirschman index. In contrast, for-profit and chain ownership and an increasing percent of Black residents were associated with decreased assessment and appropriate provision, and influenza vaccination. See Table 3 for the covariate estimates and the corresponding 95% confidence limits.

Stability analysis: alternate MA enrollment definition

Changing the definition of MA enrollment yielded substantively similar results to the main analysis. There appeared to be a clearer dose-response relationship when MA was defined on the basis of enrollment throughout the entire observation period rather than at the time of admission. See supplementary Table S1.

Short-stay analysis

See the supplementary materials (Tables S2-S4 and Figure S1) for a summary of the short-stay results.

Discussion

This study investigated influenza vaccination receipt and nonreceipt among older adults in NHs, and their variation on the basis of the concentration of residents enrolled in MA. We found that although nearly all long-stay residents were assessed and appropriately considered for influenza vaccination (95.5%), influenza vaccine receipt was lower (81.3%), largely due to high refusal rates (13.4%) when the vaccine was offered.
Additionally, although crude estimates were similar, in adjusted models we found that as the concentration of MA enrollees increased, so did receipt of influenza vaccination among long-stay residents. Our finding that NHs with a greater share of MA enrollees have higher influenza vaccination coverage among long-stay residents is consistent with perspectives that MA plans promote preventive care use. Given that nearly all the attention on MA efforts to improve preventive care use has targeted community-dwelling beneficiaries, the extent to which MA plans conduct health promotion efforts in NHs is unknown. Individual MA plans conduct care coordination and health promotion efforts for their beneficiaries with varying rigor and success. As such, MA beneficiaries may not experience these benefits uniformly, as MA plans are not created equal [24]. The processes that MA plans have in place for outreach and education for providers and patients in NH settings deserve attention in efforts to increase vaccination rates. The importance of addressing this knowledge gap is magnified by the growing enrollment in MA [8], the expensive costs of post-acute and long-term care [37], and the high risk of morbidity and mortality due to respiratory infections in NH residents and older adults generally [38,39]. The COVID-19 pandemic adds further imperative to explore levers (e.g., care coordination and initiatives to promote preventive care) at the MA plan level to improve NH influenza vaccination coverage. While improving uptake of the annual influenza vaccine is a perennial challenge [2], the availability of a vaccine for COVID-19 means that it will be even more critical to ensure high vaccination rates among NH residents, a population that has experienced disproportionately high rates of COVID-19 cases and deaths [40]. Since the composition of residents in a NH often includes a mix of post-acute short-stay and long-stay residents [41], effective influenza mitigation strategies should also target improving the assessment and appropriate provision of the vaccine to short-stay residents [34]. This may require NHs to maintain vaccine supplies over a longer period during the influenza season. Doing so could create a low-barrier opportunity for NHs to improve their influenza vaccination performance by extending their efforts to offer and administer the vaccine to short-stay residents. Such targeted efforts could be especially beneficial for NHs with large proportions of short-stay residents. In addition, our results suggest that actions to improve overall NH Compare star ratings (targeting 4 or 5 stars) could contribute to better vaccination rates. While the quality domain of the star ratings includes NH vaccination coverage, this is unlikely to fully explain the strong independent associations of the overall star rating with vaccination rates in multivariable analyses. This study has limitations. First, this is a cohort study focusing on a single influenza season (2014-2015). Nonetheless, the findings provide foundational evidence that points to the relevance of further investigation through longitudinal and more recent data. Second, we relied on facility-level resident acuity and comorbidity measures from the CASPER database rather than resident-level MDS clinical assessments. However, by using CASPER variables we avoided making assumptions that would be required to handle missing data, particularly for short-stay residents, who more frequently have missing information on MDS-derived variables.
In addition, our findings may not generalize to beneficiaries younger than 65 years, residing in the community, or with insurance coverage other than Medicare. In conclusion, this study found that a higher concentration of MA beneficiaries in NHs was associated with increased rates of influenza vaccination receipt among long-stay residents after adjusting for covariates. Vaccine refusal when offered was lower as the prevalence of long-stay MA beneficiaries increased. As the MA program continues to grow and more MA-enrolled beneficiaries enter NHs, concerted efforts by MA plans and NHs will be essential to improve influenza vaccination rates and reduce vaccine refusals. This importance is magnified in the COVID-19 era, when mitigating the transmission of respiratory infections is of critical importance for the health of NH residents and staff.

Supplementary Material

Refer to the Web version on PubMed Central for supplementary material.

Funding

This work was supported by a grant to Brown University from Sanofi Pasteur (project identifier: 005705). The authors' institution retained the right to publish and publicly present all results. Sanofi Pasteur was not involved in establishing the scope of the study, creating the initial protocol, designing the study, or performing analysis, but was involved in suggesting edits to the final study protocol and reviewing the final manuscript. Incorporation of any edits suggested by Sanofi Pasteur was not compulsory. Dr. Zullo is also supported, in part, by grants from the National Institute on Aging (R21AG061632 and R01AG065722) and the National Institute of General Medical Sciences (U54GM115677).

Fig. 1. Unadjusted vaccination receipt and non-receipt by Medicare Advantage concentration among long-stay residents, 2014-2015.

Table 3. Estimates from multivariable regression models of vaccination rates among long-stay nursing home residents, 2014-2015.
2021-10-07T19:08:33.025Z
2021-10-07T00:00:00.000
{ "year": 2022, "sha1": "24cd1b40c227805e72a2fefbb722e0e1aba6dd97", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.vaccine.2021.12.069", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "744e4845a1817bb0fab363511ce0b2dd6f2665ff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15291894
pes2o/s2orc
v3-fos-license
CPT-11 sensitivity in relation to the expression of P170-glycoprotein and multidrug resistance-associated protein.

The relevance of P170-glycoprotein (P-gp) and multidrug resistance-associated protein (MRP) for the sensitivity to CPT-11 was investigated in human malignant cell lines as well as in human tumour xenografts. In vitro, the P-gp-positive sublines BRO/mdr1.1 (transfected with MDR1) and 2780AD were slightly cross-resistant against carboxylesterase-activated CPT-11. Cross-resistance against SN-38 was present in 2780AD cells, but not in BRO/mdr1.1 cells. The P-gp modulators BIBW22BS, verapamil and dexniguldipine partly reversed the resistance against CPT-11 in the P-gp-positive sublines. BIBW22BS was the most effective modulator in the reversal of the resistance against carboxylesterase-activated CPT-11 as well as against SN-38 in the 2780AD subline. In contrast to doxorubicin and vincristine, the BRO/mdr1.1 xenografts were at least as sensitive to CPT-11 as the BRO xenografts. The 2780AD xenografts were slightly less sensitive than the parent tumours, but there was no difference in topoisomerase I DNA unwinding activity. Therefore, the high retention of the multidrug-resistant phenotype of 2780AD cells in vivo may be the cause of the low cross-resistance against CPT-11. The MRP-positive subline GLC4/ADR was cross-resistant against carboxylesterase-activated CPT-11 and SN-38. GLC4/ADR cells, however, demonstrated a twofold lower topoisomerase I activity than GLC4 cells. Cross-resistance against the camptothecin derivatives was not apparent in the MRP-transfected subline of SW1573/S1. In conclusion, P-gp-positive cells show a low cross-resistance against CPT-11/SN-38, which is only apparent with high P-gp expression in vivo. MRP does not seem to play a role in the sensitivity to CPT-11.

It has been suggested, however, that some of the semisynthetic camptothecin analogues, because of their positive charge at physiological pH, might be affected by P-gp expression. This is based on the observation that P-gp preferentially exports positively charged hydrophobic natural compounds (Zamora et al, 1988). Little is known on the role of MRP in resistance against camptothecin analogues. In the present experiments, we compared the activity of CPT-11 and its metabolite SN-38 in P-gp- and MRP-positive sublines and their parental cell lines. Several P-gp modulators were studied for their ability to reverse the resistance against CPT-11 and SN-38 in vitro. The resistant sublines and parent cell lines were analysed for differences in topoisomerase I gene expression and topoisomerase I activity. In addition, we compared the efficacy of CPT-11 in nude mice implanted with P-gp-positive xenografts with that in mice bearing the parental P-gp-negative xenografts.

MATERIALS AND METHODS

Drugs were kindly provided by Rhone-Poulenc Rorer (Vitry sur Seine, France). CPT-11 was available as a solution of 20 mg ml-1. SN-38, as a powder, was dissolved in dimethylsulphoxide (DMSO; Acros, Geel, Belgium) to a final concentration of 10 mM. Vincristine (Eli Lilly, Amsterdam, The Netherlands) was purchased as a solution of 1 mg ml-1. Doxorubicin (Farmitalia Carlo Erba, Nivelles, Belgium) was dissolved in water at a concentration of 2 mg ml-1. Carboxylesterase (EC 3.1.1.1), isolated from porcine liver, was purchased from Sigma (Zwijndrecht, The Netherlands). BIBW22BS (from Dr Karl Thomae, Biberach an der Riss, Germany) was first dissolved in 0.1 M hydrochloric acid and then diluted in 0.9% sodium chloride to a final concentration of 2 mM at pH 2.7.
Verapamil (Knoll, Amsterdam, The Netherlands) was provided as a solution of 2.5 mg ml-1. Dexniguldipine was obtained from Byk Gulden, Konstanz, Germany; the powder was dissolved in 0.5 ml of 5% polyethyleneglycol (PEG) 400 supplemented with 0.5 ml of 0.01 M hydrochloric acid to a final concentration of 10 mM. Drugs and resistance modulators were further diluted in tissue culture medium when investigated for their antiproliferative effects in vitro. Resistant sublines were cultured in the presence of the selecting drug until 3 days before the experiments. All cell lines were free from Mycoplasma contamination, as tested regularly with the Mycoplasma TC rapid detection system with a 3H-labelled DNA probe from Gene-Probe (San Diego, CA, USA).

Proliferation inhibition experiments

Experiments to measure the inhibition of proliferation were carried out in 96-well microtitre plates, and the percentage of viable cells at the end of the incubation period was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) assay. In short, 3000-5000 cells per well in 100 µl of medium were plated and grown for 24 h, drugs (100 µl) were added and the cells were cultured for an additional 96 h. Then the medium was removed and 50 µl of MTT (0.4 mg ml-1) (Sigma) diluted in phosphate-buffered saline were added. The plates were incubated for 4 h and the blue dye formed was dissolved in 200 µl of DMSO. The absorbance was measured at 540 nm using a Labsystems Multiskan Bichromatic plate reader (Labsystems, Helsinki, Finland). The results were expressed as IC50 values, which are the concentrations of the drug required to induce 50% inhibition of the growth of treated cells compared with the growth of control cells. The resistance factor (RF) was expressed as the ratio of the IC50 of the resistant subline divided by the IC50 of the parent cell line (a worked sketch follows below). In control cultures, cells grew exponentially during the incubation period. All drug concentrations were tested in four replicate wells and the experiments were performed at least four times.

In vivo sensitivity

Female nude mice (Hsd: athymic nude-nu) were purchased at the age of 6 weeks (Harlan CPB, Zeist, The Netherlands). The animals were housed in filter-top cages under sterile conditions. Cages, covers, bedding, food and water were sterilized and changed weekly. Animal handling was done in a laminar down-flow hood. For the animal experiments, permission was obtained from the University Ethical Committee (project number Onc 94-01). Xenografts were established from cell lines grown in tissue culture medium. Mice were inoculated subcutaneously (s.c.) with 1 × 10^7 cells in both flanks (passage 1). Solid tumours arising at the inoculation site were transferred as tissue fragments of 2- to 3-mm diameter through a small incision into both flanks of 8- to 10-week-old mice. A previous study demonstrated the retention of the multidrug-resistant phenotype in s.c. BRO/mdr1.1 xenografts for >15 serial passages (Jansen et al, 1994). For the 2780AD cells, a partial loss of multidrug resistance was found in passage 2 or higher (unpublished data). Therefore, experiments with 2780AD xenografts were carried out in passage 1.

[Table 2 fragment, recovered from a page break: SW1573/S1 — IC50 CPT-11 2.7 (±0.4) × 10^-6 M; carboxylesterase-activated CPT-11 3.7 (±1.2) × 10^-8 M; SN-38 1.9 (±0.5) × 10^-8 M. SW1573/S1 (MRP) — 3.9 (±0.5) × 10^-6 M, RF 1.4; 1.3 (±0.4) × 10^-7 M, RF 3.5; 1.5 (±0.3) × 10^-8 M, RF 0.8. RF, resistance factor, expressed as the ratio of the IC50 of resistant cells to the IC50 of parent cells. *Significant, P < 0.05.]
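To make the IC50 and resistance-factor definitions concrete, here is a minimal sketch that fits a simple sigmoidal (Hill-type) dose-response curve to synthetic MTT readouts and computes the RF. The curve model and all numbers are illustrative (loosely echoing the BRO values reported later in the text) and do not represent the authors' fitting procedure, which is not described in detail:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of control growth for a sigmoidal dose-response curve."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

def fit_ic50(conc, growth):
    popt, _ = curve_fit(hill, conc, growth, p0=[np.median(conc), 1.0])
    return popt[0]

rng = np.random.default_rng(0)
conc = np.logspace(-9, -4, 8)                       # drug concentrations (M)
parent = hill(conc, 8.7e-7, 1.2) + rng.normal(0, 0.02, conc.size)
resistant = hill(conc, 2.6e-6, 1.2) + rng.normal(0, 0.02, conc.size)

ic50_p, ic50_r = fit_ic50(conc, parent), fit_ic50(conc, resistant)
print(f"RF = {ic50_r / ic50_p:.1f}")                # IC50(resistant) / IC50(parent)
```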
Tumour growth was measured weekly in three dimensions with slide callipers by the same observer. The tumour volume was expressed by the equation length × width × height × 0.5 in mm^3. At the start of treatment (day 0), groups of 5 or 6 tumour-bearing mice were formed to provide a mean tumour volume of approximately 150 mm^3 in each group. For in vivo use, CPT-11 was further diluted in 0.9% sodium chloride to 2 mg ml-1. CPT-11 was administered intraperitoneally on days 0, 1, 2, 3 and 4, as this schedule was more effective than a weekly ×2 schedule (Jansen et al, 1997b). The 20 mg kg-1 dose was the maximum-tolerated dose of CPT-11 given daily ×5, based on the occurrence of a reversible weight loss of approximately 10% of the initial weight within the first 2 weeks after day 0. Vincristine 1 mg kg-1 and doxorubicin 8 mg kg-1 given intravenously weekly ×2 were the maximum-tolerated doses as described earlier (Jansen et al, 1994). For the evaluation of drug efficacy, the tumour volume was expressed by the formula V_T/V_0, where V_T is the volume on any given day and V_0 is the volume on day 0. The ratio of the mean relative volume of treated tumours over that of control tumours multiplied by 100% (T/C%) was assessed on each day of measurement. Anti-tumour effects were expressed as the maximum percentage of growth inhibition (100 − T/C%); see the sketch below.

Topoisomerase I gene expression

Total cellular RNA was isolated from exponentially growing cells and from frozen xenograft tissue with RNAzol B (Campro Scientific, Veenendaal, The Netherlands). An α-32P-labelled RNA complementary to a 703-bp topoisomerase I cDNA sequence (nucleotides 835-1538) (Juan et al, 1988) inserted into pGEM3 was transcribed from FokI-linearized DNA using T7 polymerase. The RNAase protection assay was carried out as described (Giaccone et al, 1995). In all experiments, a probe for γ-actin was included to control for RNA loading. The hybridized probe was visualized after electrophoresis through a denaturing 6% acrylamide gel. For autoradiography, the gel was exposed at -70 °C to a Kodak BIOMAX MR film for 3 days. The amount of topoisomerase I mRNA relative to the amount of γ-actin was calculated by densitometric scanning of the autoradiograms. Topoisomerase I gene expression was determined at least twice in each cell line and twice in four separate xenografts originating from a cell line.

Topoisomerase I activity

Cells or xenograft tissue was lysed on ice for 10 min in nuclear buffer supplemented with Triton-X, 1 nM phenylmethylsulphonyl fluoride (PMSF) (Merck, Amsterdam, The Netherlands) and 0.2 µM dithiothreitol (DTT) (Sigma). Nuclear enzymes were extracted from cell nuclei by incubation with nuclear buffer containing 0.4 M sodium chloride for 30 min on ice. After centrifugation, the enzyme solution was diluted with an equal volume of 87% glycerol and stored at -70 °C for a maximum of 1 week. Topoisomerase I activity was determined by measuring the relaxation of supercoiled pBR329 plasmid DNA by incubation with serial dilutions of nuclear extracts (1-100 µg) at 37 °C for 30 min. Supercoiled and relaxed DNA were separated on a 1% agarose gel by electrophoresis and visualized by ethidium bromide staining. One unit of topoisomerase I activity was defined as the complete relaxation of 1 µg of supercoiled pBR329 plasmid DNA per min at 37 °C. DNA topoisomerase I activity was measured at least four times in each cell line and at least twice in four separate xenografts of a cell line.
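The tumour-volume and growth-inhibition formulas above translate directly into code; a small sketch with invented example numbers:

```python
def tumour_volume(length, width, height):
    """Volume in mm^3 from calliper measurements: L x W x H x 0.5."""
    return length * width * height * 0.5

def growth_inhibition(vt_treated, v0_treated, vt_control, v0_control):
    """100 - T/C%, using the relative volumes V_T / V_0 for each group."""
    tc = (vt_treated / v0_treated) / (vt_control / v0_control) * 100.0
    return 100.0 - tc

# Illustrative only: treated tumours grow from 150 to 300 mm^3 while
# controls grow from 150 to 1200 mm^3 over the same period.
print(growth_inhibition(300, 150, 1200, 150))  # -> 75.0 (% growth inhibition)
```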
Statistics

Differences in drug sensitivity, topoisomerase I mRNA expression and topoisomerase I activity between the multidrug-resistant sublines and the parental cell lines were evaluated with Student's t-test (see the sketch below).

Antiproliferative effects of CPT-11 in vitro

Most malignant cell lines and drug-resistant sublines described in Table 1 have been characterized earlier for their sensitivity to vincristine and doxorubicin (Jansen et al, 1994). Except for vincristine in GLC4/ADR and SW1573/S1 (MRP), all sublines were resistant against vincristine and doxorubicin. The resistance factors (RFs) were highest in 2780AD cells and amounted to 2867 for vincristine and 971 for doxorubicin. The low resistance calculated for doxorubicin in SW1573/S1 (MRP) cells was not significantly different from that in SW1573/S1 cells. The antiproliferative effects of CPT-11 are listed in Table 2. The efficacy of CPT-11 was also measured in the presence of an excess of carboxylesterase (1 µg ml-1). Carboxylesterase increased the antiproliferative effects of CPT-11 by 18- to 218-fold, whereas the antiproliferative effects of SN-38 were 142- to 2300-fold higher compared with those of CPT-11 alone. BRO/mdr1.1, 2780AD and GLC4/ADR were slightly cross-resistant to CPT-11 and carboxylesterase-activated CPT-11. Cross-resistance to SN-38 was present in the 2780AD and GLC4/ADR sublines, but not in the BRO/mdr1.1 subline. In the SW1573/S1 (MRP) subline, cross-resistance to CPT-11, carboxylesterase-activated CPT-11 and SN-38 was not evident.

P-gp modulators

The effects of P-gp modulators on the reversal of resistance were investigated in the P-gp-positive sublines BRO/mdr1.1 and 2780AD at concentrations that were not toxic, as established earlier (Jansen et al, 1994). The dipyridamole derivative BIBW22BS (1 µM) and the calcium-channel blockers verapamil (10 µM) and dexniguldipine (1 µM) did not increase the antiproliferative effects of carboxylesterase-activated CPT-11 and SN-38 in the parent cell lines BRO and A2780 (Table 3). In BRO/mdr1.1 and 2780AD cells, the addition of the modulators resulted in a slight, but significant, increase in the antiproliferative effects of CPT-11. BIBW22BS had the highest potency, but the reversal of CPT-11 resistance was not complete. As an illustration, the IC50 (± s.e.m.) of CPT-11 in BRO/mdr1.1 cells in the presence of BIBW22BS was 1.4 (± 0.2) × 10^-6 M, while that in BRO cells was 8.7 (± 1.5) × 10^-7 M (P < 0.05). The respective values were 2.5 (± 0.4) × 10^-6 M in 2780AD cells and 1.0 (± 0.2) × 10^-6 M (P < 0.01) in A2780 cells. BIBW22BS was the only compound that could partly reverse the resistance against carboxylesterase-activated CPT-11 in 2780AD cells; the IC50 values were 3.6 (± 1.2) × 10^-8 M in 2780AD cells and 7.7 (± 2.0) × 10^-9 M (P < 0.05) in A2780 cells. Complete reversal of resistance against SN-38 was obtained in 2780AD cells in the presence of BIBW22BS, as the IC50 values were not significantly different; these were 5.8 (± 2.6) × 10^-9 M in the 2780AD cells and 1.8 (± 0.6) × 10^-9 M (P > 0.1) in the A2780 cells. Verapamil decreased the antiproliferative effects of CPT-11 plus carboxylesterase in all cell lines. It was found that verapamil at 10 µM decreased the carboxylesterase activity, whereas BIBW22BS (1 µM) and dexniguldipine (1 µM) did not affect the enzyme activity (data not shown).

In vivo sensitivity

Previously, the activity of vincristine and doxorubicin has been determined in the P-gp-positive xenografts and the corresponding parent xenografts (Jansen et al, 1994).
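Before turning to the in vivo data, the Student's t-test comparison mentioned under "Statistics" can be illustrated as follows; the replicate IC50 values are invented (loosely echoing the BRO vs. BRO/mdr1.1 CPT-11 results), and comparing on a log scale is our choice, not necessarily the authors':

```python
import numpy as np
from scipy import stats

# Hypothetical replicate IC50 values (M) from repeated MTT experiments.
ic50_parent = np.array([8.7e-7, 9.5e-7, 7.9e-7, 8.2e-7])
ic50_resistant = np.array([2.4e-6, 2.7e-6, 2.2e-6, 2.9e-6])

# IC50s are roughly log-normally distributed, so compare log-transformed values.
t, p = stats.ttest_ind(np.log10(ic50_parent), np.log10(ic50_resistant))
print(f"t = {t:.2f}, P = {p:.4f}")  # P < 0.05 -> significant cross-resistance
```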
A summary of these data is given in Table 4 [Table 4: Human P-gp-negative and P-gp-positive xenografts and drug sensitivity], showing the retention of the resistance against vincristine and doxorubicin in the P-gp-positive tumours. In contrast with the remarkable difference in sensitivity to vincristine and doxorubicin, there was no difference in sensitivity to CPT-11 between BRO/mdr1.1 and BRO xenografts. In both experiments, the volume-doubling time of treated tumours was 27 days (Figure 1). The 2780AD xenografts were slightly less sensitive to CPT-11 than the A2780 tumours; the volume-doubling times were 5 and 11 days, respectively (Figure 1). The drug caused a reversible weight loss of 10-11%; there were no toxic deaths.

Topoisomerase I gene expression and enzyme activity

The expected 84-bp transcript size for topoisomerase I mRNA was detected in all cell lines (Table 5). The topoisomerase I mRNA levels in the sublines were expressed as a ratio of the level in the parent cell lines. The difference in sensitivity to CPT-11 or SN-38 did not relate to the extent of topoisomerase I mRNA expression in the P-gp-positive or MRP-positive sublines and the parent cell lines. Also in vivo, no difference in topoisomerase I mRNA expression was observed between the P-gp-positive tumours and the parental tumours. Quantitation of the levels of ATP-independent topoisomerase I DNA unwinding activity of each cell line is presented in Table 5. The nuclear extracts from the multidrug-resistant sublines showed a lower DNA-relaxing activity than that of the parent cell lines, which was significant only in 2780AD and GLC4/ADR cells. The topoisomerase I activity was also determined in BRO, BRO/mdr1.1, A2780 and 2780AD xenografts. Between the P-gp-positive and the parental xenografts, no significant differences were found in topoisomerase I unwinding activity. The enzyme activity in the xenografts was higher than that in the corresponding cell lines. The values in A2780 and 2780AD xenografts did not reflect the difference in topoisomerase I activity in the corresponding cell lines.

DISCUSSION

Several mechanisms of resistance against topoisomerase I inhibitors have been described: altered topoisomerase I gene expression or structure, low protein levels of the enzyme, reduced topoisomerase I activity, P-gp-mediated resistance and, for CPT-11, reduced conversion of the drug to its active metabolite. In this study, we investigated the relevance of the drug transporters P-gp and MRP for the sensitivity to CPT-11, to carboxylesterase-activated CPT-11 and to SN-38 in vitro and, for CPT-11, in P-gp-positive tumours in vivo, to obtain more insight into the role of these membrane proteins in CPT-11 resistance. In vitro, the addition of an excess of carboxylesterase to CPT-11 did not result in antiproliferative effects similar to those obtained with SN-38. In a previous study in five unselected human colon cancer cell lines, we also demonstrated a difference in efficacy between carboxylesterase-activated CPT-11 and SN-38 (Jansen et al, 1997a). Explanations may be that various nutrients in tissue culture medium might inhibit the activation of the enzyme, or that the carboxylesterase extract from porcine liver is not a good substitute for the endogenous carboxylesterase converting CPT-11 in other species. Of interest, however, regardless of the dose of CPT-11 administered to patients, the proportion of SN-38 formed was low and varied between 1.3% and 5.8% (Abigerges et al, 1995).
An explanation may be the complicated metabolic pathway of CPT-11, as at least 15 metabolites have been detected in the bile of a patient (Lokiec et al, 1996). By adding an excess of carboxylesterase in vitro it is possible that, apart from SN-38, other less active metabolites are being formed. In the P-gp-positive sublines, cross-resistance against CPT-11 was even more pronounced in the presence of exogenous carboxylesterase. In 2780AD cells, cross-resistance against SN-38 was also present, but this was not the case in BRO/mdr1.1 cells. Hendriks et al (1992) have shown for topotecan, and Mattern et al (1993) for topotecan, SN-38 and 9-aminocamptothecin, that drug accumulation and cytotoxicity were reduced in P-gp-positive CHRC5 cells relative to the parental AuxB1 cells. It has been suggested that the positive charge of the camptothecin analogues topotecan and CPT-11 could affect the efflux of these compounds through an increased binding affinity to P-gp (Chen et al, 1991). Mattern et al (1993), however, have demonstrated that a positive charge was not required for P-gp-mediated drug resistance, as 9-aminocamptothecin and SN-38, which are uncharged at physiological pH, were cross-resistant in CHRC5 cells. In our experiments, the low level of cross-resistance against SN-38 in the 2780AD cells could indeed be due to P-gp, which is highly overexpressed in these cells, while the BRO/mdr1.1 cells express a less intense multidrug-resistant phenotype. Another explanation may be that the BRO/mdr1.1 subline, transfected with the MDR1 gene, has a well-defined single mechanism of multidrug resistance (P-gp overexpression), whereas the 2780AD subline was selected by stepwise increasing concentrations of doxorubicin. This provides an in vitro P-gp-mediated resistance model that could also contain a mechanism of resistance affecting the sensitivity to SN-38. Unlike the BRO/mdr1.1 cells, the 2780AD cells showed a reduced topoisomerase I activity compared with that of the parental cells. The P-gp-positive cells, however, were far less cross-resistant against the camptothecins than against typical multidrug-resistance compounds, such as vincristine and doxorubicin. The addition of the P-gp modulators BIBW22BS, verapamil and dexniguldipine reversed the resistance against CPT-11 in the P-gp-positive sublines. Hendriks et al (1992) have shown that incubation with quinidine or with verapamil, modulators of P-gp-mediated multidrug resistance, increased both the accumulation and the cytotoxicity of topotecan. In a previous study, we have demonstrated that BIBW22BS had a higher potency in the modulation of P-gp than verapamil, bepridil or flunarizine in vitro (Jansen et al, 1994). Indeed, the relatively high resistance against SN-38 in the 2780AD cell line was circumvented only by BIBW22BS. In vivo, we found an almost equal growth inhibition induced by CPT-11 in BRO and BRO/mdr1.1 xenografts. Other investigators have also demonstrated that camptothecins have therapeutic activity in multidrug-resistant tumours in vivo. Houghton et al (1993) have reported for topotecan and CPT-11 that the efficacy was similar in human rhabdomyosarcoma parental tumours and in the P-gp-positive Rh12/VCR and Rh18/VCR tumours. Similar results were obtained by Tsuruo et al (1988), who demonstrated an almost equal activity of CPT-11 in P388 leukaemia-bearing mice and in mice bearing P388 cells resistant against vincristine and doxorubicin. Our 2780AD tumours were less sensitive to CPT-11 than the A2780 tumours. In both
BRO/mdr1.1 and 2780AD xenografts, topoisomerase I activity was similar to that in the parental xenografts. A likely explanation for the lower sensitivity of 2780AD tumours to CPT-11/SN-38 is that 2780AD cells grown in vivo retain a highly resistant phenotype to drugs affected by P-gp (Table 4). The clinical relevance of this finding is limited, as P-gp expression in patients' tumours is much lower than in 2780AD cells. Another protein that may affect drug sensitivity is the more recently characterized MRP. Cross-resistance against carboxylesterase-activated CPT-11 and SN-38 was observed in GLC4/ADR cells, whereas in the SW1573/S1 (MRP) cells cross-resistance was not evident. Hasegawa et al (1995) have demonstrated that T24/ADM-1 and T24/ADM-2 human bladder cancer cells, both overexpressing the MRP gene, were not cross-resistant against CPT-11, whereas the cells showed cross-resistance against doxorubicin and etoposide. Thus, it is probable that CPT-11 is not a substrate for MRP, and the cross-resistance in the GLC4/ADR subline might be related to other factors of relevance for the sensitivity to CPT-11. Indeed, GLC4/ADR cells showed a twofold reduction in topoisomerase I activity compared with the enzyme activity in the parental cells. It is uncertain whether the SW1573/S1 (MRP) subline provides a good model for MRP-mediated drug resistance, as we did not find a significant difference in sensitivity to doxorubicin and vincristine between the SW1573/S1 (MRP) cells and the parental cells. Zaman et al (1994) have also reported only a modest cross-resistance against doxorubicin and vincristine, of 2.7- and 5.3-fold respectively, in the SW1573/S1 (MRP) cells. A relation between topoisomerase I gene expression and the sensitivity to carboxylesterase-activated CPT-11 or SN-38 could be expected. In this respect, Niwa et al (1995) have found a correlation between topoisomerase I mRNA and the sensitivity to CPT-11 in various human cancer cell lines, which displayed natural differences in sensitivity to CPT-11. Our group (Jansen et al, 1997a) as well as the group of Goldwasser et al (1995) have demonstrated that there was no relation between topoisomerase I mRNA expression and sensitivity to camptothecins. In the present study, the extent of topoisomerase I mRNA expression was similar in the multidrug-resistant sublines and the parental cell lines, which did not reflect the differences in sensitivity to CPT-11 and SN-38. Consistent with our finding, the expression of the topoisomerase I gene in the multidrug-resistant KK47/ADM and T24/VCR human bladder cancer sublines was similar to that in the parental cell lines, although there was an approximately threefold resistance against CPT-11 (Hasegawa et al, 1995). As a relationship seems to be present between cellular topoisomerase I activity and the sensitivity to camptothecins, it would appear that resistant cells have a reduced topoisomerase I activity. In a panel of five unselected human colon cancer cell lines, we have indeed found a positive correlation between DNA topoisomerase I activity and the sensitivity to carboxylesterase-activated CPT-11 and to SN-38 (Jansen et al, 1997a). Goldwasser et al (1995) have demonstrated a positive correlation between camptothecin sensitivity and the amount of drug-stabilized cleavable complexes. Reduction of topoisomerase I activity has also been described in a number of cell lines with acquired resistance against camptothecins (Chang et al, 1992; Woessner et al, 1992).
In the two sublines 2780AD and GLC4/ADR with acquired resistance against doxorubicin, we found a 2- to 2.5-fold lower topoisomerase I activity, which may partly explain the cross-resistance against carboxylesterase-activated CPT-11 and SN-38 in vitro. Reduced enzyme activity without decreased levels of topoisomerase I mRNA in camptothecin-resistant cells may be caused by a gene mutation, or may be the result of rearrangement, deletion or hypermethylation of one of the topoisomerase I alleles (Gupta et al, 1995). The reason for the reduced topoisomerase I activity in 2780AD and GLC4/ADR cells remains to be established. In conclusion, the presence of P-gp was related to a low degree of cross-resistance to CPT-11 and to SN-38 in vitro. In vivo, the contribution of P-gp to the resistance against CPT-11 appeared to be dependent on the extent of overexpression. Nevertheless, CPT-11 showed superior anti-tumour activity in P-gp-positive xenografts compared with vincristine and doxorubicin. The presence of MRP did not seem to affect the sensitivity to CPT-11 and SN-38 in vitro. Therefore, CPT-11 should be considered as a potentially effective agent in the treatment of multidrug-resistant tumours.

ACKNOWLEDGEMENT

This study was supported by the Dutch Cancer Society (project grant VU 94-708).
2014-10-01T00:00:00.000Z
1998-02-01T00:00:00.000
{ "year": 1998, "sha1": "32309fe6c81a3f11e98c7391b1308c79b1f88832", "oa_license": null, "oa_url": "https://www.nature.com/articles/bjc199858.pdf", "oa_status": "BRONZE", "pdf_src": "Anansi", "pdf_hash": "32309fe6c81a3f11e98c7391b1308c79b1f88832", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
270472713
pes2o/s2orc
v3-fos-license
Digital Transformation in Corporate Banking: Toward a Blended Service Model

Digital technologies challenge incumbent firms to rethink their established approaches to customer relationships. This article examines how a corporate bank reconfigured its relationship-oriented business model to benefit from digital transformation. The case analysis reveals a gradual transition toward a blended service model that first replaces, then complements, and finally augments physical with digital in increasingly complex customer interactions. While replacing and complementing human-enabled services with digital offerings are necessary steps of the digital transition, the associated competitive advantages are perceived as unlikely to endure. In contrast, augmenting human-enabled services with sophisticated digital technologies holds the potential for sustainable competitive advantage.

[...] experience with a self-service option to manage daily banking transactions.[2] Meeting such multifaceted requirements necessitates that corporate banks keep pace with evolving technology and enhance their digital capabilities.[3] Established players who fail to adapt face the risk of being replaced by more agile rivals or losing market share to digital-savvy new entrants.[4] Nevertheless, it seems that digital transformation is not yet pervasive but more of an emerging force in corporate banking. Only a minority of incumbents have fully embraced digitalization so far, and corporate customers are digitally underserved compared to the retail segment.[5] In fact, many wholesale banks rely on outdated, cumbersome processes and traditional service models.[6] On the other hand, the bulk of incumbent banks plan to significantly increase the amounts invested in digitalization[7] and consider technology-enabled offerings crucial to improving their services.[8] Recognizing the digital imperative and allocating funding are essential but not sufficient steps toward a digital future. Corporate banks pursuing a digital strategy have to face complex competitive, technological, and demand dynamics,[9] making it difficult to decide where to invest and which initiatives to prioritize. In particular, finding the right combination of the "high-tech" digital elements and the "high-touch" physical and human elements is a key challenge for service firms like corporate banks.[10] Such ambiguity warrants a better understanding of the potential benefits of digital transformation over different time horizons, as well as the impact of digital transformation on corporate banks' established business models, customer relationships, and competitive advantage.
To provide such understanding, this article analyzes digital transformation in the context of a European corporate bank, looking into how it can contribute to value creation and enhanced competitiveness. To help support business banks, as well as service firms more generally, along their digital path, this article investigates how digital transformation impacts the sources of competitive advantage, revealing the potential sources of value creation in digitally transformed corporate banking and a new, blended service model centered on the amalgam of human-enabled and digital service. The findings highlight the gradual rather than sudden nature of digital transformation, the importance of mutual transparency as a unique value source that digital technologies enable, and the possibility of achieving sustainable competitive advantage by moving from first replacing, to then complementing, and finally augmenting human with digital.

Digital Transformation and Competitive Advantage

Digital transformation is a change process that emerges through the adoption of "combinations of information, computing, communication, and connectivity technologies."11 The adoption of digital technology changes the competitive landscape by disrupting the status quo, but it also creates novel opportunities for value creation.12 Digital transformation can bring benefits to organizations in terms of both operational efficiency and organizational performance more broadly.13 Operational efficiency includes reduced operating costs,14 optimized administrative processes,15 and faster decision-making.16 Broader performance effects can include faster growth and better innovativeness,17 as well as an upgraded customer experience.18 Digital transformation frequently presents both pressures and opportunities for renewing the firm's business model,19 and navigating the digital transformation therefore requires the organization to take a strategic approach.

Digitally enabled value can be created not only by better addressing customers' needs directly but also by making use of the business ecosystem more broadly.20 First, to better understand clients' behavior, customer touchpoints are converted into customer sensor points that are designed to collect and store data on customer behavior, and businesses develop their analytics capabilities to make use of this pool of customer data. The extracted insights are then employed to drive evidence-based decisions and create more personalized customer experiences.21 However, companies with a narrow focus on a linear value chain are at a disadvantage22 because value creation from digital transformation often occurs at the level of the ecosystem, which frequently includes complementors and even competitors.23 Digital-savvy companies can outperform their competitors by engaging customers more continuously and collaboratively instead of having only episodic, detached transactions with them.24 When facing a digital disruption, service firms can opt for a "high-tech" approach, fully digitizing their service processes, or a "high-touch" approach, aiming to gain competitive advantage by emphasizing the physical and human elements of the customer experience.25 For established firms, a combination of the two is often thought to be the best approach, allowing them to gain the advantages of digital while keeping the advantages of their established physical competences.26
However, how to go about implementing such a hybrid between digital and human is not trivial, as established firms usually cannot pivot without destroying some of the value created by their legacy business.27 Therefore, an incremental approach of experimenting with a broader range of smaller digital initiatives, taking calculated risks, and building digital competences gradually may work best for digital non-natives.28

Competitive Advantage and Digitalization in Corporate Banking

According to the commonly accepted view, it is hard to build competitive advantage in corporate banking. New product launches are quickly imitated, and the pressure on profit margins due to intense rivalry means that reducing prices is not rewarding either.29 In such circumstances, customer relationships are believed to be a prime source of competitive advantage.30 Banks tend to place increasing emphasis on customer relationships as competition increases31 because it provides shelter from price wars32 and helps protect their competitive position.33 Previous studies have identified three crucial groups of resources in such a relationship-oriented strategy: skilled employees, who are the primary customer contacts and the best positioned to evaluate market developments34; sector-specific expertise enabling meaningful conversations with customers35; and proprietary information on clients gathered via multi-level communication.36 Because personal contact with customers is central to the value proposition of corporate banks, there is a conflict between banks' economically rational intention to migrate smaller, less profitable relationships to digital channels and their clients' limited willingness to adapt to such channels.37 Even for larger, technologically savvy corporates, digital technologies have been considered a complement to rather than a replacement for personal interactions, as fully automated processes make customers perceive themselves as underserved.38 Consequently, commercial banks' inclination toward digitization of routine tasks and services can weaken the relationship-banking model and result in a transaction-oriented attitude in customers.39 Transaction-oriented customers tend to unbundle their financial services needs and aggressively shop around after price and quality, placing less importance on relationships,40 which undermines relationship-based competitive advantages. As the physical, digital, and social worlds are converging, corporate banks are confronted with the challenge of finding the optimal combination of these realms to create value.41
Barras's reverse product cycle model42 offers insight into how digital transformation may impact the evolution of competitive advantages in the banking sector. The reverse product cycle theory suggests that the innovation process in service industries is the reverse of that observed in product-based sectors. First, initial investments in the new technology fuel incremental process innovations. Subsequently, as firms acquire knowledge of the new technology, they move toward more radical process innovations directed at improving the quality of services rather than decreasing costs. Finally, the accumulated experience of using digital technologies enables progress toward radical product innovation and developing a new generation of services. In parallel, the competitive emphasis shifts from cost advantages to product differentiation. However, it is debatable whether competitive advantage can be sustained in a quickly evolving digital environment or if the maximum firms can aspire to is temporary advantages.43

Method

The backbone of this research is an exploratory case study conducted in the business banking division ("Organization") of a universal bank operating in an EU country ("Case Bank"). Case Bank belongs to a leading European banking group ("Parent Bank") that considers Central Europe its strategic market. Case Bank is one of the seven large national banks, and it offers a wide range of financial solutions for retail and corporate clients. The latter segment is targeted by the Organization, which generated 42% of Case Bank's profit in 2020. With an estimated 10% market share in lending and deposits, the Organization is a relevant player in its home country's corporate banking market. Assurances of anonymity prevent more detailed information from being disclosed.

According to its strategy, the Parent Bank aspires to become a data-driven organization; it aims to shift its omnichannel approach to a digital-first distribution model and plans to invest heavily in Central European entities' digital transformation. Although digitalization has already been an important pillar of Case Bank's strategy since 2017, developments have focused predominantly on the retail segment. In 2020, digital gained even more ground in concurrence with the Parent Bank's strategy, and a multi-year digital roadmap was outlined by the Organization, providing an illustrative empirical setting for the present research.
This study is primarily informed by semi-structured interviews conducted in two phases. The first 18 interviews were conducted within the Organization between November 2020 and January 2021. The interviewees included informants from all management layers and relevant functional domains. Involving executives enabled us to capture narratives from those who are directly involved in strategy formulation and allocate resources for digital initiatives. Interviewees from middle and lower management, responsible for strategy execution and delivery of digital projects, added a more operational perspective. As far as functional areas are concerned, informants were selected from three domains. Front office managers are the most knowledgeable about the relationship aspect of corporate banking and directly sense market trends as well as possible changes in customer behavior. People from the corporate strategy and product support departments were involved to capture insights about the links between strategy and ongoing digital developments. Finally, informants from IT provided a technological perspective on digital transformation. Sampling was continued until theoretical saturation was reached, that is, when further interviews did not yield significant new insights.44 In the second phase, the emerging internal perspective was triangulated by data collected from external stakeholders between January and February 2021, including two informants in charge of digitalization and fintech at the central bank of the Case Bank's home country; two advisors from the Parent Bank's Strategy, Transformation, and Innovation Directorate; and three of the Organization's corporate clients. The interviews were conducted either face-to-face (3) or online (20) and lasted between 22 and 59 minutes. The interviews were recorded and transcribed. The list of interviewees is presented in Table 1.

Secondary data obtained from two studies carried out by external research firms (commissioned by the Organization) was also used to supplement the customer perspective. The first survey was conducted in October 2019, involved 521 corporate clients, and assessed their satisfaction regarding their personal relationship with the Organization. The second study was carried out in mid-2020 to inform the re-design of Case Bank's corporate e-bank. Besides collecting data on Europe-wide best practices in corporate e-banking solutions, it gathered user insights about these services through 16 semi-structured interviews with existing and target customers of the Organization and a survey involving 71 respondents. These secondary data were used to complement the primary interview data.

In addition, field observations were used as additional sources of information. Between August 2020 and January 2021, the first author was embedded in two of the Organization's salient digitalization projects: the data-driven lead management initiative and the international workstream aiming to develop a digital business dashboard (DBD) for corporates. Participating in 22 online project meetings allowed the first author to collect observations recorded in the form of field notes made either during or after the meetings. All the collected data were analyzed using the Gioia methodology45 to derive a conceptual model of digital transformation in the case firm.
Process of Digital Transformation

Our findings, depicted in Figure 1, reveal a process wherein, rather than a sudden transformation, digital infiltrates corporate banking step by step, triggering changes in the Organization's relationship-oriented business model. The gist of the strategic response is spotting new value sources and engaging in service model innovation to unlock them by blurring the boundaries between physical and digital customer interfaces while relying on enablers of transformation. These findings are further elaborated in the following sections and illustrated using quotes from the interviews. The interview quotes are coded with phase and interview numbers as in Table 1, for example, phase 1, interview number 7 as 1-7.

Pre-digital Competitive Setting

To better understand the initial context, it is essential to grasp a sense of the competitive environment before the emerging digital transformation. In Case Bank's home market, corporate banks operate in "a not very consolidated competitive environment, in the sense that there are many banks fighting for their place in the market" (1-17). "The market is competitive, especially in terms of prices and credit conditions" (1-16). Banks offering identical products "have to fight for clients. Not only do large firms have a strong bargaining position, but smaller firms can also maneuver between banks" (1-2).

The Organization pursues a relationship-oriented approach. As one informant puts it, "We have very much focused on the client experience, and . . . differentiating ourselves based on the outstanding personal bond with the client" (1-17). The positioning also builds on being perceived as an independent, professional service provider: "Our number-savvy, advisory image is an objective judgment of the market, and it is definitely our advantage" (1-6).

Differentiation advantage derives from the skilled workforce, conservative credit culture, and professional ownership background. Respondents referred to these firm resources as follows: "Differences in the skills of relationship managers make the playing field somewhat uneven. If every bank had an average of adequate human resources, competition could be even more challenging" (1-16). "The fact that we have a professional and conservative approach to corporate lending is part of our culture; hence it is difficult to copy" (1-10). "We benefit from the expertise of the group, and what it brings us in terms of the technologies and the sophistication of our commercial approach" (1-15).

When reflecting on the impact of digitalization on the competitive landscape, informants consistently reported that corporate banks had made little headway with digital transformation so far. They believe that instead of creating profound and sudden disruption, digitalization is gradually infiltrating business banking: "I would not call it disruption, in the sense that it's not something that comes from one day to the other. . . . Actually when you look at the pace at which it goes, there is very little surprise" (1-15).
Infiltration of Digital

The findings suggest that no digital "big bang" is experienced, but the infiltration of digital is more gradual. The interviews revealed several factors contributing to the tempered way technology is shaping the landscape. Durable entry barriers safeguarding incumbents' competitive position offer some explanation for the slow pace of change. The stringent and complex regulatory burden discourages potential newcomers, or as one informant noted, "We complain about regulation, but in a world like today, I am convinced that Google and Apple and Facebook are not banks yet because they don't want to deal with this complicated mess. So, regulation is, in a sense, our friend" (1-14).

While a few cracks have appeared in the barriers allowing fintechs some access to the market, the adoption of digital substitutes has remained limited due to the gap between the trust vested in banks and fintechs. This standstill was described by one of the informants as follows: "I do not see that [corporate banks] are very active in digitalization. No start-ups have entered who would take the market, and for the banks, this status quo is a good deal" (2-4).

If not mediated by new entrants, technology can bring changes directly to the operation of corporates and banks. The data suggest that the speed and scope of the transformation are determined by opposing forces that drive or restrain digital adoption. These forces are tied to macro-level trends in terms of socio-economic accelerators and decelerators and to firm-specific features in the form of organizational accelerators and decelerators.

Socio-economic accelerators and decelerators. Digital expectations were found to trickle down from managerial levels as the firm's decision-makers themselves experience the benefits of simple, frictionless digital offerings in their daily lives as retail customers. Subsequently, they look for similar solutions for their company to manage daily banking tasks. As one informant put it, "young, dynamic, 'techie' leaders put the bar higher in terms of digital expectations" (1-1). This trend was magnified as society received a digital boost during the recent coronavirus lockdown periods: "The current pandemic has very much caused customers to think that it is vital for them that the bank can handle their needs through remote digital access" (1-5).

Informants noted, however, that as a headwind to digitalization, firms have a strong preference for personal contact with their corporate banker in case of complex needs: "Trust and personal nexus are essential in corporate banking. . . . Recently, when there has been no opportunity for personal contacts, we have been less successful in client acquisitions" (1-6).

Organizational accelerators and decelerators. The informants suggested that digital adoption is related to the size of the customer firm: tech-savvy large corporates are leading the pack, while midcaps are lagging. "Larger corporates, especially international ones, have digital protocols dictated by their parents, and we as a bank have to comply with them" (1-3). "In the midcap segment, where the organizational structure is more developed, the [credit] limits are larger, and the cooperation is more complex, digitalization expectations are sidelined for the time being" (1-16).
On the other hand, it appears that smaller businesses with resource constraints and simple, if any, dedicated financial departments are inclined to manage their financials efficiently, thereby creating some need for digital banking: "There is usually no dedicated finance manager in smaller businesses; hence, they need digital solutions so that the managing director can focus on the core business" (1-5).

Overall, client pull has prompted limited digitalization efforts from corporate banks so far. "Digital service in the corporate segment is far from advanced, but there hasn't really been a demand for it" (1-9). This was seen to stem from a more extensive, complex, and tailored product range, compared to retail banking. Furthermore, as some informants experienced, resource allocation is skewed toward meeting compulsory regulatory requirements and projects of the retail segment where pressures for digital adoption are more immediate. However, informants saw a gradual change in client pull in the corporate banking segment as well, as "these needs are popping up" (1-9).

Technology-Enabled Value Sources

◾ Efficiency-Efficiency emerged from the data as the principal source of value, consistently mentioned by almost all respondents. "It's clear that the first thing you think of is efficiency, it is the first stage of our strategy" (1-15). Informants also reported that gains in the form of decreased costs and accelerated processes could be reaped by the bank and customers: "The labor-intensive services that we still have are going to be processed in an automatic straight-through way, and as such will create a lot of efficiency both on the side of the bank and also on the side of the client" (1-17).

◾ Customer Engagement-Interviews and observations pointed to three particular ways the Organization intends to strengthen the bond with customers to decrease the probability of switching to a competitor. First, with the introduction of the DBD, Case Bank aims to win the battle for customers' digital face time. Second, there is a well-articulated intention to provide a superior customer experience aided by technology: "I think this is where the win-win of best client experience comes from. I mean, you provide a win to the clients by delighting them, . . . and consequently, the clients will be loyal, . . . the clients will want to do more things with you" (1-17). Third, building digital interfaces via machine-to-machine or multi-banking solutions is expected to result in system lock-in that can considerably increase switching costs.

◾ Mutual Clarity-Informants emphasized that streamlined digital processes combined with dashboard-like solutions can improve the transparency of the customer journey, thereby easing clients' frustration from not being able to follow up on the progress of their requests: "Treasurers will be able to see everything on their own, and they will get the information very quickly. . . .
For them, the greatest added value is to have an overall picture of the company's financial state and an immediate overview of their most important banking matters" (1-11). Another aspect of clarity noted by the informants is that gaining automatic access to deeper layers of customer data contributes to a better understanding of corporates' financial standing and business operations, which can make credit decisions more transparent and predictable. This is also something that the clients expect: "I think the decision points should be clear so that I can be sure, that if I have an open book with my bank, and my company fulfills certain conditions and the bank sees it transparently, I will get that extra credit almost automatically" (2-4).

◾ Complements-Digital complements, such as e-invoicing, cash flow forecasting tools, and working capital management applications, enhance the value of traditional banking products. The design of the DBD was seen to facilitate adding such beyond-banking services. An external stakeholder further highlighted that third parties could also support corporate banks by taking over non-core, low-value-adding tasks, thus easing the administrative burden: "Fintechs can support banks in back-office digitalization, Know Your Customer tasks, and with chatbots" (2-1).

◾ Novelty-The Organization intends to explore digital innovation along three main themes: creating novel customer-facing channels, improving the content of advice provided, and deploying a new service model. First, developing the unique, customizable dashboard that can serve as a landing platform for the planned digital assistant and the lead management engine is a salient example of new approaches to digital interaction with customers. "What could be a differentiator is offering the Parent Bank's innovative, AI-powered, digital assistant for the corporates over time" (2-2). Second, there is a clear intention to exploit proprietary data by supporting clients with customized insights that go beyond information related to core banking products. The client informants confirmed the potential value of such insights: "I would even pay for tailored analyses, cash flow forecasts, or comparative industrial benchmarks derived from the bank's information to help me make decisions" (2-5). Finally, an upgraded service model will embrace these innovative directions. "It will not be enough to have a human connection in the future. We will have to continue to upgrade our service model with convenience, with speed, with digital solutions that not only make the life of the client easy, but make them feel that we are on the top of the game" (1-17).

Service Model Innovation

Informants revealed that the renewed value path is envisioned as a blend of interactions via physical and digital channels. "Until artificial intelligence replaces owners and CEOs, personal contact has a place in corporate banking. However, digitalization will step by step carve out a growing slice of this relationship cake" (1-3). "In business banking, the human factor, the physical service, the personal bond with our clients will remain crucial . . . but it will be more and more supported by digital innovation" (1-17).
The service model innovation process encompasses three themes distinguished by the density of personal and digital interactions, leading to a blended service model. First, digital replaces human in simple, daily routine tasks, where manual intervention adds little value. "We will have to provide all the basic services digital, where the relationship manager has no added value other than collecting the papers" (1-18). "The back-office part, where the ugly things happen, is masked from the client. However, it is energy-consuming to operate this manually. Digitization needs to start there to make the biggest impact" (1-1).

Beyond automating back-office processes that are less visible to customers, this approach also covers digital self-service that allows customers to directly manage their basic transactions and administrative tasks without the involvement of the bank's representative. It was observed that Case Bank systematically mapped its corporate customer support services and identified those that can be transformed into self-service functionalities of the DBD under development.

Second, digital complements human in more complex processes and as an additional channel of interaction. In semi-automated processes, humans would stay in the driver's seat as they make credit decisions or negotiate with clients but could be assisted by descriptive analysis that collects, processes, and visualizes information in the preparatory phase. "80-90% of the work that is still today done manually can at some point be automated, . . . but the local knowledge still is a factor because sometimes these decisions hinge on other soft things you see when you visit the clients" (1-14). "Banks have started to realize that digital is not only a channel, but it can support the relationship manager in the form of a dashboard that provides a 360-degree overview of the customer" (2-1).

It was also noted that by introducing DBD, the Organization aims to supplement the relationship manager with an online touchpoint, reflecting a shift from a single point of contact toward an omnichannel approach. "DBD will be like the mobile app in retail: the digital interaction portal between the client and the bank" (1-17). "It will make digital applications easily accessible while creating a transparent, customizable workspace for customers" (1-9).

Third, at the most sophisticated stage, digital augments human. This augmentation builds on leveraging advanced technology, such as artificial intelligence, to improve the timing and content of sales efforts and the added value of personal advisory, thereby repositioning the relationship managers' role. One informant illustrated it with a metaphor: "I can compare it to Formula 1, where cutting-edge technology works under well-skilled drivers to reach their goals in the elite league of motorsport" (2-1).
Observations concerning the Organization's lead management project indicate that the backbone of this approach is the exploitation of proprietary customer data and contextual information with the help of predictive and prescriptive, rather than purely descriptive, analytical methods. In the case of proactive lead management, it boils down to detecting or even predicting the financial needs of corporate customers and satisfying them with relevant and timely product offers. "We should look around what is available outside of that individual experience of the corporate banker. What is available in the sector, in internal and external databases, and how we can use that to provide insights and assistance for the clients to do their business better" (1-17). Data-driven digital processes are also used to allocate tasks and commercial leads to the most appropriate employees based on skills, availability, and performance.

The Organization uses AI to generate customer insights and spot triggers that indicate particular types of customer needs. In simple cases, digital journeys where digital replaces human are created, and in the case of slightly more complex needs, digital is used to complement human by sending a commercial signal to the most appropriate employee to deal with the client. However, the most unique and sophisticated client processes are driven by digitally augmented humans, with customer insights empowering relationship managers to be better equipped when responding to client needs. "Digital client journeys are created for simple offers, while augmented humans conclude complex deals" (2-2).

Informants believed that digitalization could reinvigorate relationship management so that corporate bankers would be able to engage in more complex, meaningful deals and advisory-type conversations with customers. "If there will be a game-changing event, what we call a moment of truth, for instance an acquisition or a merger and the CEO or CFO wants to discuss it with a bank, then . . . we should be helped by technology to give much better advice than we have ever been able in the past" (2-2). "In a future bank-corporate relationship, a high value-adding personal relationship must be built on the digital foundation" (1-18). Digital augmenting human is seen as a central building block for competitive advantage in the future: "In the endgame, data-based algorithms steer and support the organization" (2-2).

Enablers of Transformation

Respondents also highlighted the importance of execution capability to successfully architect the envisaged blended service model. "Strategic visioning is there, maybe the directions we have come up with are good, but we need to be able to execute. . . . And for that, we need to build the necessary competencies, expertise, and routine" (1-16). The informants noted several enablers that were seen as crucial for successfully executing the digital transformation. First, threshold resources are the fundamental assets required to embark on digital projects, including financial resources, IT capacity, and relevant technical skills. The scarcity of IT capacity and shortcomings in essential technical skills (such as business acumen, project management, and IT development) can hinder execution even if funding is ensured. "Sometimes despite the availability of a budget, the project does not progress in the absence of IT capacity" (1-7).
The informants emphasized that the scale and speed of digitalization are also determined by the Organization's ability to manage complexity. "We gave priority to four flagship projects because there is only so much complexity that we can manage" (1-15). The interviewees repeatedly expressed the importance of a systems approach that considers both the constraints of existing systems and the way ongoing and future digital developments can intertwine. "In the early phase of the digital dashboard project, the project team identified 17 legacy systems and 4 dependent projects that need to be taken into account when designing a minimum viable product" (First author's field notes).

Respondents added that corporate banks are often envied for possessing a massive amount of unique customer data. However, putting the data to work is not a straightforward exercise, as the bulk of the information is unstructured and stored in fragmented data sources. Leveraging data requires that they are aggregated, analyzed, and subsequently translated into relevant insights. This is, at times, still a bottleneck: "We all see today it is a human-based decision, you need to know your client. . . . However, this is just because it is still today unstructured data, and we haven't found a way of structuring that" (1-15).

The interviews also revealed that an overarching transformational mindset is fundamental to navigating digital transformation successfully. "Banks transforming systematically and adopting a digital mindset pervading the whole organization will excel, while those who look at digitalization as a bunch of imposed IT projects will lag" (2-2).

The case indicates that such a mindset is rooted in customer centricity, which has long been a pillar of Case Bank's strategy. The attitude of "serving customers in the best possible way" (1-17) requires that customer needs and pain points are sensed in the front office and translated into digital initiatives that can be developed by IT. However, observations suggest that cross-functional cooperation can easily become cumbersome if different functions cannot understand each other's drivers and logic but stick to the specific interests of their silos: "Even after many years spent with projects, communication between various functions is not clear enough to have the same understanding. . . . You say a sentence, and it is understood one way by IT and a completely different way by Business" (1-12).

Informants added that digital upskilling of salespeople is a vital ingredient, allowing them to deliver novel services to corporates and use data-enabled insights to strengthen the advisory-based bond. Finally, the cultural shift must be fostered by leaders ready to champion the transition to digital, raising consciousness about it and mobilizing the whole Organization toward the new service model. "I would call it modern leadership. You need that leadership talent of people who want to build a [digital] story, communicate it, and carry the entire organization into the belief and transformation of the behavior" (1-15).

Toward a Blended Service Model

The thematic dimensions discussed earlier describe the influencing factors and main components of the Organization's planned digital journey. However, it is equally important to understand the interactions and connections between them. Figure 2 depicts the process of responding to the sensed digital trends by altering the relationship-oriented approach and moving toward a blended service model, as suggested by the interview data.
As Figure 2 shows, the relationship-oriented business model can be transformed incrementally through the infiltration of digital, following the order depicted in the center of the figure. Our findings suggest that an incumbent may move from simple digital developments to more comprehensive ones step by step, following a path from "digital replacing human" through "digital complementing human" to "digital augmenting human." "We digitize basic transactions, payments, and the administrative stuff first. . . . Then we are moving into the more sophisticated customer experience. . . . There is this gradual stepping up the ladder regarding where we go in terms of complexity" (1-15).

There are three important features of this roadmap. First, the stages are closely interrelated, and the linkages between them define the trajectory. Digitized products where digital replaces human are a prerequisite for the platform where digital complements human: "To have a DBD, we need to have self-service functions first that we can add to it; hence, we need to get to a certain level in the first pillar. The relationship between the two cannot be reversed" (1-18). On the other hand, the omnichannel approach where digital complements human will have an essential role in sensing client needs and behavior that can trigger further digitized products. "It can help us know more about our customers' needs, and thus we could sharpen the direction of future developments" (1-9). Moving further in the roadmap, informants noted the importance of DBD and the processes where digital complements human in structuring customer data, which is essential for data-driven solutions where digital augments human. "We have numerous applications and e-channels, which boils down to unstructured, scattered information about our clients. We aim to address this with our integrated dashboard" (1-12). On the other hand, the processes where digital augments human feed digital leads back to the DBD platform, which was again seen as a necessary condition for the more advanced stage to be viable in the first place. "By the time [lead management] will go live, we will need a platform that digital leads can land on. Therefore, it is a prerequisite to have a well-functioning dashboard" (1-18).

Second, threshold resources provide the foundation, and the higher-level enablers of complexity management and transformational mindset gradually kick in as the digital journey progresses. As the transformation moves toward more sophisticated stages, there is more complexity to manage, and the transformational mindset becomes increasingly crucial as the significance of the human factor increases. While we found that digital solutions that replace manual, repetitive, administrative tasks are easy to adopt, omnichannel journeys where clients may switch back and forth between digital and human interactions require bankers who understand how to use digital tools effectively. The most challenging part was found to be the effective utilization of data to digitally augment human, as this requires that bankers trust the model that defines digital insights, that the delivery of the digital insights is well orchestrated, and that the digital insight provided to the physical channel is effectively applied. The most crucial enablers thus change from resources to management to mindset as the firm advances on the roadmap.
Third, certain value sources seem to gravitate to specific phases of the cycle. Efficiency gains were most often mentioned in connection with replacing manual processes with digital ones. Digitized products and novel digital channels were considered to engage customers via convenience and system lock-in. As digital will gradually complement the human interface, clarity is expected to improve thanks to semi-automated decisions and processes managed on digital platforms. The DBD was seen as essential for capturing the value of complements through its planned connectivity to third-party applications. Finally, the augmented human approach is expected to involve the most radical and novel innovations.

Implications for Digital Transformation and Competitive Advantage

While the right way of combining digital with human has long been seen as a key characteristic of successful service models in the digital future, for example, in the "digital first" school of thought,46 what has generally received less attention is the process of how incumbent service firms can get there. Our findings suggest that the digital transformation process should not only be gradual and conducted in small, manageable steps but also give important insight into the roadmap that this process should follow. Specifically, the process model depicted in Figure 2 can point incumbent service firms to the necessary building blocks of different stages of the digital transformation process in terms of the types of service innovation to pursue, the most important enablers of transformation, and potential value sources that can be expected to be the most salient in different stages of the transformation process.

Our findings further help to establish important links between digital transformation and competitive advantage. As an informant suggested, "the essence of competition will not change. The question is how corporate banks can amplify their strength via digital" (2-1). In this regard, the three elements of the blended service model (digital replaces human, digital complements human, and digital augments human) were found to have different implications for the sustainability of the associated competitive advantages.47

Replacing daily, routine human tasks with their digital counterparts emerges from the analysis as a threshold capability. "Basic self-service, basic products via digital . . . will be a minimum expectation to be able to start the game" (1-18). Digital technologies by themselves exhibit low barriers to entry48 and require only moderate organizational efforts to implement, and thus are likely to become commodities. As one informant concluded, "This is not so much about gaining a competitive advantage but avoiding the disadvantage" (1-3). Routine tasks where digital replaces human can therefore, at most, be a source of competitive parity.

In applications where digital complements human, the Organization strives for a sequence of first-mover advantages by deploying the DBD as an innovation platform. According to the business concept, "DBD is expected to bolster new product launches by decreasing development costs and accelerating time-to-market of innovative products and services" (Case Firm documents). However, even if successful, the associated advantages are anticipated to be temporary because of the imitability of the products. "Whoever can digitize faster is sure to gain some competitive edge . . .
for a while until others catch up, and it will be a must afterward" (1-5). "These digital products can now be copied quickly, especially if you develop them in an agile organization" (1-7). These findings echo the arguments that the rewards of pioneering digital changes are not necessarily enduring49 and suggest that approaches where digital complements human can provide, at best, a temporary competitive advantage.

In contrast, approaches where digital augments human arose in the findings as potential sources of sustainable competitive advantage. Two resource-based arguments support this implication. First, unstructured information was found to be a roadblock on the way toward making sense of data. Therefore, leveraging data could be the privilege of business banks that can cross this barrier by connecting the dots within their complex systems. Second, adopting a transformational mindset arose from the interviews as a prerequisite and a key challenge of reaching the most advanced phases of digital transformation where digital augments human. However, once a culture that integrates human and digital becomes a part of the "unspoken, unperceived common sense of the firm,"50 it is also difficult to imitate. Thus, if corporate banks are able to combine their traditional resources of skilled employees and proprietary data with digital capabilities to support their clientele with data-driven, personalized services, this can provide a sustainable advantage. The consequent improvement of the business relationship results in even more customer data, further strengthening the resource position barriers51 and increasing the durability of competitive advantage.

Managerial Recommendations

Four specific suggestions stem from the findings, which can be instrumental for managers in incumbent service firms to successfully guide their organizations through a digital transformation.

◾ Define a digital roadmap-Although sporadic digital initiatives may have a viable stand-alone business case, launching them in an uncoordinated manner carries the risk of disregarding the logical dependencies between the different kinds of initiatives and thus limiting the benefits gained. Moreover, adding new elements to the already opaque banking systems generates more complexity, decreasing the understandability of the system and potentially leading to unmanageability.52 Such pitfalls could be avoided with a comprehensive digital business strategy, including a roadmap that guides and connects digital initiatives. Such a digital roadmap should consider the firm's target market, existing resources and capabilities, and how much the intended service model is skewed toward high tech versus high touch. Based on this, the roadmap should mark out which stage of service model innovation will be crucial (are the benefits to be gained from replacing, complementing, or augmenting human with digital), articulate the most important resources needed, and outline the main intended value sources. For example, a professional service provider targeting the largest companies with bespoke services would likely create and capture the most value by focusing on augmenting human, whereas a firm offering standard services to a wider clientele might want to focus on efficiency gains by replacing human with digital.
◾ Be lean-Even though you should have an overall digital roadmap, it is important to not try to do everything at once. While, in the long term, digitalization is inherently disruptive,53 our findings indicate that its organizational impact can be more gradual. This is in line with arguments that established firms pursuing a future-proof business model should opt for a piecemeal digital transformation.54 Although digital trends can fundamentally transform entire industries, our findings suggest that responding to such trends is best done gradually, rather than attempting to implement a sudden and disruptive transformation. Adopting tools from the "lean startup" methodology,55 such as taking small, calculated risks by launching several "minimum viable product" (MVP)-type digital initiatives of limited scope and adjusting based on the associated customer feedback, is generally better than attempting to transform the entire service model in one go.

◾ Be receptive to a broad range of value sources-Not all sources of value from digital transformation can be directly translated into monetary terms, and decision-making should incorporate the strategic, less quantifiable aspects as well when evaluating digital initiatives. To complement the more well-known value sources from digital transformation, including efficiency improvements,56 customer engagement,57 complements,58 and novelty,59 we identify mutual clarity and transparency as an additional value source that has thus far received little scholarly attention. Transparency alleviates uncertainty, for example, by increasing the feeling of progress when queuing60 and making customers less sensitive to their wait time.61 It further helps customers to better understand and appreciate the work done on their behalf,62 which can increase the perceived value of the service, customer satisfaction, willingness to pay, and loyalty.63 Transparency also helps in building trust,64 which is a vital value creation ingredient in relationship-based service industries such as corporate banking.65 The finding that digitalization helps unlock the value of mutual clarity also appears to be transferable to other industries. Providing online delivery updates based on carrier information and allowing customers to track and trace the full path of the order has become a standard operating model in B2C services. Similar requirements are trickling into B2B markets where the stakes and the value of predictability are even higher, and businesses are becoming less tolerant of non-transparent processes. Our findings suggest that firms looking to benefit from digitalization should consider the transparency that digital technologies allow as a potentially important source of value creation.
◾ Know where your competitive advantage lies-Once there is a better view of the potential sources of competitive advantage, decisions on how to develop digital capabilities can be better underpinned. Involving external developers or considering partnerships with fintechs might be valid choices for an incumbent in the case of fully automated solutions that are expected to be copied and are therefore unlikely sources of long-term competitive advantage. On the other hand, data-driven solutions considered central to the augmented human approach are best developed in-house to protect the capabilities contributing to sustainable competitive advantage. Understanding when to build, when to buy, and when to partner helps optimize scarce internal development capacity. In particular, the finding that digitization of daily banking services offers little if any potential for achieving competitive advantage might foster cooperation between fintechs and incumbents. While fintechs struggle to cross the trust gap and reach scale, established banks fight against complexity and face the imperative of allocating scarce resources to projects that most probably will not lead to competitive advantage. This setting should motivate fintechs to reposition themselves from substitutes to complementors and encourage incumbent banks to consider fintechs as potential partners in daily banking services, with incumbents aiming to gain their competitive advantage from more advanced digital applications that augment their human processes.

A final practical observation of note is that we found surprisingly little opposition to digital transformation in any management layer within the Organization. In fact, all interviewed stakeholders considered digital transformation as an imperative, being essential to improve or even maintain competitiveness. Thus, while implementing the blended service model entails a lot of practical complexity and, as a change process, requires special management attention,66 the importance of digital transformation appears to be sufficiently well established in the industry to not require any active management of cognitive inertia and resistance within the firm.

In conclusion, the model of blended service innovation presented in this study can help us understand digital transformation and its repercussions on competitive advantage in the context of professional B2B service providers. Being based on a single-case study, our findings only highlight one potential model of digital transformation, and the process may need to be adjusted based on contextual factors such as the size of the firm, its current level of digital competence, or the industry environment it is facing. However, the findings suggest that the dynamics between relationship-oriented firms and the increasingly digital environment warrant further managerial and scholarly attention to maximize the positive impact of digital advancements and to realize their potential for generating competitive advantage.

Figure 1. Elements of digital transformation in the case firm.

Figure 2. Building blocks of a blended service model.

Table 1. List of interviewees.
Designing a methodological concept for the diagnosis of early development of the main wheat diseases pathogens

The studies presented in the article were carried out in 2018-2019 on the experimental field of the All-Russian Research Institute of Biological Plant Protection. The aim of the research was to assess the feasibility of diagnosing the early development of major disease pathogens based on the results of ground-based spectrometry and the use of phytomonitoring technology, taking into account the genotypes of different winter wheat varieties. There were three options of experimental plots for the research: the 1st protected against diseases by fungicides, the 2nd with an artificial infectious background, and the 3rd with the natural development of diseases. According to the results of data analysis, the most significant changes in the spectral characteristics of the studied plant backgrounds were noted at the time of the first signs of disease, in the form of a decrease in the spectral brightness coefficient in the near infrared range. Using special tools in the experimental plots, the following pathogens were identified before the appearance of disease symptoms: Blumeria graminis (DC.) Speer f. sp. tritici Marchal, Puccinia striiformis West., Pyrenophora tritici-repentis Died., Puccinia triticina Erikss. Data on the disease development and plant infestation by pathogens are compared with the spectrometric measurements.

Wheat is the main grain crop cultivated around the world. The global area occupied by wheat is 215.5 million hectares. According to the Ministry of Agriculture of the Russian Federation, in 2018 winter wheat occupied 15.3 million ha, which corresponds to the figure a year earlier. In Krasnodar Krai winter wheat occupies an area of 1.5 million hectares. This crop is exposed to a large range of leaf-stem pathogens [1-3]. The vital activity of phytopathogens causes a loss in the quality and quantity of grain, reducing yield by up to 90 % [4, 5]. To provide perfect plant protection against pests, timely and accurate phytosanitary monitoring is very important [6]. Effective phytosanitary monitoring is possible only with the early detection of aerogenic infection and the sources of its origin [7]. There are studies on the detection of spores of Fusarium pathogens and rust pathogens in the air over wheat and sugar beet crops using special tools and methods [8-10].
The development of remote methods for diagnosing crop conditions using Earth remote sensing data is an extremely promising area nowadays [11-13]. There are studies aiming to identify differences in the development of phytopathogens in different crops based on hyperspectral measurements [14-16]. There are also known results of studies on the development of remote methods for the detection of rust diseases in winter wheat crops based on changes in the spectral characteristics of plant objects [17]. Successful work was carried out to model the ecological niche of wheat septoria using remote sensing [18]. Thus, the development of a fundamental scientific and methodological basis for the early diagnosis of the main wheat disease pathogens using remote hyperspectral measurements and monitoring tools is extremely important.

The aim of the research is to develop a methodological basis for diagnosing the early development of economically significant pathogens of wheat diseases based on the analysis of ground spectrometric data and monitoring tools, taking into account the characteristics of the genotypes of different winter wheat varieties. To achieve this goal, we organized test plots of four winter wheat varieties (Kuren, Bonus, Aksinya, Krasnodarskaya 99), which are characterized by different degrees of resistance to leaf-stem diseases [2], on the experimental fields of ARRIBPP. Each site was divided into three zones: the 1st protected against diseases by fungicides (clean background), the 2nd with an artificial infectious background of brown rust (infected), and the 3rd with the natural development of diseases. To develop brown and yellow rusts in the experimental plot, the method of artificial infection of winter wheat plants with spores of these phytopathogens was used [19, 20]. Infection of winter wheat plants was carried out on April 16 in the "beginning of stem elongation" phase (Z 30-32). The development of yellow leaf spot, septoria and powdery mildew pathogens occurred upon a natural infectious background. The clean background (without disease) was maintained by treating the selected area twice with the systemic fungicide Falcon KS: the first treatment on April 25, 2019 (the "flag leaf" phase) and the second on May 9, 2019 (the "beginning of flowering" phase, Z 61). Disease recording was carried out from the moment of the initial disease signs, which were noted on April 30, 2019 in the "flag leaf" phase (Z 40-47), and subsequently to the phase of "milk-wax ripeness of grain" (Z 75) with an interval of 10-12 days. The degree of plant damage by diseases was evaluated as a percentage according to the international methods [19, 20]. Spectrometric measurements of wheat crops on the test plots were carried out daily, from the moment of artificial infection with phytopathogens until the appearance and amplification of visible disease symptoms, using a FieldSpec 3 Hi-Res spectroradiometer [21]. Along with the hyperspectral measurements, air samples were taken over and within the crops of the different varieties on the natural and artificial infectious backgrounds using the portable determinant of plant infestation OZR-1mp [7].
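To make the analysis described below concrete, a minimal sketch is given of how the daily spectra can be reduced to the single quantity examined in this study, the spectral brightness coefficient (SBC) in the 800 nm channel, grouped by plot and date. This is an illustration only, not the authors' actual processing pipeline: the two-column ASCII export format, the file naming scheme and the "spectra" directory are assumptions made for the example.

# Minimal sketch (not the authors' pipeline): reduce daily field spectra to
# the SBC in the 800 nm channel and group the values by plot and date.
# Assumed (hypothetical) layout: one two-column ASCII file per plot per day,
# named "<plot>_<YYYY-MM-DD>.txt" with columns (wavelength_nm, reflectance).
import glob
import os
import numpy as np

TARGET_NM = 800.0  # centre of the near-infrared channel used in the study

def sbc_at(path: str, wavelength_nm: float = TARGET_NM) -> float:
    """Interpolate the reflectance (SBC) at the target wavelength."""
    spectrum = np.loadtxt(path)               # columns: wavelength_nm, reflectance
    wl, refl = spectrum[:, 0], spectrum[:, 1]
    return float(np.interp(wavelength_nm, wl, refl))  # wl must be ascending

def daily_series(data_dir: str) -> dict:
    """Collect SBC values as {plot: {date: sbc}} for all exported spectra."""
    series: dict = {}
    for path in sorted(glob.glob(os.path.join(data_dir, "*.txt"))):
        stem = os.path.splitext(os.path.basename(path))[0]
        plot, date = stem.rsplit("_", 1)      # e.g. "kuren-infected_2019-04-30"
        series.setdefault(plot, {})[date] = sbc_at(path)
    return series

if __name__ == "__main__":
    # Print one SBC(800 nm) value per plot per day, so the clean, infected and
    # natural backgrounds can be compared over the measurement period.
    for plot, by_date in sorted(daily_series("spectra").items()):
        for date, sbc in sorted(by_date.items()):
            print(f"{plot:24s} {date}  SBC(800 nm) = {sbc:.3f}")

Reducing each spectrum to one near-infrared value keeps the comparison between backgrounds simple; a fuller treatment would average repeated scans per plot and propagate measurement uncertainty, which the sketch omits.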
In order to identify specific spectral ranges indicating the signs of changes caused by exposure to harmful objects, we analyzed the changes in the morphology of the spectral signatures of the spectral brightness coefficient (SBC) of plant objects depending on their actual condition as recorded during field surveys. According to the results of the graphic data analysis, it was found that the most significant differences in the reflectivity of the studied objects appear in the near infrared range of the spectrum; therefore, an analysis of the change in the SBC values for all plots over the entire measurement period (April 26, 2019 - May 26, 2019) in the 800 nm channel was carried out in comparison with the data on the development of diseases and the degree of plant infestation in the crops of the test plots (Table 1). The 800 nm channel is the center of the near infrared range and is often used by various types of imaging systems [22].

During the incubation period, before the appearance of externally diagnosable signs of disease development (April 26 to April 29, 2019), there were no significant changes in the reflectivity of winter wheat plants indicating the influence of pathogens on the condition of the crops. The SBC values of the compared plant backgrounds in the 800 nm channel did not differ significantly. At this time, with the help of special spore-trapping devices, the following pathogens were identified on the test plots: Blumeria graminis, Puccinia striiformis, Pyrenophora tritici-repentis, Puccinia triticina. Powdery mildew spores predominated in number over those of the other pathogens. Moreover, the number of spores detected on the infected and natural backgrounds was 2-3 times higher than on the control plots treated with the fungicide. On all test plots without exception, yellow rust spores were recorded in amounts of 1 to 9. Single brown rust spores were found mainly on plots sown with the pathogen-susceptible varieties Aksinya and Krasnodarskaya 99. No septoria spores were found on any plot, while tan spot spores were present on plots of all three backgrounds in amounts of 1 to 3 pcs.

The first visible changes in the spectral characteristics of the studied plant backgrounds, appearing as a decrease in the SBC on the infected plots of all varieties, were noted in the data obtained from April 30 to May 2, 2019. At that moment the first single signs of yellow rust were recorded on the resistant variety Kuren, as well as single signs of P. tritici-repentis on the plots of the remaining varieties. The nature of the quantitative distribution of powdery mildew spores did not change compared with the previous period, before the appearance of external signs of disease. The number of brown rust spores increased to 2-5 pcs. per plot. The largest numbers of yellow rust spores were recorded on the infectious backgrounds of the varieties Kuren and Aksinya, amounting to 23 and 33 pcs. respectively, while the average amount on the remaining plots varied from 1 to 17 pcs. The number of pyrenophorosis spores also increased, up to 2-4 pcs., with a maximum of 11 pcs. on the natural background of the Aksinya variety.

During the period of the most intensive development of diseases, from May 10 to May 25, 2019, powdery mildew spores predominated, while the development of the disease, regardless of variety, averaged 3-5 % on all plots. The most intensive development of brown rust, amounting to up to 5 %, was observed on the infected and natural backgrounds of the pathogen-susceptible variety Krasnodarskaya 99. The highest numbers of pathogen spores, from 33 to 54 pcs., were recorded on the same backgrounds. The maximum development of yellow rust was 10 %, recorded on the infected and natural backgrounds of the varieties Kuren, Bonus, and Aksinya. Accordingly, the numbers of yellow rust spores found on these plots were the largest, ranging from 10 to 18 pcs. The minimum percentage of pathogen development was noted on the control backgrounds of all four varieties. The development of septoria spot reached 1-3 % on all plots. Tan spot was also observed on all plots with a degree of development of 1 %, and the average number of spores varied from 1 to 7 pcs. The greatest numbers of pathogen spores, up to 11-13 pcs., were found on plots sown with the Aksinya variety, which is susceptible to this pathogen.

Despite the greater development of the pathogenic background on the infected and natural plots compared with the control, the values of the spectral response in the 800 nm channel were ambiguous. Thus, the SBC values of the control background of the Kuren variety were lower than those of the infected and natural backgrounds, while for the remaining varieties the SBC values of the three compared backgrounds were almost equal. Most likely, this can be explained by the superposition of other factors causing inaccuracies in the measurements: uneven seeding, the effects of external harmful objects, planned field treatments, and inaccuracies in following the spectrometry technique.

A key feature of the studies is the identification of the spectral characteristics of plants damaged by pathogens not under laboratory conditions but in their natural environment. This work showed the difficulty of collecting and analyzing information under these conditions, but it also demonstrated their fundamental feasibility. We found that, according to the multimodal data of ground-based spectrometry, it is possible to detect changes in the condition of winter wheat crops in the early stages of pathogen development. Using special tools in the experimental plots, the following pathogens were identified before the appearance of disease symptoms: B. graminis, P. striiformis, P. tritici-repentis, P. triticina. The system of remote monitoring of phytopathogen spores using special equipment has shown its viability and the ability to monitor economically significant diseases in detail. These studies are expected to continue in 2020, taking into account the experience of 2019, with the development of methods to improve the reliability of the results.

The research was supported by the RFBR grant and the Administration of Krasnodar Krai No. 19-416-230043 r_a.
The diquark and elastic pion-proton scattering at high energies

The small momentum transfer elastic pion-proton cross-section at high energies is calculated assuming the proton is composed of two constituents, a quark and a diquark. We find that it is possible to fit the data very precisely when (i) the pion acts as a single entity (no constituent quark structure) and (ii) the diquark is rather large, comparable to the size of the proton.

Introduction

In this note we study the quark correlations inside the nucleon (forming a diquark [1]) in the context of elastic pion-proton scattering at low momentum transfer. The interest in this process is a consequence of Ref. [2], where elastic proton-proton scattering, assuming the proton is composed of a quark and a diquark, was discussed. We found that (i) it was possible to fit very precisely the ISR elastic pp data [3] even up to −t ≈ 3 GeV² and (ii) the diquark turned out to be rather large, comparable to the size of the proton. Moreover, we found that the quark-diquark model of the nucleon in the wounded [4] constituent model [5] allows one to explain very well the RHIC data [6] on particle production in the central rapidity region.

Given the above arguments, it is interesting to explore the model for another process. The natural one is elastic pion-proton scattering. Following [2] we consider the proton to be composed of two constituents, a quark and a diquark. As far as the pion is concerned we consider two cases. The first one treats the pion as an object composed of two constituent quarks; the second one treats the pion as a single object, i.e. an object without constituent quark structure. For both cases we evaluate the inelastic pion-proton cross-section, σ(b), at a given impact parameter b. Then, from the unitarity condition we obtain the elastic amplitude in impact parameter space and, consequently, the elastic amplitude in momentum transfer representation:

t_el(b) = 1 − √(1 − σ(b)),    T(Δ) = (1/2π) ∫ t_el(b) e^{iΔ·b} d²b.

With this normalization one can evaluate the total cross section:

σ_tot = 4π T(0),

and the elastic differential cross section (t ≃ −|Δ|²):

dσ_el/dt = π |T(Δ)|².

Our strategy is to adjust the parameters of the model so that it best fits the data for the elastic pion-proton cross-section. In this way the model can provide some information on the details of the proton and pion structure at small momentum transfer.

Pion as a quark-quark system

We follow closely the method presented in [2], where elastic and inelastic proton-proton collisions were studied. Consequently, the inelastic pion-proton cross-section at a fixed impact parameter b, σ(b), is given by:

σ(b) = ∫ D_p(s_q, s_d) D_π(s_q1, s_q2) σ(s_q, s_d; s_q1, s_q2; b) d²s_q d²s_d d²s_q1 d²s_q2,

where D_p(s_q, s_d) and D_π(s_q1, s_q2) denote the distribution of the quark (s_q) and diquark (s_d) inside the proton and the distribution of the quarks (s_q1, s_q2) inside the pion, respectively. σ(s_q, s_d; s_q1, s_q2; b) is the probability of inelastic interaction at fixed impact parameter b and frozen transverse positions of all constituents. A schematic view of this process is shown in Fig. 1.

[Figure 1: Pion-proton scattering in the quark-diquark model. Pion as a quark-quark system.]

Following [2] we parametrize σ_ab(s) using simple Gaussian forms:

σ_ab(s) = A_ab e^{−s²/R_ab²},

and we constrain the radii R_ab by the condition R_ab² = R_a² + R_b², where R_a denotes the quark's or diquark's radius. From this parametrization we obtain the total inelastic cross sections σ_ab = π A_ab R_ab². Following [2] we assume that the ratios of cross-sections satisfy the condition σ_qd/σ_qq = 2, which allows us to evaluate A_qd in terms of A_qq.
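To make the normalization above concrete, here is a small numerical sketch assuming the unitarity relations as reconstructed above; the Gaussian inelastic profile and its parameters are purely illustrative, not the fitted model:

```python
import numpy as np
from scipy.special import j0

# Toy inelastic profile sigma(b) = A * exp(-b^2 / B); units: b in GeV^-1,
# so cross sections come out in GeV^-2 (1 GeV^-2 ~ 0.3894 mb).
A, B = 0.9, 20.0                      # illustrative values, not a fit
b = np.linspace(0.0, 30.0, 3000)      # impact parameter grid (GeV^-1)
sigma_b = A * np.exp(-b**2 / B)

# Unitarity: t_el(b) = 1 - sqrt(1 - sigma(b))
t_el = 1.0 - np.sqrt(1.0 - sigma_b)

# Azimuthal symmetry turns the 2D Fourier transform into a Hankel transform:
# T(Delta) = integral_0^inf t_el(b) J0(Delta*b) b db
delta = np.linspace(0.0, 2.0, 200)    # |Delta| in GeV
T = np.array([np.trapz(t_el * j0(d * b) * b, b) for d in delta])

GEV2_TO_MB = 0.3894
sigma_tot = 4.0 * np.pi * T[0] * GEV2_TO_MB   # sigma_tot = 4*pi*T(0)
dsigma_dt = np.pi * T**2 * GEV2_TO_MB         # dsigma/dt = pi*|T|^2, t = -Delta^2
print(f"sigma_tot ~ {sigma_tot:.1f} mb")
print(f"dsigma/dt at t=0 ~ {dsigma_dt[0]:.1f} mb/GeV^2")
```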
For the distribution of the constituents inside the proton we take a Gaussian with radius R:

D_p(s_q, s_d) = (1+λ²)/(πR²) e^{−(s_q² + s_d²)/R²} δ²(s_d + λ s_q),

where the parameter λ has the physical meaning of the ratio of the quark and diquark masses, λ = m_q/m_d (the delta function guarantees that the center-of-mass of the system moves along the straight line). One expects 1/2 ≤ λ ≤ 1. For the distribution of quarks inside the pion we take a Gaussian with radius d:

D_π(s_q1, s_q2) = 2/(πd²) e^{−(s_q1² + s_q2²)/d²} δ²(s_q1 + s_q2).

It allows one to define the effective pion radius R_π, with R_π² = d²/2. Now the calculation of σ(b) reduces to straightforward Gaussian integrations. The relevant formula is given in the Appendix. Introducing this result into the general formulae given in Section 1, one can evaluate the total and elastic differential pion-proton cross-sections. Our strategy is to adjust the parameters of the model so that it fits the data best. We have analyzed the data for elastic π⁺p scattering at two incident momenta, p_lab = 100 GeV and 200 GeV [9]. An example of our calculation is shown in Fig. 2, where the differential cross section dσ/dt at p_lab = 200 GeV, evaluated from the model, is compared with data [9].

[Figure 2: The model compared to data [9] on the differential cross section for elastic π⁺p scattering at p_lab = 200 GeV (not all points plotted, for clarity). Pion as a quark-quark system.]

From Fig. 2 one sees that it is possible to fit very precisely the data up to −t ≈ 1 GeV². However, the model, with the pion as a two-quark system, predicts a diffractive minimum which is not seen in the data. In the next section we show that assuming the pion to be a single entity (no constituent quark structure) we are able to remove this problem. The relevant values of the parameters are given in Table 1. It is intriguing to notice that the values of the most interesting parameters, R_q and R_d, are not far from those obtained in [2], where elastic pp scattering was studied in a similar approach. Again we observe that the diquark is rather large.

Pion as a single entity

In the present section we assume that the pion interacts as a single entity, i.e. the pion has no constituent quark structure. A schematic view of pion-proton scattering in this approach is shown in Fig. 3.

[Figure 3: Pion-proton scattering with the pion as a single entity.]

The inelastic pion-proton cross-section σ(b) at a fixed impact parameter b reads:

σ(b) = ∫ D_p(s_q, s_d) σ(s_q, s_d; b) d²s_q d²s_d,

with D_p(s_q, s_d) given above and σ(s_q, s_d; b) expressed by:

σ(s_q, s_d; b) = σ_qπ(b − s_q) + σ_dπ(b − s_d) − σ_qπ(b − s_q) σ_dπ(b − s_d).

In analogy to the previous approach, the inelastic differential quark-pion, σ_qπ(s), and diquark-pion, σ_dπ(s), cross sections are parametrized using simple Gaussians:

σ_aπ(s) = A_aπ e^{−s²/R_aπ²}.

In this case we constrain the radii R_aπ by the condition R_aπ² = R_a² + R_π², where R_a denotes the quark's or diquark's radius and R_π denotes the pion's radius. This gives:

σ(b) = A_qπ x/(x+r) e^{−b²/(x+r)} + A_dπ y/(y+λ²r) e^{−b²/(y+λ²r)} − A_qπ A_dπ xy/(xy + yr + λ²xr) e^{−b² (x + y + (1+λ)²r)/(xy + yr + λ²xr)},

where x = R_q² + R_π², y = R_d² + R_π² and r = R²/(1+λ²). (The model is almost insensitive to the value of λ, provided that 1/2 ≤ λ ≤ 1.) Introducing this result into the general formulae given in Section 1, one can evaluate the total and elastic differential pion-proton cross-sections. From the Gaussian parametrization we deduce the total inelastic cross sections σ_aπ = π A_aπ R_aπ². As before we demand that the ratios of cross-sections satisfy the condition σ_dπ/σ_qπ = 2, which allows us to evaluate A_dπ in terms of A_qπ. It turns out that the model in this form works very well indeed, i.e. it is possible to fit very precisely the data even up to −t ≈ 3 GeV². We have analyzed the data at two incident momenta of 100 and 200 GeV [9].
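As a sanity check on the closed form above, the profile can be evaluated numerically. The sketch below uses illustrative parameter values, not the fitted values of Table 2, and fixes A_dπ from the condition σ_dπ/σ_qπ = 2; its output could be fed directly into the Hankel-transform pipeline sketched in the previous section:

```python
import numpy as np

def sigma_b_single_entity(b, A_qpi, R_q, R_d, R_pi, R, lam):
    """Closed-form inelastic profile for the pion-as-single-entity model,
    as reconstructed above. All radii and b in GeV^-1."""
    x = R_q**2 + R_pi**2
    y = R_d**2 + R_pi**2
    r = R**2 / (1.0 + lam**2)
    # sigma_dpi/sigma_qpi = 2 fixes A_dpi, since sigma_api = pi*A_api*R_api^2
    A_dpi = 2.0 * A_qpi * x / y
    quark = A_qpi * x / (x + r) * np.exp(-b**2 / (x + r))
    diquark = A_dpi * y / (y + lam**2 * r) * np.exp(-b**2 / (y + lam**2 * r))
    denom = x * y + y * r + lam**2 * x * r
    cross = (A_qpi * A_dpi * x * y / denom
             * np.exp(-b**2 * (x + y + (1 + lam)**2 * r) / denom))
    return quark + diquark - cross

# Illustrative parameters only (GeV^-1); not the values quoted in Table 2.
b = np.linspace(0.0, 30.0, 5)
print(sigma_b_single_entity(b, A_qpi=0.6, R_q=1.5, R_d=4.0, R_pi=3.0, R=3.0, lam=0.5))
```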
The results of our calculations are shown in Fig. 4. The relevant values of the parameters are given in Table 2. Again the most interesting observation is the large size of the diquark, comparable to the size of the proton.

[Figure 4: Pion acts as a single entity. The model compared to data [9] (not all points plotted, for clarity) on the differential cross section for elastic π⁺p scattering at p_lab = 100 GeV (rescaled by a factor 10⁻²) and 200 GeV.]

Discussion and conclusions

In conclusion, it was shown that the constituent quark-diquark structure of the proton can account very well for the data on elastic π⁺p scattering. The confrontation with data allows one to determine the parameters characterizing the proton and pion structure. We confirm the large size of the diquark, while the pion seems to interact as a single entity, i.e. without constituent quark structure. Several comments are in order. (a) We compared the model only to elastic π⁺p scattering data; however, there is no statistically significant difference between π⁺p and π⁻p data [9] at any t value (at least up to −t ≈ 3 GeV²). (b) The pion seems to interact as a single entity. This suggests that during a pion-nucleus collision the pion produces the same number of particles no matter how many inelastic collisions it undergoes. (Other integrals needed in the Appendix can be obtained by putting some of x₁, x₂, y₁ or y₂ equal to 0.)
Contributions of Multilevel Family Factors to Emotional and Behavioral Problems among Children with Oppositional Defiant Disorder in China

Oppositional defiant disorder (ODD) is one of the most prevalent childhood mental health disorders and is strongly affected by family factors. However, limited studies have addressed the issue from the perspective of family systems. The current study examines the associations between multilevel family factors (i.e., family cohesion/adaptability at a system level, mother-child and father-child attachment at a dyadic level, and child self-esteem at an individual level) and emotional and behavioral problems among children with ODD in China. The participants were 256 Chinese children with ODD and their parents and class master teachers. A multiple-informant approach and structural equation modeling were used. The results revealed that the system level factor (family cohesion/adaptability) was associated with child emotional and behavioral problems indirectly through factors at the dyadic level (mother-child attachment) and the individual level (child self-esteem) in sequence. Mother-child, but not father-child, attachment mediated the linkage between family cohesion/adaptability and the emotional problems of children with ODD. Moreover, child self-esteem mediated the association between mother-child attachment and child emotional and behavioral problems. The findings of the present study underscored that multilevel family factors are uniquely related to emotional and behavioral problems in children with ODD.

Introduction

Oppositional defiant disorder (ODD) is characterized by a recurrent pattern of angry/irritable mood, argumentative/defiant behavior, and vindictiveness toward authority figures or adults [1]. Previous findings indicated that children with ODD have comorbid emotional and behavioral problems, such as depression and aggressive behavior [2]. Indeed, 45.8% of those with a lifetime diagnosis of ODD met the criteria for depressive disorder [3], and depression was a key contributor to behavioral problems in childhood [4]. Additionally, childhood ODD has been associated with an increased risk of conduct disorder (CD) [5], which carries a high probability of developing into antisocial personality disorder in adulthood [1]. Due to the significant risk that emotional and behavioral problems pose to adjustment in typically developing children [6,7], children with ODD who have comorbid emotional and behavioral problems might be at much higher risk of adverse outcomes [1,8,9]. Therefore, it is necessary to examine the factors that influence emotional and behavioral problems in children with ODD. Studies examining these links are reviewed below.

Factor at System Level Associated with Emotional and Behavioral Problems in Children with ODD

At the system level of the family environment, researchers have underscored the contribution of family functioning to the emotional and behavioral problems of children with ODD [13]. Olson (2000) pointed out that family cohesion and adaptability are two core components of family functioning. Family cohesion refers to the emotional connection among family members [14,15], while family adaptability refers to the ability of a family system to change its power structure, role relationships, and relationship rules in response to situational and developmental stress [14]. According to the McMaster model of family functioning [16], poor family functioning might lead to less open communication and a steady accumulation of negative emotions.
Children in such a family environment would gradually learn the negative interaction pattern, which increases the risk of physical diseases and problem behaviors. Empirical research has also examined the role of family cohesion/adaptability in child problem behaviors. For example, Lavigne and colleagues (2012) found that family cohesion/adaptability attenuated the risk of emotional and behavioral problems in children with ODD. On the contrary, family conflict appeared to facilitate emotional and behavioral problems in children [11].

Factor at Dyadic Level Associated with Emotional and Behavioral Problems in Children with ODD

Regarding the dyadic level, studies have tended to explore dysfunctional parent-child interactions independently of other family-related interactions [17,18]. Research based on attachment theory has revealed that parent-child attachment is associated with children's problem behaviors [6]. Attachment theory implies that parent-child attachment is prominently linked to child problem behaviors by shaping the internal working model of the child. If the caregivers lack sensitivity or are even frightening to the children, the children will tend to be insecurely attached and more likely to develop a maladaptive internal working model [19]. Repeated experiences of caregiver insensitivity would lead to dysfunctional cognitions about the self and others and negative expectations in interpersonal interactions, which might increase the risk of problem behaviors. Empirical studies consistently indicated that a lower level of parent-child attachment was associated with more emotional and behavioral problems in children [20,21]. Similar findings have been reported for children with ODD as well [10].

Attachment theory further asserts that children can form multiple attachments, i.e., they might develop distinct attachments with their mothers and fathers [22]. As such, it is important to point out that researchers have emphasized the need to consider fathers and mothers separately when examining their respective contributions to child development [7]. For one thing, mothers and fathers play different caregiving roles in families and have distinct interaction patterns with children; therefore, mother-child and father-child attachments might be uniquely associated with children's development [23]. According to the dominant hypothesis [24], a child's attachment to his or her mother plays a pivotal part in his or her psychological development, due to the fact that the mother and child have more opportunities to spend time together [25]. According to the specificity hypothesis [26], the attachment a child forms with either his or her mother or father has distinct effects on his or her development [27]. Furthermore, during middle childhood, mothers and fathers are inclined to interact with children separately, increasing the opportunities for mother-child and father-child attachment to play different roles in child development [28]. However, mixed findings have emerged regarding the links of mother-child and father-child attachment with problem behaviors in children. Specifically, some have argued that mother-child attachment was closely related to child emotional problems [29], while Pan et al. (2016) proposed that father-child attachment was more crucial than mother-child attachment in predicting psychological health in Chinese children. However, Carter (2014) argued that secure attachments with both mothers and fathers protected children from worse emotional symptoms [30].
We do not yet have a full understanding of how mother-child and father-child attachments uniquely predict emotional and behavioral problems in children with ODD. Therefore, this study intends to distinguish the two parenting roles and discuss them separately.

Factor at Individual Level Associated with Emotional and Behavioral Problems in Children with ODD

With regard to individual level factors, according to the multilevel family factors model and previous research findings, several individual child factors are associated with the development of child ODD symptoms, such as individual characteristics (children's temperament), cognitive factors (social cognition), and emotion-related factors (emotion regulation) [12,31,32]. This research focused on child self-esteem, which has been found to play a predictive role in co-occurring emotional and behavioral problems. Given that children with ODD frequently receive negative social feedback throughout their development, they are more likely to experience lower levels of self-esteem [8,9]. Based on the self-esteem theory of depression, low self-esteem is one of the most important susceptibility qualities for depression [33], and social bonding theory points out that low self-esteem contributes to less conformity to social norms and more problem behaviors [34]. Other studies have also revealed that child self-esteem is linked to the overall outcomes of children, including emotional and behavioral problems. For instance, a longitudinal study conducted by Leeuwis and colleagues (2014) indicated that low self-esteem was a strong predictor of subsequent internalizing symptoms in children [35]. Lin and colleagues (2014) also found that lower self-esteem was associated with higher levels of depression and more aggressive behaviors in children with ODD [36]. However, less is known about the role of child self-esteem in the emotional and behavioral problems of children with ODD in the family context. As such, the present study aims to explore how child self-esteem, as an individual level factor, is related to other family factors at the system level and the dyadic level, and ultimately to child emotional and behavioral outcomes.

Interplay among Factors at Three Levels

According to person-context interaction theory [37], environmental factors vary from distal to proximal, and the processes of interplay between the distal and proximal environmental factors affect the development of an individual. According to Magnusson and Stattin (1998), distal environmental factors shape the functioning and development of proximal factors, as well as of the individuals within them. In the family system, family functioning is a distal environmental factor, parent-child attachment is a proximal environmental factor, and child self-esteem is the most proximal factor for children [12,38]. As such, family functioning might directly and indirectly predict child problem behaviors via the parent-child relationship and the child's self-esteem. Additionally, the parent-child relationship might directly and indirectly predict child problem behaviors via the child's self-esteem. Indeed, families with poor functioning tend to have poor communication, which would lead to less parent-child interaction and a lower level of parent-child attachment. Consequently, children might be likely to form maladaptive internal working models and develop low self-esteem. All of these might cause or exacerbate problem behaviors in children. Some preliminary empirical evidence supports these assumptions.
For instance, the effect of family cohesion/adaptability on child depression is likely to be mediated by parenting (dyadic level), parental depression, and child temperament (individual level) [39]. Liu and colleagues (2018) found that child self-esteem mediated the spillover effects between family cohesion/adaptability and the emotional problems of children [40]. Additionally, existing evidence supports that child self-esteem serves as a mechanism explaining the link between parent-child attachment and the emotional and behavioral problems of children [41,42]. However, it remains unknown whether the processes of interplay among three different system levels could be uniquely associated with emotional and behavioral problems in children with ODD. In the current study, we include family cohesion/adaptability at a system level, mother-child and father-child attachment simultaneously at a dyadic level, and child self-esteem at an individual level to explore the roles that family factors at different levels play in the emotional and behavioral problems of children with ODD.

Influence of Chinese Culture

Since the cultural context affects both the whole family and the individual family member, it is essential to understand the associations between family factors and child outcomes in the cultural context. In various aspects, Chinese culture differs from Western culture. First, Chinese parents are typically more involved in their children's upbringing than American parents [43]. Under the influence of Confucianism, Chinese society adopted hierarchical parent-child interactions and disciplinarian parental socialization [43], which may contribute to a substantial parental effect on child development. Second, it should be noted that most children in the present study were the only child in their families. With an only child, some families adopted a "child-centered" parenting style [44]. While this parenting style might improve the quality of parent-child attachments, it might also increase emotional and behavioral problems in children [45]. For one thing, a child-centered approach means that parents might spoil their children and fail to discipline children's daily behaviors, which increases the risk of behavior problems. For another, a child-centered approach may lead parents to place high expectations on their children and expect excellent school performance, while paying less attention to their children's psychological needs. All of these factors might increase children's emotional and behavioral problems. Third, because of traditional gender roles, there is a well-known expectation in some Asian, African, or economically underdeveloped countries, "men outside the home, women inside (Nan Zhu Wai, Nv Zhu Nei)", and China is one of them [46]. Traditionally, Chinese mothers tend to take on full caregiving responsibilities in the household, while fathers are responsible for providing the financial necessities of the household. This division of household labor may contribute to a closer bond between children and their mothers than their fathers. Additionally, Chinese society expects married women to be "good wives and mothers (Xian Qi Liang Mu)", while men are expected to "earn money to support the family". These different cultural expectations lead mothers to devote more time and energy to raising children than fathers. Therefore, Chinese mothers may have a greater influence on their children's socio-emotional development.
In contemporary Chinese families, however, parental notions of parent-child attachment and its influence on children are shifting [47]. Modern Chinese mothers demonstrate less closeness and connection with their children compared with mothers of the previous century [47]; fathers play a less "authoritarian" role in the family and have more intimate interactions with children [48]. Nevertheless, it is plausible that traditional and contemporary parenting practices coexist in Chinese households, and it is unknown how the effects of parent-child attachment on children's development differ depending on the role of the parent.

The Present Study

The current study examined how multilevel family factors were differently related to emotional and behavioral problems in Chinese children with ODD. Specifically, we included family cohesion/adaptability as the system level factor, mother-child and father-child attachment concurrently as the dyadic level factors, and child self-esteem as the individual level factor (see Figure 1 for the proposed model). Three questions were explored: (a) whether family cohesion/adaptability is significantly related to the emotional and behavioral problems of children directly or indirectly through both dyadic level (mother-child and father-child attachment) and individual level (child self-esteem) factors; (b) whether child self-esteem would mediate the linkages between mother-child and father-child attachments and emotional and behavioral outcomes; and (c) whether mother-child attachment would be more closely linked to the emotional and behavioral problems of children than father-child attachment. Based on the theoretical background and empirical research, three hypotheses were proposed: (a) family cohesion/adaptability is significantly related to the emotional and behavioral problems of children directly or indirectly through both dyadic level (mother-child and father-child attachment) and individual level (child self-esteem) factors; (b) child self-esteem would mediate the linkages between mother-child and father-child attachment and emotional and behavioral outcomes; and (c) mother-child attachment would be more closely linked to the emotional and behavioral problems of children than father-child attachment.

Procedure

There were six primary steps in the recruitment process. First, we obtained the informed consent of schools. Using a convenience sampling method, we reached out to the principals and school psychologists of 20 cooperating primary schools and invited them to participate in this study. Of these schools, 14 elementary schools in Beijing (8), Shandong Province (2), and Yunnan Province (4) agreed to participate. The three areas are located in the North, East, and Southwest of the mainland and represent developed, developing, and undeveloped regions of China. All 14 of these primary schools are day schools, four are priority primary schools, and thirteen are situated inside the city. Enrollment ranged from 300 to 5000 pupils (two schools had fewer than 1000 students and seven schools had more than 2000 students). Second, we obtained the informed consent of class master teachers. We asked the school psychologists to issue research invitations and informed consent forms to class master teachers of grades one through five. Eventually, 187 class master teachers signed informed consent and agreed to participate in our study. Third, nomination. These 187 class master teachers were asked to nominate the children in their classes who might have ODD symptoms according to the eight-item ODD assessment checklist (DSM-IV-TR, 2000); children who had displayed four or more symptoms for at least 6 months with impaired relationship functioning were nominated. Fourth, confirmation. Two clinical psychologists from the research team interviewed each participating class master teacher to confirm the accuracy of the nominations.
The confirmation criteria were based on the DSM-IV-TR diagnostic criteria: (a) elementary students in grades one through five; (b) the child shows four or more symptoms of ODD; (c) the identified ODD symptoms have lasted for six months or more; (d) the child exhibits serious impairment across psychosocial functioning domains; and (e) the child has no intellectual disability or other disorders, such as dyslexia, autism spectrum disorder, etc. Only children diagnosed with ODD by both clinical psychologists were recruited into this study. Eventually, 305 of the total 7966 children were identified as having ODD. Fifth, we obtained the informed consent of parents. Invitation letters and informed consent forms were sent to the parents of the 305 children identified with ODD symptoms. A total of 282 parent-child pairs gave informed consent, and assent forms were obtained (92.5% participation rate). Finally, these 282 children were each asked to forward a package containing a parent survey to his or her primary caregiver. The primary caregiver (either the mother or the father; each family decided for itself according to its actual child-rearing arrangements) was invited to fill out the survey and to return the completed survey to the class master teacher within one week. After parents signed informed consent, children completed the student questionnaire in a school conference room or music room, while trained researchers stayed in the room to provide assistance and explain the meaning of sentences when necessary. Specifically, children in grades 3, 4, and 5 were supervised by one teacher and one clinical psychology researcher. Because children in grades 1 and 2 (ages 6-7) might have trouble comprehending the questionnaire, four to five teachers and researchers were assigned to ensure that each child could be assisted individually if he or she had difficulty completing questions. Both survey methods required children to complete the questionnaires independently; the only difference was the number of researchers. Previous research has shown that the findings of a self-administered survey and an individual interview are compatible and comparable [49]. Class master teachers were also invited to complete a questionnaire to assess the behavior of each child in the study. A total of 256 parent-child dyads completed data collection, including questionnaires finished by at least one parent and by class master teachers. Prior to conducting the study, the Institutional Review Board of [mask for review] University in China approved the research protocol, including the consent procedure [Approval number]. We obtained active consent from parents, students, and teachers prior to data collection, and we promised to keep the participants' information confidential. For interested parents of the identified children, psychiatrists from Anding Hospital, mental health counselors, and a family therapist from the Center of Family Study and Therapy at [mask for review] University offered opportunities for ODD treatment.

Participants

The final ODD sample consisted of 256 parent-child dyads, including 83 father-child dyads and 173 mother-child dyads. The participating children included 186 boys and 69 girls, with 1 child missing gender information. Among these children, 75.8% were the only child in their family.
Fathers' ages ranged between 25 ...

ODD Symptoms

Class master teachers, school psychologists, and two clinical psychologists were asked to assess children's ODD symptoms based on the 8-item ODD diagnostic scale in the DSM-IV-TR (0 = no; 1 = yes; e.g., "often loses temper", "often argues with adults") [1]. Children who met four or more items of the 8-item scale were identified with ODD. Scores were summed across the eight items, and higher sum scores indicated that the child exhibited more ODD symptoms. The Cronbach's α was 0.85 in the current study.

Family Cohesion/Adaptability (Parent Reported)

Family cohesion/adaptability was assessed by the Family Adaptability and Cohesion Evaluation Scale (FACES-II; [14]), which has been validated as an appropriate measure for use in China [51]. FACES-II assesses family functioning in two dimensions: adaptability (14 items; e.g., "In solving problems, the children's suggestions are followed") and cohesion (16 items; e.g., "Family members like to spend free time with each other"). The correlation coefficient between adaptability and cohesion was 0.78 (p < 0.001) in the current study. Each parent reported their perception of family functioning using a 5-point Likert scale (1 = almost never to 5 = almost always). A composite score was created by summing the scores of the two dimensions; a higher total score on FACES-II indicated better adaptability and cohesion in the family. In the current study, the Cronbach's α for FACES-II was 0.84. Additionally, given that the family cohesion/adaptability data were collected from either a father or a mother, an independent t-test was conducted to compare fathers' and mothers' reports. The result indicated no significant difference between fathers' and mothers' reports of family cohesion/adaptability (t = −0.23, p > 0.05).

Parent-Child Attachment (Child Reported)

Parent-child attachment was measured by child report on the Chinese version of the parent subscales of the Inventory of Parent and Peer Attachment (IPPA; [52,53]). This measure and its subscales have been demonstrated to have acceptable construct validity and internal consistency in a sample of Chinese primary school-aged children [54]. Each child was asked to rate their attachment to both mother and father on the following dimensions: trust (5 items; e.g., "My father/mother respects my feelings"); communication (5 items; e.g., "If my father/mother knows something is bothering me, he/she asks me"); and alienation (5 items; e.g., "I am angry with my father/mother"), with parallel wordings of items for assessing relationships with mothers and fathers. All items are rated on a 5-point frequency response scale ranging from 1 (almost never) to 5 (almost always). A composite score was created for each of mother-child and father-child attachment by subtracting the score of the alienation subscale from the summed scores of the trust and communication subscales [53]. Higher scores indicated higher levels of parent-child attachment. In the current study, the Cronbach's α was 0.88 for both mother-child attachment and father-child attachment.

Child Self-Esteem (Child Reported)

Child self-esteem was assessed using the Self-Esteem Scale (SES; [34]). The scale has been shown to be a reliable and valid measure for elementary school-aged children in China [49]. Each participating child reported on their own self-esteem using a 4-point scale (1 = strongly disagree to 4 = strongly agree) on 10 items (e.g., "I am a person of worth"). Reversed scoring was used for the five negatively worded items. Scores were summed to create a composite score; a higher score indicated a higher level of self-esteem. The Cronbach's α was 0.84 in the current study.
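For concreteness, the composite scoring just described (IPPA: trust + communication − alienation; SES: reverse-scoring of negatively worded items before summation) might be implemented as in the sketch below. All column names and values are hypothetical, and which five SES items are negatively worded is an assumption of the sketch:

```python
import pandas as pd

# Hypothetical item-level data for two children; column names are illustrative.
df = pd.DataFrame({
    # IPPA mother-attachment items (rated 1-5): trust, communication, alienation
    **{f"m_trust_{i}": [4, 5] for i in range(1, 6)},
    **{f"m_comm_{i}": [3, 4] for i in range(1, 6)},
    **{f"m_alien_{i}": [2, 1] for i in range(1, 6)},
    # Rosenberg SES items (rated 1-4); items 6-10 assumed negatively worded here
    **{f"ses_{i}": [3, 4] for i in range(1, 11)},
})

# IPPA composite: (trust + communication) - alienation
trust = df[[f"m_trust_{i}" for i in range(1, 6)]].sum(axis=1)
comm = df[[f"m_comm_{i}" for i in range(1, 6)]].sum(axis=1)
alien = df[[f"m_alien_{i}" for i in range(1, 6)]].sum(axis=1)
df["mother_attachment"] = trust + comm - alien

# SES composite: reverse-score negatively worded items (5 - raw on a 1-4 scale)
neg_items = [f"ses_{i}" for i in range(6, 11)]
df[neg_items] = 5 - df[neg_items]
df["self_esteem"] = df[[f"ses_{i}" for i in range(1, 11)]].sum(axis=1)
print(df[["mother_attachment", "self_esteem"]])
```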
Children's Depressive Symptoms (Child Reported)

Children's self-reported depressive symptoms were assessed using the Center for Epidemiological Studies Depression Scale for Children (CES-DC; [55]). Researchers have validated the CES-DC for the assessment of depressive symptoms in Chinese children [56]. The CES-DC consists of 20 items (e.g., "I was bothered by things that usually don't bother me"), each rated on a 4-point scale (1 = not at all to 4 = a lot). Summed scores were used as a measure of child depressive symptoms, with higher scores indicating more severe depressive symptoms. The Cronbach's α in this study was 0.86.

Aggressive Behavior (Teacher Reported)

Child aggression was measured using the "Aggressive with Peers" subscale of the Child Behavior Scale (CBS; [57]). A previous study has validated this scale for teacher-rated child aggressive behavior [58]. Class master teachers rated each child's aggressive behavior toward peers using a 5-point scale (1 = never to 5 = always) on 7 items (e.g., "This child pushes or shoves other children"). Scores were summed to create a composite score, with a higher score indicating more aggressive behavior toward peers in school. The Cronbach's α was 0.96 in the current study.

Data Analysis

Preliminary data analyses were performed using SPSS 20.0 and Mplus 7.0. First, given that the family cohesion/adaptability data were collected from either a father or a mother, a multiple group analysis was implemented in Mplus 7.0 [59] to examine possible differences by reporter gender. An unconstrained model that allowed the 13 path estimates (i.e., paths a-m; see Figure 1) to vary between the father-report and mother-report groups was estimated. This model fit the data well, χ²(42) = 41.30, CFI = 1.00, RMSEA = 0.00, SRMR = 0.05. Next, a constrained model in which the parameter estimates of the 13 paths were set equal across the father-report and mother-report groups was estimated. If this constrained model resulted in a statistically significant decrement in model fit (χ²) in comparison with the unconstrained model, then the pattern of associations could be assumed to vary across the father-report and mother-report groups. This model also fit the data well, χ²(55) = 54.65, CFI = 1.00, RMSEA = 0.00, SRMR = 0.06. Results indicated that the model constraining the 13 path coefficients to be equal across the two groups did not fit significantly worse than the model with these 13 path coefficients freely estimated (Δχ² = 13.35, Δdf = 13, p = 0.42), suggesting that the model did not differ across reporter gender. Therefore, our study did not distinguish reporters in the model. Then, descriptive statistics were computed using SPSS 20.0 for all demographic variables (i.e., child gender, child age, parents' years of education, and family monthly income) and observed variables (i.e., family cohesion/adaptability, mother-child and father-child attachment, child self-esteem, depressive symptoms, and aggressive behavior). After that, simple Pearson correlations between the observed and demographic variables were computed in order to understand the relations between them. Primary analyses were conducted with a structural equation model (SEM) in Mplus 7.0. The proposed multiple mediation model (see Figure 1), with covariates (i.e., child gender, child age, parents' years of education, and family monthly income), was examined to test for possible mediation effects. The fit indices used to evaluate the model were the chi-square statistic (χ²), comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Model fit was considered acceptable when the χ² value was not significant, and CFI > 0.95, TLI > 0.95, RMSEA < 0.08, and SRMR < 0.08 [60]. A bootstrapping procedure with 5000 iterations was used to test the indirect effects, in which a 95% confidence interval (CI) excluding zero indicates a significant mediating pathway. Missing data were addressed using Mplus's default full information maximum likelihood (FIML) method [61].
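The analyses above were run in Mplus. As a rough open-source approximation (not the authors' code), the path part of the mediation model could be specified with the semopy package in Python, assuming hypothetical variable names and input file; semopy's estimator and fit statistics are not guaranteed to reproduce the Mplus results:

```python
import pandas as pd
from semopy import Model, calc_stats

# lavaan-style description of the multiple mediation model (paths only;
# variable names are hypothetical placeholders for the observed composites,
# and covariates are omitted for brevity)
desc = """
mother_attach ~ cohesion_adapt
father_attach ~ cohesion_adapt
self_esteem ~ mother_attach + father_attach + cohesion_adapt
depression ~ cohesion_adapt + mother_attach + father_attach + self_esteem
aggression ~ cohesion_adapt + mother_attach + father_attach + self_esteem
"""

df = pd.read_csv("odd_sample.csv")   # hypothetical data file, one row per child
model = Model(desc)
model.fit(df)                        # maximum likelihood estimation
print(model.inspect())               # path estimates and p-values
print(calc_stats(model).T)           # fit indices (chi2, CFI, TLI, RMSEA, ...)

# Bootstrapped indirect effects (e.g., cohesion -> mother attachment ->
# self-esteem -> depression) could be approximated by resampling rows of df,
# refitting, and taking percentile intervals of the product of path estimates.
```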
Descriptive Statistics among All Variables of Interest

Descriptive characteristics and the correlations among the study variables are presented in Table 1. Family cohesion/adaptability, father-child attachment, mother-child attachment, and child self-esteem were all positively correlated with each other, in the hypothesized direction (ps < 0.01). Furthermore, family cohesion/adaptability, father-child attachment, mother-child attachment, and child self-esteem were negatively associated with child depression (ps < 0.01). However, only child self-esteem was significantly and negatively related to child aggressive behavior (p < 0.01). Additionally, the demographic variables (i.e., children's gender, paternal age, paternal education, maternal age, maternal education, and family monthly income) were related to several of the observed variables; thus, these demographic variables were included as covariates in later data analyses. Note. Children's gender was coded 1 for boy and 0 for girl. Parental education was measured on a 6-level categorical variable (1 = elementary school diploma, 6 = master's degree). Family monthly income was measured on a 5-level categorical variable (1 = 2000 Chinese Yuan or less, 5 = 30,000 Chinese Yuan or more). * p < 0.05, ** p < 0.01.

Additionally, child self-esteem partially mediated the link between mother-child attachment and child depression (β = −0.12, p < 0.01, 95% CI = [−0.20, −0.05]) and completely mediated the relationship between mother-child attachment and child aggressive behavior (β = −0.14). The model as a whole accounted for 54% of the variance in child depression, 30% of the variance in child self-esteem, 24% of the variance in child aggressive behavior, 7% of the variance in mother-child attachment, and 5% of the variance in father-child attachment, ranging from large to small.

Discussion

The present study aimed to examine the association of multilevel family factors with the emotional and behavioral problems of children with ODD. Our findings indicated that family cohesion/adaptability at the system level was indirectly related to emotional and behavioral problems via mother-child attachment at the dyadic level and child self-esteem at the individual level in sequence. This finding extended previous findings by demonstrating that the multilevel family model can also explain the effects of system, dyadic, and individual level family factors on the emotional and behavioral problems of children with ODD.
Regarding dyadic mother- and father-child attachment, we found that only mother-child attachment mediated the association between family cohesion/adaptability and child depression, while father-child attachment was not a significant mediator. These results suggested that mother-child attachment within Chinese families has a greater impact on child depressive symptom outcomes. Moreover, child self-esteem partially mediated the link between mother-child attachment and child depression and completely mediated the relationship between mother-child attachment and child aggressive behavior. This finding highlighted the importance of carefully considering the role of self-esteem, as an individual child characteristic, in the development of emotional and behavioral problems. Taken together, the findings of the present study provided unique insights into how multilevel family factors differently and uniquely relate to emotional and behavioral problems in children with ODD. Furthermore, the study's results could contribute to the development of educational guidance for families with children who have emotional and behavioral problems.

Our finding that family cohesion/adaptability, the system level factor, was indirectly linked to emotional and behavioral problems through mother-child attachment and child self-esteem was in line with previous findings [38,39] and consistent with person-context interaction theory and the multilevel family factors model. In previous studies conducted in Western cultures, family cohesion, a distal family factor, was found to be associated with child behavioral problems through more proximal factors, such as parent-child interactions [62]. From our findings, we could postulate that these findings generalize to families in Mainland China. Indeed, within a cohesive family environment there were more positive interactions between parents and children, which contributed to a higher quality of parent-child attachment and higher levels of child self-esteem. The higher quality of parent-child attachment and higher levels of child self-esteem appeared to protect children with ODD from further developing emotional and behavioral problems. The findings also validate the ancient Chinese proverb, "A harmonious family brings prosperity". Cohesion and adaptability within the family would facilitate parent-child attachment and child development, even among children with ODD. The findings indicated the urgent need to understand the emotional and behavioral problems of children with ODD within broader family contexts, instead of focusing solely on one family factor. In terms of clinical practice, the findings highlighted the significance of a positive family environment for the development of children with ODD.

As hypothesized, mother-child attachment, a dyadic level family factor, was closely associated with child development within the family context. Our study found that mother-child attachment mediated the relationship between family cohesion/adaptability and child depression. However, the mediating role of father-child attachment was not significant. Thus, mother-child attachment, unlike father-child attachment, was a significant mediator in the relationships between system level family factors and the individual outcomes of children with ODD [38]. This conclusion is consistent with the dominant hypothesis [24] and the concept of the traditional division of family roles [63].
Recently, as a result of social and cultural shifts, more fathers have progressively accepted greater family duties. However, under the traditional gender division of labor, mothers are still the primary caretakers, responsible for children's everyday lives and diverse socioemotional needs. This was particularly true for mothers of children with emotional and behavioral problems [48,63]. Fathers tend to take a secondary role in the family; societal expectations and gender norms push fathers to prioritize financial support. Due to this division of work, mother-child attachment was more crucial to the development of the child than father-child attachment. Additionally, this implies that father-child and mother-child attachments were differentially related to child outcomes [64,65]. Father-child attachment was closely related to the social development of the child, while mother-child attachment was mainly linked to the internal psychological outcomes of children, such as emotional problems [7]. These results all suggested that mother-child attachment is a pivotal dyadic level factor linked with child emotional development, specifically in families with children identified with ODD. Thus, it is important that future research and home-based educational guidance for children with ODD focus on family-related dyadic factors, such as mother-child attachment.

Results in the current study also indicated that child self-esteem mediated the spillover effect from mother-child attachment to the emotional and behavioral problems of children with ODD. This finding highlighted that child self-esteem, as an individual level factor, is an important pathway via which dyadic level factors exert their function on child development. In fact, according to the sociometer hypothesis [66], self-esteem is a sociometer involved in the maintenance of interpersonal relations. Moreover, Leary (1990) proposed that individual self-esteem is associated with the evaluations of others, given that the individual needs to be accepted in society. Children with ODD who experience lower levels of mother-child attachment might internalize negative perceptions of being rejected by their mothers, which might lead to lower self-esteem. Low self-esteem, in turn, can further exacerbate emotional and behavioral problems in children [41,67]. Conversely, a higher quality of mother-child attachment may contribute to a higher level of self-esteem in children, which can buffer against other emotional and behavioral problems. These findings point to the important role that child self-esteem plays in the relationship between dyadic level factors (i.e., mother-child attachment) and child psychological outcomes. Importantly, although children with ODD are more likely to exhibit emotional and behavioral problems, higher levels of self-esteem can promote the healthy development of children and decrease the occurrence of emotional and behavioral problems. Taken together, these findings can help inform and improve services for families of children with ODD and children with emotional or behavioral problems.

Limitations and Future Prospects

Our findings should be interpreted in light of several limitations. First, our study predominantly focused on the hierarchy of family factors at different levels and their effects on problem behaviors in children with ODD; the mutual linkages among them are not further elaborated on in this study.
Additionally, with a cross-sectional design, we could not infer causal relationships or examine reciprocal relationships. However, the interplay of multilevel family factors and the emotional and behavioral problems of children with ODD might initiate transactional feedback loops. Longitudinal research should examine the reciprocal relationships between multilevel family factors and child ODD symptoms. Second, the data collected in the present study were based on reports from parents, class head teachers, and children, which could have biased our results. Future studies should aim to reduce the bias caused by self-report methods and prioritize multi-informant ratings to better capture the heterogeneity of family dynamics. A further limitation is that we did not eliminate the influence of the different survey methods utilized in Grades 1-2 and Grades 3-5. To properly reflect the developmental outcomes of young children, future research should embrace more objective approaches. Fourth, the current study only examined parent-child attachment at the dyadic level and child self-esteem at the individual level. Observing both parent-child relationships and marital relationships at the dyadic level, and both child and parent factors at the individual level, may offer a new and integrated perspective from which to explore these associations. Another concern is that caution is needed when generalizing the results of our study across cultures or age groups. As was previously stated, the core family values of Chinese culture are distinct from those of Western cultures [68]. Consequently, the specific links discovered between multilevel family factors and emotional and behavioral problems in children with ODD must be confirmed in Western countries. Implications Despite these limitations, the present study substantiated and enriched the multilevel family factors model [12,38] by considering mother-child and father-child attachments and child self-esteem concurrently. Findings in the current study contributed to our understanding of how multilevel family factors relate to the emotional and behavioral problems of children with ODD. For researchers and practitioners working with families that have children with ODD, attuning to the family environment (family cohesion and adaptability) and parent-child attachment (particularly mother-child attachment) may help decrease the severity of child emotional and behavioral problems. Furthermore, child self-esteem, as a vital self-protective factor, should be emphasized as a mechanism that can help prevent emotional and behavioral problems in children with ODD. Conclusions The current study examined the associations between multilevel family factors (i.e., family cohesion/adaptability at a system level, mother-child and father-child attachment at a dyadic level, and child self-esteem at an individual level) and emotional and behavioral problems among children with ODD in China. This study contributes to research on the development of children with ODD who have comorbid emotional and behavioral problems through an examination of a theory-based model proposed by Lin et al. (2022) [12]. The results revealed that a system level factor (family cohesion/adaptability) was associated with child emotional and behavioral problems indirectly through factors at the dyadic level (mother-child attachment) and the individual level (child self-esteem) in sequence.
Mother-child, but not father-child, attachment mediated the linkage between family cohesion/adaptability and the emotional problems of children with ODD. Moreover, child self-esteem mediated the association between mother-child attachment and child emotional and behavioral problems. These results underscored the significance of understanding the emotional and behavioral problems of children with ODD within the framework of the family and, more particularly, within the context of the multiple levels of family relationships. The research highlighted the need for practitioners to carefully consider the features of the family system and the unique relationships between multilevel family factors and child outcomes. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
Antiulcer and Blood-Boosting Activities of Feeds Supplemented with Trametes versicolor from Nigeria Antiulcer and blood-boosting activities of feeds supplemented with Trametes versicolor (Tv) in Wistar rats (Rattus norvegicus) were studied. Haematological studies and antiulcer biochemical analyses were carried out on the rats using standard methods. Data were presented as Mean ± SEM and analyzed using two-way ANOVA, with p ≤ 0.05 considered significantly different for all variables. Haematological parameters were not significantly different across all Tv treatments when compared with the control (CN). There were, however, significant differences in the values obtained for gastric ulcer inhibition, nitric oxide, mucin, sulfhydryl, and H+/K+-ATPase. Tv treatment groups also differed when compared with the ulcerated untreated control (CU). The pathological changes detected from histological studies on the stomach tissues showed inhibition of ulcers. Tv treatment groups demonstrated blood-boosting and antiulcer activity through the synergistic activities of mucin, H+/K+-ATPase activity, and an antioxidant mechanism. The implications of these observations are discussed. INTRODUCTION Ascomycetes and Basidiomycetes are major classes of higher fungi which form reproductive structures known as fruit bodies or basidiocarps [1]. Many higher fungi have been used in different countries of the world as sources of protein, dietary fibre, unsaturated lipids and different mineral elements [2][3][4]. Nigerian higher fungi have been reported to possess different medicinal properties [5][6][7][8][9][10][11]. The therapeutic value of fungi is directly linked to their phytochemicals and bioactive compounds [12,13]. These bioactive molecules can be extracted from edible, inedible and poisonous fungal species and characterized. In developing countries, especially Nigeria, the medicinal uses of fungi and the quest for new health and nutritional supplements from them have been pursued by scientists and indigenous people. Medicinal fungi have been used in the management of several diseases [14][15][16]. Problems related to the use of standard drugs are being addressed by advocacy for the exploitation and use of herbal products as alternatives. Different fungi have been used from ancient times by traditional healers from Asia, Africa, America and Europe. Traditional medical practitioners in South-Western Nigeria usually prepare hot water extracts of fungi with other medicinal plants, or they may use local gin for the extraction of these fungi [5,8]. Blood is the transport medium in the body of all mammals. Anything that affects blood usually affects the entire body in relation to growth, health, body maintenance, and reproduction. Nutritional factors usually influence the blood status of any animal [20,21]. Blood parameter assessment can therefore be used to determine the effects of foreign materials such as medicinal herbs. Gastric ulcer (GU) is an ulcer of the stomach, defined as sub-mucosal or deeper erosion of normal gastric mucosa. Peptic ulcer disease (PUD) is predominantly induced by Helicobacter pylori; non-steroidal anti-inflammatory medicines have been linked to increased secretion of gastric acids, and anxiety, smoking, spicy foods and dietary deficiencies are other contributing factors. Complications of peptic ulcer disease may include internal bleeding, gastro-intestinal blockage, perforation, and refractory peptic ulcer [21,22].
Peptic ulcer disease is significant worldwide, affecting 4% of the total population. There were 327,000 deaths from PUD in 1990 and 301,000 deaths in 2013 [22]. Every year, 4 million people throughout the world are affected by PUD. Conventional medicine manages ulcers with proton pump inhibitors (PPIs), H2 receptor antagonists, antacids, antibiotics, and mucosal protection agents [23]. Nevertheless, there are reports of negative effects and long-term recurrences with these medications. Consequently, people are exploring other alternatives, including natural remedies. Macrofungi from Nigeria have been reported to possess secondary metabolites, which has made them reservoirs of useful bioactive compounds with valuable therapeutic properties [1,12,16]. However, there is a dearth of information on the haematinic or antiulcer properties of Trametes versicolor. The purpose of this study was to examine the antiulcer and blood-boosting activities of Trametes versicolor from Ibadan, southwestern Nigeria. MATERIALS AND METHODS Fresh fruit bodies of wild Trametes versicolor were collected during the rainy season (August-September, 2017) from the University of Ibadan Botanical Gardens, Ibadan (7.3775°N, 3.9470°E) in Oyo State, South-West Nigeria. This preliminary identification was validated against the standard descriptions of Ostry et al. [24]. The fungal samples were air-dried, powdered, preserved for future use in an air-tight amber bottle, and refrigerated at 4°C. Rat Experiments Thirty-five Wistar rats (100-110 g; n = 7 per group) were divided into five groups: Group 1, un-ulcerated normal feed (CN); Group 2, ulcerated not treated (CU); Group 3, 20 mg/kg of cimetidine (Cm); Group 4, 20% w/w Tv; and Group 5, 40% w/w Tv, for 7 and 14 days respectively. Full haematological analysis was carried out on the rats at the end of each experimental period. The animals had free access to their normal feeds for the control experiments and were pretreated with Tv and cimetidine for 7 and 14 days respectively, with water ad libitum throughout the experiment. They were then fasted for 24 h prior to the administration of indomethacin and sacrificed after 4 hours. The stomach of each rat was excised, weighed, and graded for ulceration. Animal experiments were performed in line with the Experimental Animal Care and Use Guidelines of the National Institutes of Health (Pub No. 85-23, revised 1985). Biochemical and histological studies were carried out on the excised stomach tissues using standard methods. Determination of Full Blood Cell Count The methods of Dacie and Lewis [25] were used for the haematological analysis of blood. Macroscopic Assessment and Scoring of Ulcers The method of Inas et al. [26] was used for indomethacin-induced ulceration, while gastric ulceration was assessed using the established scoring technique of Elegbe and Bamigbose [27]. Histopathological Studies For the histological studies, the method of Elegbe and Bamigbose [27] was used. Biochemical Determination of Lipid Peroxidation, Nitric Oxide (NO), Sulfhydryl Content, Mucosal Hydrogen Peroxide (H2O2), Total Protein Concentration, Hydrogen/Potassium Anti-Pump Activities, and Mucin Content Lipid peroxidation was calculated by the method of Varshney and Kale [28], and the concentration of nitrite in the supernatant was measured as an indicator of NO production detected by the Griess reaction [17]. Determination of sulfhydryl levels and tissue hydrogen peroxide (H2O2) activity was carried out using the principles of Elegbe and Bamigbose [27].
The protein concentrations of the various samples were estimated using the Biuret method, as described by Elegbe and Bamigbose [27], with a slight modification; the determination of hydrogen/potassium anti-pump activities was carried out using the method of Ronner et al. [29], as updated by Bewaji et al. [30]. Determination of mucin content was carried out using the method of Bewaji et al. [30]. STATISTICAL ANALYSIS Data were expressed as Mean ± SEM and analyzed using two-way ANOVA, with p ≤ 0.05 considered significant. RESULTS The influence of Trametes versicolor supplemented diets on the haematological parameters of the rats is shown in Table 2. No observable differences in the haematological variables were noticed between 40Tv or 20Tv and CN on both days of treatment. The influence of Trametes versicolor supplemented diets on albumin, globulin, blood urea nitrogen, and creatinine in the rats is shown in Table 3. On both days of treatment, there were no significant effects on these serum biochemical variables with 20Tv and 40Tv when compared with CN. The influence of Trametes versicolor supplemented diets on ulcer score, ulcer index, and percentage ulcer inhibition in indomethacin-induced gastric ulceration is presented in Table 4. Both treatments of 20% w/w Trametes versicolor (20Tv) and 40% w/w Trametes versicolor (40Tv) significantly increased percentage ulcer inhibition by day 7 and day 14 when compared with CU. However, the highest percentage ulcer inhibition was observed after day 14 for both 20Tv and 40Tv. On day 7, only the 20Tv supplemented group increased significantly compared with CU. By day 14, the nitric oxide content of both 20Tv and 40Tv increased significantly compared with CU; however, a significant increase was only observed with the 40Tv treatment when comparing the 7 and 14 day treatments. On day 7, a significant increase (jjj, p < 0.001) was only observed with 40Tv, while a significant increase (kkk, p < 0.001) was observed with 20Tv on day 14 when compared with CU. Comparing day 7 and day 14, significant differences (zzz, p < 0.001) were observed with both 20Tv and 40Tv. Similarly, no significant differences between treatment days were noticed. Fig. 7: Influence of Trametes versicolor supplemented diets on hydrogen/potassium pump activities of indomethacin-induced ulcer in rats for 7 and 14 day exposure periods. On days 7 and 14, all the treatment groups except 40Tv on day 14 showed a significant decrease in H+/K+-ATPase pump activity compared with CU. Comparing the day 7 and day 14 treatments, no significant difference (z, p < 0.05) was observed except in the CN treatment groups. On days 7 and 14, a significant increase (jjj, p < 0.001) in mucin content was observed with 20Tv and 40Tv; comparing the day 7 and day 14 treatments, both 20Tv and 40Tv showed significant increases (zzz, p < 0.001) in mucin content on day 14. DISCUSSION From our results, Trametes versicolor supplemented diets do not affect erythropoiesis negatively. This observation is similar to the findings of Togun et al. [31], who reported that an increase in PCV in combination with a marginal increase in RBC indicates more successful erythropoiesis in experimental rabbits. The increase in WBC observed on day 7 could be due to the adjustment of the rats to the formulated feed, which was a foreign substance in their bodies, while the decrease observed over the 14 day treatment period reflects the immunomodulatory property of the supplemented feeds. There were no major differences in neutrophils, lymphocytes, and monocytes, further confirming these results.
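As a concrete illustration of the two-way ANOVA workflow described under Statistical Analysis above, the following Python sketch tests for treatment and day effects and their interaction. The column names and data values are hypothetical (the study's raw measurements are not reproduced here), and statsmodels is assumed to be available:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: two rats per treatment x day cell,
# with a measured variable (e.g., mucin content, arbitrary units).
df = pd.DataFrame({
    "mucin": [4.1, 4.0, 4.3, 4.4, 5.2, 5.0, 6.1, 6.3,
              6.0, 5.8, 7.0, 7.2, 3.2, 3.4, 3.1, 3.0],
    "treatment": ["CN"] * 4 + ["20Tv"] * 4 + ["40Tv"] * 4 + ["CU"] * 4,
    "day": [7, 7, 14, 14] * 4,
})

# Two-way ANOVA: main effects of treatment and day plus their interaction.
model = ols("mucin ~ C(treatment) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # effects with p <= 0.05 -> significant

# Mean +/- SEM per group, matching how the data are presented in the paper.
print(df.groupby(["treatment", "day"])["mucin"].agg(["mean", "sem"]))
```

With real data, post-hoc comparisons against CU (as reported with the letter markers in the tables) would follow the omnibus ANOVA.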
Analysis of renal and hepatic function is highly useful in screening for the toxicity of medicines and herbal extracts, as both organs are essential to an organism's survival [32]. Trametes versicolor supplemented diets showed no sign of toxicity in the serum biochemical parameters, indicating that hepatocyte function in rats is not affected by sub-chronic feed intake. The presence of bioactive compounds known to have antioxidant and anti-inflammatory activities may be linked to the antiulcer property observed in the percentage inhibition of the Tv supplemented diets [17]. This antiulcer property was further supported by the histological studies, which showed healing features such as the absence of observable lesions with the Tv treatments. The increase in total protein content observed may be due to the high protein content of mushrooms, which is mostly needed during inflammation for cell regeneration and repair. This supports the findings of Jonathan et al. [33], who reported that macrofungi are highly rich in protein. The test of oxidative activity, which used malondialdehyde as its marker, showed that there was no breakdown of the lipid stores of the cell membrane due to the treatments. It also indicates the capability of the treatments to prevent an increase in the generation of free radicals that might have aggravated the induced gastric ulcer. The increase in NO observed may have conferred antiulcer protection on the rats. Nitric oxide (NO) helps protect the integrity of the mucus membrane and stomach epithelium: it mediates gastric blood flow as a vasodilator, prevents the secretion of acids, and stimulates the production of mucus and bicarbonate. It thereby protects the gastrointestinal tract, which is useful in gastric ulcer healing [34]. The increase in sulfhydryl level suggests gastro-protective effects of Trametes versicolor supplemented diets, which are useful for the formation and maintenance of gastric mucus through the growth of disulfide bridges that limit the development of reactive oxygen species associated with tissue injury while maintaining gastric integrity [35]. The hydrogen peroxide assay was further used to assess the antioxidant properties of Trametes versicolor in the feed. The observed increase may be due to the presence of bioactive agents with antioxidant properties, which helped to confer antiulcer properties on the fungus. This is in agreement with the findings of Oyedemi et al. [36] that artificial and biological antioxidants are required to avoid the negative effects of unpaired radicals. The changes seen with the higher fungi supplemented feeds on hydrogen/potassium pump activities in the experimental rats could be attributed to the antiulcer activity demonstrated by the test fungus. Trametes versicolor could have acted as a gastric proton pump inhibitor. This is in agreement with the findings of Strand et al. [37], who reported that the primary goal of clinicians when managing peptic ulcer is treatment with proton pump inhibitor drugs in order to reduce gastric acid secretion. The increase in mucin content may have conferred potential antiulcer properties on the Trametes versicolor supplemented diets by helping to maintain homeostasis through the mucosal defense system [38].
From the results obtained, it can be concluded that the Trametes versicolor treatment groups demonstrated blood-boosting and antiulcer activity at both concentrations through the synergistic activities of mucin, H+/K+-ATPase activity, and an antioxidant mechanism. The results of this study suggest that diets enriched with Trametes versicolor do not produce toxic effects that might hinder their therapeutic use as herbal medicine for the treatment of blood deficiencies and ulcers.
Nanocompositional Electron Microscopic Analysis and Role of Grain Boundary Phase of Isotropically Oriented Nd-Fe-B Magnets Nanoanalytical TEM characterization in combination with finite element micromagnetic modelling clarifies the impact of grain misalignment and grain boundary nanocomposition on the coercive field and gives guidelines for improving coercivity in Nd-Fe-B based magnets. Nanoprobe electron energy loss spectroscopy measurements yielded an asymmetric composition profile of the Fe content across the grain boundary phase in isotropically oriented melt-spun magnets, showing an enrichment of iron up to 60 at% in the Nd-containing grain boundaries close to Nd2Fe14B grain surfaces parallel to the c-axis and a reduced iron content of about 35% close to grain surfaces perpendicular to the c-axis. Numerical micromagnetic simulations on isotropically oriented magnets, using realistic model structures derived from the TEM results, reveal a complex magnetization reversal starting at the grain boundary phase and show that the coercive field increases compared to directly coupled grains with no grain boundary phase, independently of the grain boundary thickness. This behaviour is contrary to that in aligned anisotropic magnets, where the coercive field decreases compared to directly coupled grains with increasing grain boundary thickness if the Js value is > 0.2 T, and where magnetization reversal and the expansion of reversed magnetic domains start primarily as a Bloch domain wall at grain boundaries at the prismatic planes parallel to the c-axis and secondarily as a Néel domain wall at the basal planes perpendicular to the c-axis. In summary, our study shows an increase of the coercive field in isotropically oriented Nd-Fe-B magnets for a GB layer thickness > 5 nm and an average ⟨Js⟩ value of the GB layer < 0.8 T compared to a magnet with perfectly aligned grains. Introduction The increasing demand for high-performance rare earth permanent magnets with a high coercive field and an energy density product suitable for large scale applications in wind turbines and electrically powered automotive devices has led to the development of heavy-rare-earth-lean/rare-earth-free Nd-Fe-B based magnets and to the optimization of the complex multiphase microstructure of the magnets [1]. The hard magnetic properties are controlled primarily by the size, shape, and misalignment of the hard magnetic grains and their distributions, and secondarily by the occurrence of other nonmagnetic and soft magnetic phases [2][3][4]. In addition, the coercive field also strongly depends on the intergranular grain boundary (GB) phases separating the hard magnetic grains [5,6]. The role of dopant elements and the thickness and magnetic properties of the GB phases have been studied extensively over the last 30 years [7,8]. Local changes of the exchange coupling between grains and the decrease of the anisotropy field and demagnetizing field at/near intergranular phases considerably reduce the overall coercive field. First-principles ab initio calculations claimed that even an antiparallel exchange coupling between a crystalline α-Fe phase and the prismatic {100} planes of Nd2Fe14B would be energetically favorable, while a positive exchange-coupling constant was predicted for the Nd2Fe14B (001)/α-Fe interface [9].
Advances in electron microscopic characterization technology have greatly improved the ability to quantify the real microstructures found in Nd-Fe-B magnets. These techniques, in combination with finite element micromagnetic modelling, are improving the understanding of magnetization reversal processes and coercivity mechanisms. Micromagnetic simulations give a deep insight into the mechanisms that cause magnetization reversal at external fields well below the anisotropy field [10]. Nowadays, new nanoanalytical electron microscopic techniques with atomic resolution allow the creation of precise microstructural models suitable for the numerical micromagnetic calculation of the demagnetization curve, including the coercive field value. A recent high resolution TEM/STEM investigation of the intergranular GB phase of a large grained, anisotropic sintered heavy-rare-earth-free Nd-Fe-B magnet with grain sizes up to several microns revealed a difference in composition for grain boundaries parallel (large Fe content) and perpendicular (low Fe content) to the alignment direction [11]. This combined TEM/STEM and micromagnetic study of the anisotropic nature of grain boundaries showed a decrease of the coercive field with an increasing thickness of the grain boundary layer. Two quite distinct methods are in commercial use for producing Nd-Fe-B magnets: the rapid-solidification technique of melt spinning and the traditional powder-metallurgy sintering approach. The present study compares the different microstructures of various melt-spun materials with isotropically oriented hard magnetic grains with grain sizes ranging from 20 nm to 100 nm. The melt-spinning procedure involves the ejection of a molten starting alloy through a crucible orifice onto the surface of a copper substrate disc rotating at high speed [12]. The microstructure and magnetic properties of melt-spun neodymium-iron-boron ribbons are sensitively dependent on the quench rate. The resulting hysteretic properties of an individual magnet material strongly depend on its nominal composition, microstructure, and processing parameters [13]. Melt-spun magnet materials have so far been widely used for bonded and hot-deformed magnets. Hot-pressed melt-spun nanocrystalline heavy-rare-earth-free Nd-Fe-B magnets are promising candidates as a low cost solution for applications that require thermal stability up to 175 °C-200 °C [14]. The aim of the present paper is to determine the influence of the grain size, orientation of grains, and nanocomposition of GBs on the coercive field and magnetization reversal behaviour by a combined TEM/STEM and micromagnetic study, with special emphasis on the nanoanalytical, high resolution EELS characterization of isotropically oriented GBs. The microstructural model based on the anisotropic compositional behaviour of GBs parallel and perpendicular to the easy axis of the grains, which is used for the numerical micromagnetic simulations, has been derived from the detailed nanoanalytical TEM/STEM analysis.
Materials In the present study we investigated the microstructure of three rapidly quenched Nd-Fe-B ribbons in a nanoanalytical TEM/STEM study; the ribbons were provided by the Magnequench Technology Center, Singapore. The isotropic RE-rich two-phase ribbon (MQU-F) with the nominal chemical composition (Pr,Nd)13.6Fe73.6Co6.6Ga0.6B5.6 [15] has a distinct 3 nm-6 nm thick RE-rich GB phase separating the isotropically oriented equiaxed and platelet shaped Nd-Fe-B grains. The isotropic fine grained ribbon (MQP-B+) with the nominal chemical composition Nd12.4Fe77.3Co5.2B5.2 [16] is enriched in "Fe + Co" and therefore possesses a 1 nm-3 nm thin "Fe + Co"-rich GB phase separating the isotropically oriented equiaxed Nd-Fe-B grains. In comparison, an isotropically oriented and large grained nanocomposite with additional soft magnetic α-Fe and Nb-containing granular phases and without a GB phase between the hard magnetic grains was investigated. Methods The nanoanalytical and structural investigations of the rapidly quenched Nd-Fe-B permanent magnet materials were carried out with an analytical field emission transmission electron microscope (TEM) (FEI Tecnai F20) at 200 kV, which is equipped with a silicon drift energy dispersive X-ray (EDX) detector, a Gatan GIF Tridiem image filter and electron energy loss spectrometer (EELS), and a high angle annular dark field (HAADF) detector. Conventional TEM sample preparation, including cutting, polishing, and ion milling in a Precision Ion Polishing System (PIPS) from Gatan, was conducted. The structural investigations were performed with Fast Fourier Transformation (FFT) of high resolution TEM/STEM (HRTEM) images and selected area electron diffraction (SAED). EELS experiments were conducted to accurately determine the relative chemical composition of the intergranular phases via the k-factor method. This method calculates the relative atomic percentage of an element (e.g., Nd) with respect to another element (e.g., Fe) from the ratio of their edge intensities in the EELS (or EDX) spectrum via the k-factor (e.g., k(Nd/Fe)), which was derived from the measurement of a standard specimen (e.g., a Nd2Fe14B single crystal). TEM specimens with a relative thickness t/λ < 0.7, where t is the absolute specimen thickness and λ the mean free path in the specimen, were used in these experiments. Firstly, the k-factors of Pr/Fe and Nd/Fe were calculated from EELS spectra of single crystalline Pr2Fe14B and Nd2Fe14B standards. Secondly, the background in the EELS spectra was fitted with a power-law function and subtracted, which yielded the edge intensities of the elements. Thirdly, the relative atomic composition was calculated from the edge intensities via the k-factors. The determination of the relative chemical composition via the k-factor method is accurate for t/λ < 1.0 with a relative error of ±5% [17]. An optimized background model was used to measure the Fe-L2,3 ionization edge due to its close vicinity to the F-K edge, and the Nd-M4,5 ionization edge due to its close vicinity to the Pr-M4,5 edge [18]. To avoid the development of an oxidized layer on the surface of the TEM specimen, careful precautions were taken. The influence of electron beam broadening and of the tilt of the GBs with respect to the incident electron beam on the measured chemical composition of 2 nm-6 nm thin GBs, as described in our previous publication [11], was taken into account. The higher yield of inelastic scattering events in EELS with respect to EDX [19] leads to a shorter acquisition time for each spectrum in a line scan. This is an advantage especially in the chemical analysis of thin GBs in thin (<50 nm) TEM specimens.
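As an illustration of the k-factor quantification step described above, the sketch below (Python; the intensity values and the helper function name are hypothetical, not taken from the paper) converts background-subtracted edge intensities into relative atomic percentages:

```python
import numpy as np

def relative_composition(i_nd, i_fe, k_nd_fe):
    """Relative at% of Nd and Fe from background-subtracted EELS edge
    intensities, using a k-factor measured on a Nd2Fe14B standard:
        c_Nd / c_Fe = k(Nd/Fe) * I_Nd / I_Fe
    """
    ratio = k_nd_fe * i_nd / i_fe      # concentration ratio c_Nd / c_Fe
    c_fe = 100.0 / (1.0 + ratio)       # normalize so that c_Nd + c_Fe = 100
    return 100.0 - c_fe, c_fe

# Hypothetical k-factor from the Nd2Fe14B standard: the true atomic ratio
# there is 2/14, so k = (2/14) / (I_Nd/I_Fe)_standard.
i_nd_std, i_fe_std = 1.8e4, 9.1e4      # made-up standard edge intensities
k_nd_fe = (2.0 / 14.0) / (i_nd_std / i_fe_std)

# Made-up line-scan intensities: grain interior, GB center, grain interior.
i_nd = np.array([2.0e3, 6.5e3, 2.1e3])
i_fe = np.array([1.4e4, 6.0e3, 1.5e4])
for nd, fe in zip(i_nd, i_fe):
    print("Nd %.1f at%%, Fe %.1f at%%" % relative_composition(nd, fe, k_nd_fe))
```

The middle point of the hypothetical scan comes out RE-rich (roughly Nd 44 at%), which is the kind of profile the EELS line scans across the RE-rich GB phase are designed to resolve.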
The finite element software package FEMME, which is a hybrid finite element/boundary element method code, was used for the numerical micromagnetic simulations [20]. At each point of the finite element mesh the Landau-Lifshitz-Gilbert equation is solved [21]. Besides the intrinsic magnetic properties, namely the exchange constant A, the saturation polarization Js, and the uniaxial magnetocrystalline anisotropy constant K1, the direction of the easy axis (direction of K1) of a volume of a phase, which can be set with the polar angle θ and the azimuthal angle φ, is also an input parameter for the simulation. K1 was set to zero in the GBs, since it is expected to have a negligibly small or zero value. The long range demagnetizing field and the direct exchange coupling between neighbouring atomic moments in the hard magnetic grains and soft magnetic grain boundary layers strongly influence the magnetization reversal. Besides the exchange and demagnetizing fields, the magnetocrystalline anisotropy and the misorientation of the individual grains also contribute to the resulting magnetization reversal and coercivity [10]. Realistic finite element granular structures based on the TEM investigations of melt-spun Nd-Fe-B magnets have been generated using the Voronoi algorithm [22]. This algorithm creates a unique volume decomposition based on a set of seeding points, similar to the Wigner-Seitz cell construction. We used the voro++ code [23] to create a Voronoi structure of equiaxed grains. The output from voro++ acts as an input for a Salome [24] script that creates a finite element discretization (mesh) of the granular structure. Two finite element model structures were created, one with directly coupled grains and one with a grain boundary phase with an approximate thickness of 10% of the grain size (Figures 1 and 2). The distribution of the easy axes of an isotropically oriented magnet is equal to the random distribution of points on a half sphere, with the azimuthal angle calculated as φ = 2π·u and the polar angle as θ = cos⁻¹(v), where u and v are random variates between 0 and 1. This results in an average misorientation angle ⟨ψ0⟩ = 60° and a projection of the magnetization parallel to the external field of 0.5 [25,26]; a short numerical check of this sampling scheme is sketched below. For a clear distinction between GBs parallel and perpendicular to the external field and the c-axis of the adjacent grains, a simple two-grain model structure with an edge length of 40 nm was created and meshed with the software package GiD version 12.0.4 [27] (Figure 3). Two Nd2Fe14B grains are separated by a GB phase consisting of two equally thick GB volumes with a total GB thickness of 2, 4, 5, 6, or 8 nm. All model structures were discretized with a 0.5 nm-2.5 nm mesh size, where the mesh tessellation was chosen to ensure that the smallest GB volume has at least one central node surrounded by nearest neighbours corresponding to GB material.
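A minimal numerical check of this easy-axis sampling (plain Python/NumPy; this is an independent sketch, not part of the FEMME workflow) confirms the stated mean field-parallel projection of 0.5, whose inverse cosine corresponds to the quoted 60° average misorientation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u = rng.random(n)
v = rng.random(n)

phi = 2.0 * np.pi * u    # azimuthal angle, uniform in [0, 2*pi)
theta = np.arccos(v)     # polar angle; area-uniform points on the half sphere

# z-component of the unit easy-axis vector (z = external field direction).
proj = np.cos(theta)

mean_proj = proj.mean()
print("mean projection <cos(theta)> = %.3f" % mean_proj)  # ~0.500
print("arccos of mean projection    = %.1f deg"
      % np.degrees(np.arccos(mean_proj)))                  # ~60 deg
```

Sampling cos θ uniformly (rather than θ itself) is what makes the points area-uniform on the half sphere, which is the appropriate model for a perfectly isotropic grain ensemble.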
Isotropic RE-Rich Two-Phase Melt-Spun Ribbon (MQU-F). The polycrystalline microstructure of a rapidly quenched MQU-F ribbon with isotropically oriented c-axes of the hard magnetic Nd-Fe-B grains, with sizes ranging from 20 nm to over 100 nm, is shown in the TEM bright field (BF) and HAADF images of Figure 4. The contrast of the TEM-BF image originates from the combination of orientation/diffraction contrast and absorption contrast, which depends on the thickness and average density of the TEM specimen, leading to the bright contrast of the GB phase. A HAADF image is generated in the STEM mode, and the origin of the image contrast depends on the chosen camera length. At a camera length (cl) below ≈ 80 mm the intensity distribution in the HAADF image mainly reflects the average atomic number Z^1.65 of the probed volume (Z-contrast) and the thickness of the specimen [28]. The GB phase shows a double contrast, with a dark interface to the adjacent grains and a bright center, in the HAADF image in Figure 4(b). The HAADF intensity profile along the EELS-1 line scan and the Z^1.65 dependence (Z-contrast) are shown in the insert in Figure 4(b). The Z-contrast was calculated from the atomic percentages of the elements measured with EELS (Figure 7(a)). The dark interface between the grains and the GB is enriched in "Fe + Co" and contains less "Pr + Nd", leading to a lower average atomic number. The c-axis of elongated grains was always found to be perpendicular to the longer edge of the grains. The hard magnetic Nd-Fe-B grains are separated by a 3 nm-6 nm thick rare earth (RE)-rich GB phase and, near GB junctions, by the cubic c-(Pr,Nd)2O3 phase, which has also previously been reported in the literature [2,7,11,[29][30][31][32]. The weakly paramagnetic c-(Pr,Nd)2O3 phase has only a negligible influence on the magnetization reversal compared to the soft ferromagnetic GB phases. Dopants like Al, Ga, and Cu influence the liquid phase during sintering [3]. Ga atoms were dissolved in the hard magnetic grains and GBs, partially replacing Fe atoms during rapid quenching, since their amount is too low to form separate phases. The amorphous oxygen-containing RE-rich GB phase, shown in the HRTEM image in Figure 5, has an approximate composition of (Pr,Nd)41(Fe,Co)49O6F4. The RE/Fe ratio is in agreement with the composition Nd48Fe48Cu4 reported by Sasaki et al. [33]. A combined STEM and three-dimensional atom probe tomography (3D-AP) study of sintered Nd-Fe-B magnets reported a chemical composition of the Nd-enriched amorphous GB phase of Nd30Fe45Cu24.1B0.9 [34]. Sepehri-Amin et al. [35] produced a ferromagnetic Nd30Fe66B3Cu1 thin film whose chemical composition was derived from a laser assisted 3D-AP investigation of GB phases of sintered Nd-Fe-B magnets. Woodcock et al. [36] reported an amorphous oxide-containing RE-rich GB phase, investigated with STEM methods, in a hot deformed sintered magnet with a high energy product. Another 3D-AP study [33] of a sintered Nd-Fe-B magnet reported a crystalline GB with a Nd content of 55 at%. A crystalline, 5 nm-10 nm thick, Cu-enriched cubic c-Nd2O3 GB phase in a Nd12.0Dy2.7Fe76.3Cu0.4B6.0M2.6 (M = Al, Co, and Nb) sintered Nd-Fe-B magnet was reported by Kim et al. [38]. Sasaki et al. [37] reported a crystalline GB phase with a RE content of 60 at%, and a crystalline Nd-enriched, Ga-containing GB phase in Nd-Fe-B magnets subjected to a hydrogenation-disproportionation-desorption-recombination process was reported in a 3D-AP study [39].
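To make the Z^1.65 HAADF contrast estimate discussed above concrete, the short sketch below compares the composition-weighted mean Z^1.65 of the RE-rich GB phase with that of the Nd2Fe14B matrix. The compositions are taken from the text; the even Pr/Nd and Fe/Co splits are an assumption for illustration, and the atomic numbers are standard values:

```python
# Atomic numbers of the relevant elements.
Z = {"Pr": 59, "Nd": 60, "Fe": 26, "Co": 27, "O": 8, "F": 9, "B": 5}

def mean_z_contrast(composition, exponent=1.65):
    """Composition-weighted mean Z**1.65 (fractions are normalized)."""
    total = sum(composition.values())
    return sum(f / total * Z[el] ** exponent for el, f in composition.items())

# GB phase (Pr,Nd)41(Fe,Co)49O6F4, splitting (Pr,Nd) and (Fe,Co) evenly
# for simplicity -- an assumption; the exact splits are not given.
gb = {"Pr": 20.5, "Nd": 20.5, "Fe": 24.5, "Co": 24.5, "O": 6, "F": 4}
# Nd2Fe14B matrix (2-14-1 stoichiometry).
matrix = {"Nd": 2, "Fe": 14, "B": 1}

print("GB     <Z^1.65> = %.0f" % mean_z_contrast(gb))      # ~460
print("matrix <Z^1.65> = %.0f" % mean_z_contrast(matrix))  # ~280
```

The RE-rich GB center comes out with a substantially higher mean Z^1.65 than the 2-14-1 matrix, consistent with its bright appearance in the HAADF image, while the Fe-enriched interfaces fall below it, giving the observed double contrast.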
In a previous study [11] we have shown that in an aligned sintered magnet the GBs perpendicular to the alignment direction of the magnet have a higher RE content (up to 60 at%) than the GBs parallel to the alignment direction (RE content below 30 at%). GBs with an intermediate misorientation to the alignment direction show a chemical composition corresponding to an average of the two. In sintered anisotropic magnets, pure parallel and perpendicular GBs are common, but in melt-spun isotropic magnet materials the GBs are in general a mix of the two types, due to the strong misalignment of the neighbouring grains. The EELS-1 line scan starts from a 2-14-1 grain into a GB whose surface normal is parallel to the c-axis, resulting in a strong gradient of the chemical composition, and continues from that GB into a grain with approximately 45° misorientation of the c-axis with respect to the surface normal of the GB (Figures 4(b) and 7(a)); this correlates with a gradual change of the chemical composition. The EELS-2 line scan starts in a grain whose c-axis is oriented perpendicular to the surface normal of the GB, resulting in a slow change in chemical composition (Figures 6 and 7(b)); since the c-axis of the second grain is oriented parallel to the surface normal of the GB, the change in chemical composition is faster there. The faster change in chemical composition at a GB whose surface normal is parallel to the c-axis, relative to one whose normal is perpendicular to the c-axis, is shown in the EELS-3 line scan (Figures 6 and 7(c)). The average "Fe + Co" concentration of the GB phase in the investigated MQU-F ribbon is 55 at%, if only the "Fe + Co" and "Pr + Nd" elements are considered. According to the magnetic phase diagram of Nd100−xFex recently published by Sakuma et al. [40], we assumed for the GB phase a magnetic saturation polarization Js of 0.43 T and calculated an exchange stiffness constant A of 1.0 pJ/m. The relation A ∝ Js² between Js and the exchange constant was used, as suggested by Kronmüller and Fähnle [41]. Using the Voronoi model structure of isotropically oriented Nd2Fe14B grains (Figure 1) with an average grain size of 50 nm and a GB phase with a thickness of 4 nm-6 nm (Figures 5 and 6), we calculated the demagnetization curves from the numerical finite element micromagnetic simulations as a function of the coupling between the grains and the degree of misorientation of the grains. Figure 8 shows a high accordance of the coercive field between the measured value and the simulation with randomly misoriented grains. It should be noted that for the simulated demagnetization curve (sm-GB_60°) the remanence is underestimated in the simulation with a perfectly isotropic distribution of the c-axes (⟨ψ0⟩ ≈ 60°). In addition, Figure 8 shows that the simulations for directly coupled Nd2Fe14B grains (no GB phase) underestimate the coercive field by 1.5 T (⟨ψ0⟩ ≈ 60°). The simulation with a smaller degree of misalignment of the hard magnetic grains (⟨ψ0⟩ ≈ 45°) reveals a significant increase of the coercive field and remanence with respect to the perfectly isotropically oriented case (⟨ψ0⟩ ≈ 60°). This is in agreement with the Stoner-Wohlfarth model of noninteracting single-domain particles [26], where the coercive field increases by ≈ 5% of the anisotropy field, which corresponds to ≈ 0.4 T in Nd2Fe14B, if ⟨ψ0⟩ is reduced from 60° to 45°. The reduction of the coercive field with a rising value of ⟨ψ0⟩ is attenuated in the simulations with a ferromagnetic GB phase. The higher remanence of the simulation with ⟨ψ0⟩ ≈ 45° with respect to the simulation with ⟨ψ0⟩ ≈ 60° is explained by the higher value of the component of the polarization parallel to the applied field direction (z-direction).
Isotropic Fine Grained Melt-Spun Ribbon (MQP-B+). The small grained microstructure of the MQP-B+ sample is shown in the TEM-BF image of Figure 9(a). The isotropic orientation of the c-axes of the Nd-Fe-B grains, with grain sizes ranging from 15 nm to 50 nm, is displayed in the medium angle annular dark field (MAADF) image of Figure 9(b), which is generated at a higher camera length (cl = 970 mm) compared to the HAADF image. The MAADF contrast generation is similar to that of a TEM-BF image. The insert in Figure 9(a) shows an EELS line scan across a 3 nm thick "Pr + Nd"-enriched GB phase. Under the assumption that all boron is bound in the Nd2(Fe,Co)14B phase, the chemical composition of the intergranular GB phase has been calculated from the nominal composition Nd12.4(Fe,Co)82.5B5.2 to be Nd17(Fe,Co)83; this corresponds to 12 at% of the total composition. With the approximation of 30 nm large rhombic dodecahedron shaped grains separated by a 2 nm-3 nm thick GB phase, the volume fraction of the GB phase is 21% (a quick numerical check of this estimate is sketched below). The chemical composition of the GB measured by EELS is Nd20(Fe,Co)77O3. These results are in good agreement with experiments with an Auger microprobe spectrometer [42]. The micromagnetic simulations were carried out with the Voronoi model structure with isotropically oriented grains (Figure 1), with an average grain size of 35 nm, a soft magnetic GB phase with a thickness of 2 nm-4 nm, and average values of Js = 1.1 T and A = 6.54 pJ/m, in the same way as described for the MQU-F sample. The simulated coercive field value is in good agreement with the measured value (Figure 10). Due to the high Js value of the GB, the coercive field value (sm-GB) is only slightly increased with respect to that of the simulation with directly coupled Nd2Fe14B grains (no-GB). Isotropic Large Grained Nanocomposite with α-Fe and Nb-Containing Granular Phases. The large grained microstructure of the exchange coupled nanocomposite with isotropically oriented Nd-Fe-B grains and grain sizes ranging from 30 nm to 150 nm is shown in the TEM-BF image of Figure 11(a). The insert in Figure 11(a) is an EELS line scan across a GB between two Nd2Fe14B grains with no detectable intergranular GB phase. Besides the hard magnetic 2-14-1 phase, the soft ferromagnetic α-Fe phase and the weakly antiferromagnetic Fe2Nb phase (TN ≈ 270 K) [43] are shown in the HRTEM image in Figure 11(b). A large area EDX mapping in the HAADF image in Figures 12(b)-12(e) was used to determine the areal fractions of the identified granular phases (Figure 12). Besides the α-Fe phase, another soft magnetic Nb6Fe76B18 phase (Js = 1.41 T, Hc = 2.8 mT), formed by rapid quenching [44], was identified. Table 1 summarizes the lattice parameters, space groups, and prototypes of the analyzed phases, which were used to identify the phases in the HRTEM images. The bright areas in the Fe-K map (Figure 12(c)) correspond to the α-Fe phase. The Fe2Nb phase is located at the high intensities of the Nb-K map (Figure 12(d)) and the Nb6Fe76B18 phase at the duller yellow regions. The location of the 2-14-1 phase is clearly visible in the bright areas of the Nd-L map (Figure 12(e)).
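The 21% GB volume fraction quoted above can be checked with a simple core-shell estimate (a rough sketch; it idealizes each space-filling grain as a uniformly scaled polyhedron carrying half the GB thickness on every face, which is an assumption rather than the authors' exact geometry):

```python
def gb_volume_fraction(grain_size_nm, gb_thickness_nm):
    """Volume fraction of the GB phase for space-filling polyhedral grains.

    A grain of outer size d contributes a hard-magnetic core scaled by
    (d - t)/d in every direction, where t is the full GB thickness shared
    between two neighbouring grains (t/2 per grain face).
    """
    core = ((grain_size_nm - gb_thickness_nm) / grain_size_nm) ** 3
    return 1.0 - core

for t in (2.0, 2.5, 3.0):
    print("t = %.1f nm -> GB fraction = %.0f%%"
          % (t, 100 * gb_volume_fraction(30.0, t)))
# ~19-27% for t = 2-3 nm, bracketing the 21% quoted in the text.
```

The estimate is insensitive to the exact grain shape because any space-filling tessellation scales the same way, which is why the simple cube-law bracket lands on the reported value.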
A Voronoi model structure with 29 directly coupled grains (Figure 2) with an average grain size of 60 nm was used to simulate the hysteretic properties. Corresponding to the analyzed volume distribution of the phases, we assumed 21 (72%) Nd2Fe14B grains, 4 (14%) α-Fe grains, and 4 (14%) Nb6Fe76B18 grains. The magnetic properties of the phases are summarized in Table 2. All K1 values were set to zero except in the hard magnetic Nd2Fe14B phase. The measured demagnetization curve and the simulated curves of directly coupled grains with an average grain misorientation of 45° and 60° are shown in Figure 13. For the realistic phase distribution, the calculated coercive field is slightly underestimated in the simulation compared to the measured value. One reason for this discrepancy is the relatively small sample area over which the areal distribution was acquired, relative to the whole ribbon volume. A higher quality of the random distribution of the granular phases would be achieved in a model with a larger number of grains. The model with 29 directly coupled Nd2Fe14B grains significantly overestimates both the coercive field and the remanence. The strong decrease of the coercive field in the model structure with the realistic assumption of soft magnetic grains, compared to the case of only hard magnetic Nd2Fe14B grains, was also reported in a detailed micromagnetic study of a Nd-Fe-B magnet with soft magnetic granular phases [45]. Micromagnetic Simulations of the Switching Field of Randomly Oriented Grains. The orientation of the grain boundaries of adjacent grains, and their composition close to the grain surfaces, with respect to the alignment direction of the magnet and the external field direction influence the resulting magnetic switching field and coercive field, respectively. Using the two-grain (2-G) model structure of Figure 3, we compare in Figure 14 three different configurations which can occur in anisotropically and isotropically oriented magnets. The first and second cases in Figure 14 show a pure x-GB (c-axis perpendicular to the GB plane) and a pure y-GB (c-axis parallel to the GB plane), as commonly found in anisotropic aligned sintered Nd-Fe-B magnets. The external field is parallel to the [001] direction in both cases. The third case shows an x-GB facing the lower grain and a y-GB facing the upper grain, with Hext parallel to [111], as typically found in isotropically oriented melt-spun Nd-Fe-B magnets. The Js values for the x- and y-GBs were calculated from the chemical compositions obtained from TEM/EELS measurements of GBs in anisotropic sintered Nd-Fe-B magnets [11]. The measured "Fe + Co" concentrations of the GBs in melt-spun magnets (Figures 7 and 9(a)) and the corresponding Js and A values are summarized in Table 3.
The micromagnetic simulations show that the switching field Hsw depends on both the GB thickness and the Js value of the GB layer (Figure 15(a)). (The Néel temperature of the weakly antiferromagnetic Fe2Nb phase is ≈ 270 K, and therefore we assumed nonmagnetic properties for the simulation at room temperature.) For a small Js value of the GB (<0.2 T), Hsw slightly increases with rising GB thickness (x-GB). For a high Js value of the GB (1.0 T), Hsw is significantly lower with rising GB thickness (y-GB). In both cases the external field is parallel to the [001] direction. This behaviour is typical for anisotropic magnets with perfectly aligned grains. In the isotropic case (xy-GB), with Hext ‖ [111], the switching field value slightly decreases with rising GB thickness (Figure 15(a)). For a GB thickness > 5 nm the anisotropic y-GB (Hext ‖ [001]) has a lower Hsw than the isotropic xy-GB (Hext ‖ [111]). This is an explanation for the trend of higher Hsw values of magnets with a higher misorientation degree, which contradicts the results formulated by Stoner and Wohlfarth [26] for noninteracting grains or particles but agrees with experimental results [52] and previous simulations [11]. In comparison, the dependence of the switching field of a 2-G model structure with averaged homogeneous magnetic properties in the GB layer (Js = 0.43 T, A = 1.00 pJ/m and Js = 1.1 T, A = 6.54 pJ/m, respectively) is shown in Figure 15(a) (dotted lines). With a low Js value (0.43 T) of the GB layer and Hext ‖ [111], Hsw lies above the value of the anisotropic y-GB (Hext ‖ [001]). The switching field value of the averaged GB (Hext ‖ [111]) with a Js of 1.10 T is below the Hsw of the y-GB (Hext ‖ [001]) for all GB thicknesses. At a GB thickness of about 4 nm the y-GB and the homogeneous GB with a Js of 0.43 T have approximately the same switching field values. Therefore it is justified to use a single phased GB with homogeneous magnetic properties. During the magnetization reversal processes, different types of domain walls (DWs), such as Bloch and Néel DWs, are formed in perfectly aligned magnets, depending on the orientation of the GB with respect to the c-axis of the adjacent grains and the direction of the external field. The calculated demagnetization curves for the pure x-GB with Hext ‖ [001] and Js = 0.15 T (Table 3), and for the pure y-GB with Hext ‖ [001] and Js = 1.0 T, with a GB thickness of 8 nm, are shown in Figure 16. As a result of the large difference in the Js and A values, the coercive field for the x- and y-GB varies from 2.7 T to 6.5 T. The x-GB shows a 12% higher coercive field if the magnetic properties of the x- and y-GB are the same. This difference originates from the different total energies for the formation of a Bloch domain wall (y-GB) and a Néel domain wall (x-GB), the latter with an additional stray field contribution. The magnetization of the x-GB rotates into the direction perpendicular to that of the adjacent grains at a relatively small external field of 0.95 T (Figure 17, state A). Two Néel DWs are then formed, whereby the magnetization within the center of the GB is antiparallel to that of the adjacent grains, up to a high external field value of 6.45 T (Figure 17, state B). The high value of the necessary external field originates from the large formation energy of a Néel DW, due to the strong stray field occurring along the whole interface between the GB and the neighbouring grains.
The magnetization reversal state C is typical for a Bloch DW nucleated in the y-GB (Figure 18, state C). Since the magnetization vector has the freedom to rotate within the GB plane with a relatively low activation energy, the y-GB switches at a lower external field of 3.78 T and finally forms two Bloch DWs at the interfaces with the hard magnetic grains (Figure 18, state D). The formation energy of the stray-field-free Bloch DWs is smaller than that of the Néel DWs. In general, DWs are complex magnetization transitions between neighbouring magnetic domains. Their energy, thickness, and shape depend on various parameters, such as the intrinsic magnetic properties and the shape of the magnetic material. The complex structure of DWs can only be calculated numerically by means of micromagnetic simulations [53]. The saturation polarization Js and the thickness of the GB layer were varied using the isotropic Voronoi model structure of Figure 1, in order to verify the results of the 2-G model structure of Figure 15 with a realistic model structure with averaged homogeneous magnetic properties. At small values of Js and A, the GB magnetically decouples the isotropically oriented hard magnetic grains, leading to an increase of the coercive field with respect to directly coupled Nd2Fe14B grains (Figure 19(a)). This behaviour is strongly pronounced in the MQU-F magnet material and also present in the MQP-B+ ribbon. As Js and A of the GB phase rise, the coercive field decreases linearly, due to the stronger coupling of the hard magnetic grains and the higher probability of nucleation of a reversed magnetic domain in the GB. Simultaneously the remanence increases because of the stronger remanence enhancement effect of the coupled Nd-Fe-B grains [54]. At a GB thickness of 5 nm and a grain size of 50 nm, the coercive fields for the model structures with and without a GB phase are equal at Js ≈ 1.40 T (A = 10.60 pJ/m), and equal at Js ≈ 1.34 T (A = 9.71 pJ/m) for a GB thickness of 3 nm and a grain size of 30 nm (Figure 19(a)). A further increase in Js and A leads to a reduction of the coercive field with respect to directly coupled Nd2Fe14B grains. In these simulations the ratio between the grain size and the GB thickness was kept constant. This accounts for the significant difference in the coercive fields of the 30 nm G_3 nm GB and 50 nm G_5 nm GB simulations. This influence of the grain size is approximately equal to the difference in the calculated coercive field values of the simulations of the model structure of directly coupled grains without a GB phase (dotted lines in Figure 19(a)). Bance et al. [55] showed that the decrease of the coercive field with increasing grain size in hard magnets is caused by the nonuniform magnetostatic field in the polyhedral grains. In summary, the result from the 2-G model structure, that Hsw is mostly independent of the GB thickness in isotropically oriented Nd-Fe-B magnets, was also verified with the realistic Voronoi model structure calculations.
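As a rough quantitative companion to the domain wall discussion, the sketch below evaluates the textbook Bloch wall width δ = π√(A/K1) and specific wall energy γ = 4√(A·K1) for Nd2Fe14B, using typical room-temperature literature values (A ≈ 7.7 pJ/m, K1 ≈ 4.3 MJ/m³ — assumptions, not values stated in this paper):

```python
import math

A  = 7.7e-12   # exchange stiffness of Nd2Fe14B [J/m], typical literature value
K1 = 4.3e6     # uniaxial anisotropy constant [J/m^3], typical literature value

delta = math.pi * math.sqrt(A / K1)   # Bloch wall width
gamma = 4.0 * math.sqrt(A * K1)       # Bloch wall energy per unit area

print("Bloch wall width  delta = %.1f nm" % (delta * 1e9))     # ~4 nm
print("Bloch wall energy gamma = %.0f mJ/m^2" % (gamma * 1e3))  # ~23 mJ/m^2
```

With these inputs the wall width (~4 nm) is comparable to the 2 nm-8 nm GB thicknesses studied here, which is consistent with the picture that the soft GB layer can host the nucleation of the reversed domain.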
The dependence of the coercive field on the GB properties is more strongly pronounced in aligned Nd-Fe-B magnets. Figure 19(b) compares the results of simulations using the Voronoi model structure of Figure 1 with an average grain misalignment ⟨ψ0⟩ ≈ 7°. We observed that the decrease of the coercive field with rising grain size is less pronounced in the simulations of anisotropically oriented, directly coupled Nd-Fe-B grains (dotted lines in Figure 19(b)). Secondly, the GB thickness has a stronger influence on the reduction of the coercive field in anisotropic magnets, which is shown by the greater difference in the coercive field values of the 30 nm G_3 nm GB and 50 nm G_5 nm GB simulations compared to the directly coupled simulations (no-GB). This is in accordance with our recently published results on the strong decrease of the coercive field with rising GB thickness in anisotropic Nd-Fe-B magnets [11]. It should be emphasized that the presence of a soft magnetic GB layer always leads to a reduction of the coercive field in an aligned magnet if the saturation polarization of the GB is > 0.1 T (A = 0.05 pJ/m). The decrease of the coercive field with rising Js of the GB layer shows a nonlinear behaviour for anisotropically oriented grains, compared to the linear decrease in the isotropic case. Conclusion The TEM/EELS analysis of nanocrystalline Nd-Fe-B based magnet materials revealed an asymmetric composition profile of the Fe and Nd content across the grain boundary phase in isotropically oriented melt-spun magnets. We found an enrichment of iron up to 60 at% in the Nd-containing grain boundaries close to the prismatic Nd2Fe14B grain surfaces and a reduced iron content of about 35% close to the basal grain surfaces perpendicular to the c-axis. Numerical micromagnetic simulations based on granular Voronoi model structures showed that the coercive field strongly depends on the average Fe content and on the saturation polarization and exchange stiffness constant of the GB phase, as well as on the GB thickness and grain orientation. In general, the coercive field is significantly increased if the Fe content of the GBs, especially those parallel to the c-direction of the hard magnetic 2-14-1 grains, is reduced. Our simulations predicted an increase of the coercive field of isotropically oriented magnets with a soft magnetic GB phase, independently of the grain boundary thickness between 2 nm and 20 nm, for ⟨Js⟩ < 1.2 T compared to directly coupled 2-14-1 grains with no GB phase. Contrary to this result, we have demonstrated that the coercive field of anisotropic, aligned magnets significantly decreases for soft magnetic GB phases with Js > 0.2 T and GB thicknesses of 3 nm-5 nm compared to directly coupled 2-14-1 grains. Moreover, a rising GB thickness > 4 nm further leads to a significant reduction of the coercive field in anisotropic aligned magnets. We have demonstrated that numerical micromagnetic simulations can accurately predict the hysteretic properties of nanocrystalline melt-spun Nd-Fe-B magnet materials. Figure 1: Micromagnetic finite element model structure with 29 Voronoi grains separated by a GB phase with a thickness of about 10% of the grain diameter. Figure 4: (a) TEM-BF image showing several misaligned grains with the marked [001] directions and the framed section of the HRTEM image of Figure 6. (b) HAADF image (cl = 30 mm) with the EELS-1 line scan (Figure 7) across a GB with a double contrast. The insert in (b) correlates the double contrast of the GB (HAADF signal, red) and the average Z^1.65 (blue) along the EELS-1 line scan.
Figure 6: HRTEM image of three grains separated by crystalline GBs; the (001) lattice fringes of the top right grain, (114) of the left grain, and (111) of the bottom grain are visible. The positions of the EELS line scans 2 and 3 of Figure 7 are shown. Figure 8: Comparison of the measured demagnetization curve of the MQU-F melt-spun ribbon with calculated curves for directly coupled Nd2Fe14B grains (no-GB) and grains separated by a weakly soft magnetic GB phase (sm-GB) with Js = 0.43 T and A = 1.0 pJ/m, for an average grain misorientation of 45° and 60°. The average grain size is 50 nm and the average GB thickness is 5 nm. Figure 10: Comparison of the measured demagnetization curve of the MQP-B+ melt-spun ribbon with calculated curves for directly coupled Nd2Fe14B grains (no-GB) and grains separated by a soft magnetic GB phase (sm-GB) with Js = 1.1 T and A = 6.54 pJ/m, for an average grain misorientation of 60°. The average grain size is 35 nm and the average GB thickness is 3 nm. Figure 13: Comparison of the measured demagnetization curve of the Nd-Fe-B nanocomposite melt-spun ribbon with calculated curves for directly coupled, only hard magnetic grains (only Nd2Fe14B) and for the model structure with 8 soft ferromagnetic grains and 21 Nd2Fe14B grains (8 sm-G). 45° and 60° denote the average misorientation of the granular model structure. The average grain size is 60 nm. Figure 14: Three different configurations with the orientation of the GB parallel and normal to Hext, and the c-axis of the grains perpendicular to the GB (x-GB) or parallel to the GB (y-GB). Figure 15: (a) Influence of the GB thickness on Hsw for the three different 2-G model structures of Figure 14 (solid lines); for comparison, GBs with averaged homogeneous magnetic properties of Js = 0.43 T and Js = 1.10 T are shown (dotted lines). (b) Influence of the averaged homogeneous saturation polarization Js of the GB phase on Hsw in the 2-G model structure for different GB thicknesses, Hext ‖ [111]. The 2-G model structure with a GB thickness of 20 nm has a size of 60 × 60 × 60 nm. Figure 16: Calculated demagnetization curves for the x-GB with Hext ‖ [001] and the y-GB with Hext ‖ [001], with a GB thickness of 8 nm. The details of the magnetic states A-D are shown in Figures 17 and 18. Figure 17: Calculated magnetization states of the x-GB with Hext ‖ [001]: in state A the magnetization of the GB is in plane, and in state B the magnetization of the GB is parallel to the external field and antiparallel to the adjacent grains, forming two Néel DWs close to the grain surfaces. Figure 19: Influence of the averaged magnetic properties, the grain size, and the GB thickness on the coercive field. (a) Isotropically oriented grains. (b) Anisotropically oriented grains.
[38]. A crystalline, Nd-enriched Nd16.4 …

Table 1: Crystal structure and lattice parameters of the identified phases in the large-grained nanocomposite Nd-Fe-B melt-spun ribbon.

Table 2: Areal fraction and magnetic properties of the four identified granular phases used in the micromagnetic simulations.

Table 3: Measured Fe + Co content in GBs in sintered and melt-spun Nd-Fe-B magnets and the resulting magnetic properties.

… (a) (dotted lines). With a low J_s value (0.43 T) of the GB layer and H_ext ‖ [111], H_sw is above the value of the anisotropic -GB (H_ext ‖ [001]). The switching field value of the averaged GB (H_ext ‖ [111]) with a J_s of 1.10 T is below H_sw of the -GB (H_ext ‖ [001]) for all GB thicknesses. At a GB thickness of about 4 nm, the -GB and the homogeneous GB with a J_s of 0.43 T have approximately the same switching field values. Therefore it is justified to use a single-phased GB with homogeneous magnetic properties. (Curve labels: x-GB, H_ext ‖ [001]; y-GB, H_ext ‖ [001]; xy-GB, H_ext ‖ [111].)
Distinct effects of TRAIL on the mitochondrial network in human cancer cells and normal cells: role of plasma membrane depolarization

Apo2 ligand/tumor necrosis factor-related apoptosis-inducing ligand (Apo2L/TRAIL) is a promising anticancer drug due to its tumor-selective cytotoxicity. Here we report that TRAIL exhibits distinct effects on the mitochondrial networks in malignant cells and normal cells. Live-cell imaging revealed that multiple human cancer cell lines and normal cells exhibited two different modes of mitochondrial responses to TRAIL and death receptor agonists. Mitochondria within tumor cells became fragmented into punctate structures and clustered in response to toxic stimuli. The mitochondrial fragmentation was observed at 4 h, became more pronounced over time, and was associated with apoptotic cell death. In contrast, mitochondria within normal cells such as melanocytes and fibroblasts became only modestly truncated, even when they were treated with toxic stimuli. Although TRAIL activated dynamin-related protein 1 (Drp1)-dependent mitochondrial fission, inhibition of this process by Drp1 knockdown or with the Drp1 inhibitor mdivi-1 potentiated TRAIL-induced apoptosis, mitochondrial fragmentation, and clustering. Moreover, mitochondrial reactive oxygen species (ROS)-mediated depolarization accelerated mitochondrial network abnormalities in tumor cells, but not in normal cells, and TRAIL caused higher levels of mitochondrial ROS accumulation and depolarization in malignant cells than in normal cells. Our findings suggest that tumor cells are more prone than normal cells to oxidative stress and depolarization, thereby being more vulnerable to mitochondrial network abnormalities, and that this vulnerability may be relevant to the tumor-targeting killing by TRAIL.

INTRODUCTION

Apo2 ligand/tumor necrosis factor-related apoptosis-inducing ligand (Apo2L/TRAIL) is a member of the tumor necrosis factor cytokine superfamily, which has emerged as a promising anticancer drug because it induces apoptosis in cancer cells with minimal cytotoxicity toward normal cells [1-4]. TRAIL binds to two death receptors (DRs), TRAIL-R1/DR4 and TRAIL-R2/DR5, to trigger the extrinsic and intrinsic apoptotic pathways [5, 6]. However, multiple cancer cell types such as malignant melanoma, glioma, and non-small cell lung cancer (NSCLC) cells are resistant to TRAIL treatment despite expressing DRs on their cell surface. Moreover, TRAIL-responsive tumors acquire a resistant phenotype that renders TRAIL therapy ineffective [7, 8]. Therefore, overcoming TRAIL resistance is necessary for effective TRAIL therapy, and small molecules that can potentiate TRAIL effectiveness are urgently required. Persistent depolarization of the plasma membrane potential is an early event essential for apoptosis and caspase-3 activation in human malignant cells that is induced by diverse pro-apoptotic agents including anti-Fas antibody, arsenic trioxide, and the mitochondrial toxin rotenone [9-11]. We previously showed that depolarization is an early and prerequisite event during TRAIL-induced apoptosis in malignant tumor cells such as melanoma, leukemia, and NSCLC. TRAIL dose- and time-dependently induces robust depolarization in these cells after a time lag of 2-4 h [12, 13]. Moreover, persistent depolarization induced by high K+ loading or ATP-sensitive K+ channel inhibitors sensitizes melanoma cells to TRAIL-induced apoptosis.
This sensitization is associated with increased activity of the intrinsic and endoplasmic reticulum death pathways. A number of pro-apoptotic responses, including mitochondrial reactive oxygen species (mROS) generation, collapse of the mitochondrial membrane potential (MMP), and oxidation of cardiolipin within mitochondria, are potentiated. All of these events are known to cooperatively lead to the disruption of mitochondrial membrane integrity, a gatekeeping step for the release of pro-apoptotic proteins such as cytochrome c. In contrast, TRAIL and membrane-depolarizing agents, alone or in combination, minimally induce apoptosis in normal melanocytes, though the cells express DR4 and DR5 on their surfaces. These observations suggest that malignant cells are more prone than normal cells to depolarization-triggered apoptosis. However, the precise mechanisms by which depolarization potentiates mitochondrial dysfunction remain unclear.

Mitochondria are highly dynamic organelles with a reticular network that is regulated by the balance between fission and fusion. Mitochondrial morphology is critical for cell function and apoptosis [14, 15]. The mitochondrial network depends on the delicate balance between two antagonistic machineries responsible for fission and fusion of the mitochondrial membrane. Mitochondrial network dynamics is controlled by dynamin-related proteins with GTPase activity, namely mitofusin 1/2 (Mfn1/2), optic atrophy 1 (OPA1), and dynamin-related protein 1 (Drp1). Mfn1/2 and OPA1 act in concert to regulate mitochondrial fusion and cristae organization, while Drp1 regulates mitochondrial fission [16, 17]. Well-balanced fission and fusion is required for healthy mitochondria, because each process is essential for cell function and survival. However, the role of the mitochondrial network in cancer cell apoptosis is a matter of debate, because conflicting results have been reported on the role of mitochondrial fission. Mitochondrial fission is thought to be essential for mitochondrial outer membrane permeabilization and cytochrome c release [18]. However, an increasing body of evidence suggests that mitochondrial fission is pro-apoptotic or anti-apoptotic, depending on the cell type and the applied apoptotic stimuli [19-24]. In an attempt to elucidate the role of mitochondrial fission in cancer cell apoptosis, we previously studied the effect of enforced mitochondrial fusion due to Drp1 inhibition on TRAIL-induced apoptosis. Both Drp1 knockdown and the Drp1 inhibitor mitochondrial division inhibitor-1 (mdivi-1) [25] inhibit mitochondrial fission, and kill and sensitize cancer cells to apoptosis [26]. These effects are generally observed among different cancer cell lines. Moreover, the sensitization is associated with increased intrinsic pathway activity and is preceded by depolarization, MMP collapse, mROS generation, and cardiolipin oxidation. In contrast, mdivi-1 and TRAIL, alone or in combination, induced minimal apoptosis in normal melanocytes and fibroblasts. These observations suggest that mitochondrial fission acts as an anti-apoptotic factor in a tumor-specific manner. These findings led us to investigate the possible role of the mitochondrial network in TRAIL-induced cancer cell apoptosis. Here we report, for the first time, that TRAIL induces pro-apoptotic mitochondrial network abnormalities in human malignant cells, but not in normal cells.
We also demonstrate that this difference is attributed to the distinct sensitivities of these cells to ROS and depolarization, which are required for the pro-apoptotic mitochondrial network abnormalities.

RESULTS

TRAIL induces punctate mitochondria and their clustering in human cancer cells, but not in normal cells

As reported earlier [12], TRAIL dose-dependently increased apoptotic cell death in A375 melanoma cells. Up to 24 h, treatment with up to 100 ng/ml of TRAIL induced a moderate (<30%) increase in the number of annexin V-positive cells. On the other hand, 72 h of treatment with 25 ng/ml of TRAIL caused a modest increase in the number of annexin V-positive cells (18.2 ± 0.7%, n = 4), while treatment with 100 ng/ml of TRAIL substantially increased this cell population (59.8 ± 2.9%, n = 4). Therefore, we used 25 ng/ml and 100 ng/ml TRAIL as a weak and a strong inducer of apoptosis, respectively, throughout the present study. We then determined whether TRAIL affected mitochondrial network dynamics in these cells. The cells were treated with recombinant human TRAIL for various time periods, stained with the mitochondria-targeting dye MitoTracker Red CMXRos, and their mitochondrial networks were then analyzed using a cell imaging system equipped with a digital inverted microscope. In control cells, the mitochondria mainly exhibited a tubular morphology of ~12 μm, a hallmark of well-balanced fission and fusion (Figure 1A, left). TRAIL-treated cells showed multiple mitochondrial network abnormalities in a dose- and time-dependent manner. After 24 h of treatment with TRAIL (25 ng/ml), a modest mitochondrial truncation took place (Figure 1A, middle), resulting in short mitochondria with an average length of ~9 μm (Figure 1C). Upon stimulation with a higher concentration of TRAIL (100 ng/ml), substantial mitochondrial fragmentation occurred (Figure 1A, right), resulting in extremely short mitochondria with an average length of ~3 μm (Figure 1C). The majority of the mitochondria became punctate and clustered. Time course experiments indicated that for TRAIL (100 ng/ml), a modest truncation was observed as early as 30 min, while punctate mitochondria and their clustering were first detected at 4 h and then became more pronounced over time (Figure 1B). Next, we examined whether this phenomenon is specific to melanoma cells or generally observed among multiple cancer cell types. The mitochondria within A549 NSCLC cells exhibited a moderately fragmented network even in the absence of stimulus (Figure 2A, top left). After TRAIL treatment, clustering of punctate mitochondria became clear (Figure 2A, top right). Similarly, the mitochondria within two osteosarcoma cell lines, MG63 and HOS, also became fragmented into punctate structures and clustered after TRAIL treatment (Figure 2A, middle and bottom). These results show that TRAIL induces similar mitochondrial network abnormalities in different human cancer cell types. We then examined whether these mitochondrial network abnormalities are specific to tumor cells. As shown in Figure 2B, TRAIL treatment resulted in modest fission, but not clustering of punctate mitochondria, in melanocytes and fibroblasts. These results indicate that TRAIL evokes clustering of punctate mitochondria in a tumor-specific manner.
The mitochondrial network abnormalities are associated with cell death

Microscopic analyses showed that healthy cells possess tubular, elongated, or modestly fragmented mitochondria, while morphologically damaged cells regularly harbor punctate and clustered mitochondria. To clarify the possible link between the mitochondrial network abnormalities and cell death, we compared the effects on the mitochondrial network of two different anti-DR4/5 antibodies with different pro-apoptotic activities. An anti-DR5 antibody (αDR5; MAB631, 1 μg/ml) was equipotent to TRAIL (100 ng/ml) at inducing apoptotic cell death, while an anti-DR4 antibody (αDR4; MAB631) was ineffective [27] (see Figure 5C). Treatment with αDR4 resulted in modest fragmentation of mitochondria in A375 cells (Figure 3A, middle), while treatment with αDR5 resulted in considerable increases in punctate and clustered mitochondria (Figure 3A, bottom). Similar distinct effects of αDR4 and αDR5 on mitochondrial morphology were observed in A2058 cells (Figure 3C). These observations were confirmed by mitochondrial length measurements (Figure 3B, 3D) as well as by confocal imaging (Figure 3E). Cells harboring heavily fragmented and clustered mitochondria possessed brighter, fragmented nuclei, indicating the onset of nuclear fragmentation and chromatin condensation, hallmarks of apoptosis.

TRAIL induces Drp1-dependent mitochondrial fission and its inhibition accelerates the mitochondrial network abnormalities

Since different cancer cell types exhibit similar basic responses to TRAIL, we investigated the precise mechanisms that cause the mitochondrial network abnormalities using melanoma cells and melanocytes as representatives of malignant cells and their normal counterpart, respectively. We then examined the effect of TRAIL on phosphorylation of Drp1 at Ser616, since this event is essential for mitochondrial fission [16]. Immunoblot analyses using an antibody directed against Drp1 phosphorylated at Ser616 (pDrp1 Ser616) indicated that TRAIL treatment increased the level of pDrp1 Ser616. Figure 4A shows representative blots using A375 cells. The level of pDrp1 Ser616 clearly first increased at 2 h and was maintained for at least 4 h (Figure 4A, top). On the other hand, the increase in the level of pDrp1 at Ser637, which inhibits mitochondrial fission [16], was first observed at 30 min, reached its maximum at 1 h, and thereafter declined (Figure 4A, middle). We also checked for possible changes in the expression levels of Drp1, Fis1, and Mfn1 and found that their expression levels were only marginally changed (Figure 4B).

Figure 4: TRAIL induces Drp1-dependent mitochondrial fission whose inhibition accelerates the pro-apoptotic mitochondrial network abnormalities. (A, B) A375 cells were treated with TRAIL (100 ng/ml) for the indicated time periods and analyzed for their expression of pDrp1 Ser616 and Ser637 (A) or Drp1, Fis1, and Mfn1 (B) using immunoblotting. GAPDH was used as a loading control. (C, E) Control and Drp1 knockdown A375 cells were treated with TRAIL (100 ng/ml) (C), or A375 cells were treated with TRAIL (100 ng/ml) alone or in combination with mdivi-1 (50 μM) (E), for 24 h at 37°C and analyzed for mitochondrial morphology. (D, F) Statistical analyses of the average mitochondrial length for experiments C and E, respectively. The values represent the means ± SE of three independent experiments. *P < 0.05; **P < 0.01; ns, not significant.
To determine whether the pro-apoptotic mitochondrial network abnormalities were attributable to the Drp1 pathway, we analyzed the effect of TRAIL on the mitochondrial network in A375 cells in which Drp1 protein levels were downregulated by siRNA interference, as previously reported [26]. Compared to control cells treated with irrelevant scrambled siRNA, cells treated with siRNA targeting Drp1 exhibited a significantly elongated mitochondrial network, indicating that mitochondrial fission is impaired in these cells (Figure 4C, lower left, and 4D). Nevertheless, the Drp1 knockdown significantly enhanced TRAIL-induced mitochondrial fragmentation and clustering (Figure 4C, lower right). As a result, the average mitochondrial length became significantly shorter than that observed in control cells treated with TRAIL (Figure 4D). To verify these observations, we also examined the effect of mdivi-1, a potent Drp1 inhibitor, which can inhibit mitochondrial fission in multiple human cancer cell types and sensitizes them to TRAIL-induced apoptosis [26]. mdivi-1 treatment elongated mitochondria in some cells while it promoted modest mitochondrial fragmentation in others. Overall, no significant increase in the average mitochondrial length was observed compared to non-treated control cells (Figure 4E, lower left, 4F). However, similar to Drp1 knockdown, mdivi-1 treatment significantly accelerated TRAIL-induced mitochondrial fragmentation and clustering (Figure 4E, lower right), resulting in mitochondria with an average length of ~2 μm (Figure 4F). Collectively, these results show that although TRAIL induces Drp1-dependent mitochondrial fission, this process inhibits rather than mediates the mitochondrial network abnormalities.

Correlation among depolarization, mitochondrial dysfunction, and apoptosis

Considering that depolarization is a key pro-apoptotic event in TRAIL-induced apoptosis [12, 13], we hypothesized that it might play a role in the mitochondrial network abnormalities. To test this possibility, we first analyzed the capacity of αDR4 and αDR5 to evoke depolarization. As shown in Figure 5A, flow cytometric analyses using the anionic dye bis-oxonol showed that TRAIL and αDR5 dose-dependently and significantly induced depolarization, while αDR4 induced only modest depolarization. Next, we analyzed the collapse of MMP and caspase-3 activation, two key events in the intrinsic pathway, by flow cytometry. Based on their staining, cells were divided into four groups: MMP^low CASP-3^low, MMP^low CASP-3^high, MMP^high CASP-3^low, or MMP^high CASP-3^high cells. In control cells, the majority of the cell population (77.3 ± 1.1%) was MMP^high CASP-3^low, and the other, minor populations were MMP^low CASP-3^low (17.6 ± 1.4%), MMP^high CASP-3^high (3.5 ± 0.6%), or MMP^low CASP-3^high (1.7 ± 0.5%) cells (N = 4), indicating that MMP is intact and there is minimal caspase-3 activation (Figure 5B). Treatment with TRAIL resulted in a significant increase in MMP^low CASP-3^low cells, up to 42.2 ± 3.9%, concomitant with a significant decrease in MMP^high CASP-3^low cells, down to 42.3 ± 1.6%. In addition, MMP^low CASP-3^high and MMP^high CASP-3^high cells increased up to 9.0 ± 3.0% and 6.5 ± 2.2%, respectively. This indicates that TRAIL induces significant MMP collapse and caspase-3 activation. Treatment with αDR5 induced similar effects, while αDR4 caused only modest MMP depolarization and minimal caspase-3/7 activation.
Moreover, TRAIL and αDR5 were similarly able to induce apoptosis, while αDR4 minimally induced apoptosis (Figure 5C, 5D). Although KCl and mdivi-1 alone had minimal effects, these two agents significantly potentiated the pro-apoptotic effects of TRAIL and αDR5. They also significantly augmented αDR4-induced apoptosis to levels comparable to those induced by TRAIL or αDR5 alone. These results show the correlation among depolarization, mitochondrial dysfunction, and apoptosis.

Depolarization potentiates the αDR4-induced mitochondrial network abnormalities

Next, we examined whether depolarization affects the effects of αDR4 on the mitochondrial network. Figure 6A shows representative confocal pictures of A2058 cells. Upon high K+ loading, some mitochondria became elongated while others became moderately fragmented, as observed after αDR4 treatment. However, when KCl and αDR4 were applied together, mitochondria primarily became punctate and clustered (Figure 6A). Moreover, cells with these mitochondria harbored damaged or fragmented nuclei. Similar to KCl, mdivi-1 also potentiated the αDR4-induced mitochondrial network abnormalities (Figure 6A). Judging from the mitochondrial morphology and the average mitochondrial length, mdivi-1 and KCl potentiated the αDR4-induced mitochondrial network abnormalities to a similar extent (Figure 6B and 6C). Similar results were obtained with A375 cells (data not shown).

Normal cells are resistant to DR-mediated depolarization, mROS accumulation, and mitochondrial network abnormalities

Our earlier studies show that depolarization and mROS mutually regulate one another during TRAIL-induced apoptosis [13, 28]. Therefore, we determined whether the TRAIL-induced depolarization in melanoma cells was also ROS-dependent. Treatment with MnTBaP, a superoxide dismutase mimetic, at concentrations ranging from 3 to 30 μM dose-dependently reduced TRAIL-induced depolarization in A375 cells, although the effects of MnTBaP were usually more pronounced for TRAIL (25 ng/ml) than for TRAIL (100 ng/ml) (Figure 7A). As a result, MnTBaP (30 μM) inhibited the depolarization induced by TRAIL (25 ng/ml) by 61.1 ± 3.7% (N = 3). On the other hand, depolarization induced by high K+ loading was minimally affected by MnTBaP treatment (Figure 7A). These results show that TRAIL specifically induces depolarization in a ROS-dependent manner in these cells. Next, we analyzed the ability of TRAIL to induce depolarization in melanocytes, in which TRAIL induces minimal pro-apoptotic mitochondrial network abnormalities. TRAIL, αDR5, and αDR4 all induced minimal depolarization in melanocytes, while high K+ loading caused substantial depolarization comparable to that observed in A375 cells (Figure 7B), indicating that melanocytes are specifically resistant to TRAIL-induced depolarization. Since mROS appears to regulate depolarization, we next compared mROS generation between these two cell types using MitoSOX Red. This dye localizes to mitochondria and serves as a fluorescent probe for the selective detection of superoxide in these organelles [29, 30]. Consistent with our earlier study [28], TRAIL treatment substantially increased mROS levels in A375 cells (Figure 7C). αDR5 also dose-dependently increased mROS levels, while αDR4 had minimal effect. In contrast, all of these agents only modestly increased mROS levels in melanocytes.
We also found that, unlike in tumor cells, minimal punctate mitochondria and clustering were observed in normal cells even when αDR4 was applied together with KCl or mdivi-1 (Figure 7D). Collectively, these findings suggest that normal cells are more resistant than tumor cells to DR-mediated depolarization, mROS accumulation, and mitochondrial network abnormalities.

DISCUSSION

In the present study, we show that TRAIL exerts distinct effects on the mitochondrial networks in malignant cells and normal cells. TRAIL induced heavy fragmentation of mitochondria into punctate structures and their clustering in multiple human cancer cell lines (Figures 1, 2A) and predominantly caused moderate fission in normal cells such as melanocytes and fibroblasts (Figure 2B). Analyses using agonistic αDR4 and αDR5 antibodies indicated that these effects are attributable to DR ligation (Figure 3A, 3B). Considering that TRAIL induced apoptosis in cancer cells, but not in normal cells, it is possible that moderate fission is adaptive while mitochondrial fragmentation and clustering are pro-apoptotic. Consistent with this view, the clustering of punctate mitochondria was specifically associated with cell damage as well as with nuclear fragmentation and chromatin condensation, the hallmarks of apoptosis (Figures 3E, 6A). Moreover, it is likely that the punctate mitochondria result from mitochondrial swelling, a consequence of mitochondrial dysfunction and integrity collapse. In addition, the observation that αDR4, which lacks pro-apoptotic activity (Figure 5C, 5D), induced moderate fragmentation but not clustering of punctate mitochondria also supports the pro-apoptotic role of the latter response (Figure 3A, 3B). Previous studies have shown that mitochondrial clustering precedes cytochrome c release during etoposide-, anti-Fas-, or arsenic trioxide-induced apoptosis in leukemia cells [31, 32]. The authors have also shown that mitochondrial clustering occurs upstream of caspase-3 activation and is specifically associated with the intrinsic death pathway, suggesting a pro-apoptotic role. Hyperfusion of mitochondria caused by fission inhibition is observed in different cancerous cells and in normal cells upon stimulation with diverse stimuli. Tondera et al. [33] have shown that in several cell types, including mouse embryonic fibroblasts (MEFs), mitochondria hyperfuse to form highly interconnected networks in response to a number of apoptotic stimuli such as UV irradiation, actinomycin D, and cycloheximide. This stress-induced mitochondrial hyperfusion (SIMH) requires OPA1, MFN1, and the mitochondrial inner membrane protein stomatin-like protein 2 (SLP-2), and is adaptive to metabolic insults. SIMH is a pro-survival response against stress-induced apoptosis, as evidenced by the higher sensitivity of cells deficient in MFN1 or SLP-2 to actinomycin D and UV irradiation. Moreover, enforced mitochondrial hyperfusion, due to the expression of a dominant-negative mutant of Drp1 or membrane-associated ring finger 5, promotes NF-κB activation and the expression of anti-apoptotic genes such as Bcl-XL, X-linked inhibitor of apoptosis protein, and FLICE-inhibitory protein in human embryonic kidney cells (HEK293), HeLa cells, and MEFs [34]. On the other hand, mitochondrial hyperfusion induced by knockdown of Drp1 or mitochondrial fission factor delays cell cycle progression and promotes G2/M accumulation and caspase-dependent cell death in U2OS osteosarcoma cells [35]. These observations suggest that mitochondrial hyperfusion is pro-apoptotic.
Thus, at present, the role of mitochondrial hyperfusion in cancer cell apoptosis is controversial. Consistent with our previous study [26], Drp1 knockdown and mdivi-1 treatment increased elongated mitochondria in tumor cells (Figure 4C-4E). However, unlike with Drp1 knockdown, we also observed modest mitochondrial fragmentation in some mdivi-1-treated cells (Figures 4E, 6A). Overall, we observed no significant increase in the average mitochondrial length (Figure 4F). Similar dual effects were observed in KCl-treated cells (Figure 6A). In our systems, the mitochondrial network abnormalities occurred sequentially; mitochondrial truncation associated with local mitochondrial elongation was first observed at early time points (30 min) after TRAIL treatment, while mitochondrial fragmentation and clustering regularly became prominent only at late time points (4-24 h) (Figure 1B). Moreover, while αDR4 or KCl alone induced mitochondrial truncation and local mitochondrial elongation, their combined application caused substantial mitochondrial fragmentation and clustering. This suggests that the two modes of mitochondrial network abnormalities are not independent events but possibly are related. Collectively, it is possible that the dual response is either pro-survival or pro-apoptotic depending on its duration or amplitude. Interestingly, mitochondrial hyperfusion such as SIMH may inherently be an adaptive, pro-survival response from which the network is restored thereafter, whereas it may act as a pro-apoptotic response when it persists, as reported by Westrate et al. [35]. Consistent with the impact on the mitochondrial network, TRAIL induced the phosphorylation of Drp1 at Ser616 and Ser637. Moreover, the durations for which the phosphorylated proteins persisted were different: pDrp1 Ser616 lasted for at least 4 h while pDrp1 Ser637 declined within 2 h (Figure 4A). Since pDrp1 Ser616 is essential for mitochondrial fission while pDrp1 Ser637 inhibits it [16], this may provide a bias toward mitochondrial fission over mitochondrial fusion.

Figure 8: In response to DR agonists such as TRAIL, tubular mitochondria in normal cells undergo Drp1-dependent fission and then fuse to restore the tubular network (upper panel). This may make normal cells resistant to the stress. In tumor cells, on the other hand, relatively higher levels of mitochondrial ROS (mROS) accumulation occur, and mROS then evoke depolarization, which promotes mitochondrial fragmentation and clustering. Consequently, a smaller number of mitochondria can restore the tubular network through the fission-fusion homeostasis described above. Hence, this mitochondrial morphology homeostasis can be anti-apoptotic by counteracting the pro-apoptotic (irreversible) mitochondrial network abnormalities; impairment of the Drp1-dependent fission by Drp1 knockdown or mdivi-1 therefore accelerates mitochondrial fragmentation and clustering, and cell death. mROS accumulation also causes mitochondrial membrane potential (MMP) collapse, resulting in the opening of the mitochondrial permeability transition pore (mPTP), mitochondrial swelling, and collapse of mitochondrial integrity, a central gatekeeper of the intrinsic apoptosis pathway. Mitochondrial swelling and depolarization may cooperatively facilitate the formation and clustering of damaged punctate mitochondria. Either or both of mitochondrial fragmentation and clustering may promote apoptotic cell death.
In fact, mitochondrial fragmentation and clustering progressed at late time points after TRAIL treatment. Given that mitochondrial hyperfusion results from mitochondrial fission inhibition, it might play a role in TRAIL-induced cell death, although we observed only local mitochondrial elongation in some cells. Further investigation is necessary to test this view. Since mitochondrial fragmentation and clustering were accelerated rather than inhibited by Drp1 knockdown and by mdivi-1 treatment, Drp1-dependent mitochondrial fission may counteract the pro-apoptotic mitochondrial network abnormalities. This is also in accordance with our earlier observations that these two interventions potentiate TRAIL-induced mitochondrial dysfunction and apoptosis [26]. Collectively, these observations suggest that the mitochondrial fragmentation observed here may be different from the reversible Drp1-dependent mitochondrial fission and may be irreversibly committed to mitochondrial dysfunction. It should be noted that mitochondrial fragmentation can occur in a Drp1-independent manner. Dimmer and colleagues have recently shown that down-regulation of LETM1, an inner mitochondrial membrane protein, leads to Drp1-independent fragmentation and clustering of mitochondria and cell death [36]. This event is not caused by an imbalance in the fission-fusion equilibrium but is related to ion homeostasis disruption, since LETM1 functions as a K+/H+ antiporter. This is interesting considering that TRAIL-induced mitochondrial fragmentation and clustering is augmented by depolarization, a major cause of ion homeostasis disruption, and is associated with cell death. Considering the similarity between their observations and ours, it is possible that similar Drp1-independent, ion homeostasis-mediated mechanisms underlie the TRAIL-mediated mitochondrial fragmentation. Further studies on the possible roles of LETM1 and ion homeostasis disruption in the pro-apoptotic mitochondrial network abnormalities are currently underway. Another important finding in this study is that persistent depolarization is required for the pro-apoptotic mitochondrial network abnormalities, though depolarization per se did not cause mitochondrial fragmentation and clustering (Figure 6A). αDR4, which induced minimal mitochondrial fragmentation and clustering (Figures 3A, 3C, 3E, 6A), also induced minimal depolarization together with minimal mitochondrial dysfunction, caspase-3 activation, and apoptosis (Figure 5A-5D). However, in the presence of KCl or mdivi-1, αDR4 induced substantial mitochondrial fragmentation and clustering (Figure 6A). In addition, a considerable level of apoptosis, comparable to that induced by TRAIL or αDR5 alone, was observed under these conditions, while KCl or mdivi-1 alone caused minimal apoptosis (Figure 5C, 5D). These results suggest that depolarization plays a pivotal role in the progression from the non-apoptotic to the pro-apoptotic mitochondrial network abnormalities, though further studies are necessary to elucidate the mechanisms by which depolarization elicits these effects. It may be worthwhile to determine the molecular basis of the tumor-specific mitochondrial network abnormalities, because targeting this process could be exploited to enhance the induction of tumor-targeting apoptosis. Extending our earlier work using Jurkat leukemia cells [13], we found that mROS also contributed to TRAIL-induced depolarization in melanoma cells (Figure 7A).
More importantly, DR ligation preferentially induced depolarization and mROS accumulation in malignant cells over normal cells (Figure 7B, 7C). Collectively, our results imply that higher depolarization and mROS accumulation in malignant cells cause the tumor-selective mitochondrial network abnormalities, leading to apoptosis (Figure 8). In summary, we demonstrate in this paper that TRAIL induces mitochondrial network abnormalities associated with apoptotic cell death in human malignant cells, but not in normal cells, and that depolarization plays a critical role in this process. To our knowledge, these findings are the first to show that TRAIL affects the mitochondrial network in human malignant cells, and they may provide insight into the molecular basis of the tumor-targeting killing effect of TRAIL.

MATERIALS AND METHODS

Cell culture

The human melanoma cell lines, A549 cells (a human lung adenocarcinoma epithelial cell line), and osteosarcoma cell lines were obtained from the Health Science Research Resource Bank (Osaka, Japan). Human dermal fibroblasts from facial dermis were obtained from Cell Applications (San Diego, CA). These cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM; Sigma-Aldrich) supplemented with 10% fetal bovine serum (FBS; Sigma-Aldrich) (FBS/DMEM) in a 5% CO2 incubator. Normal human epidermal melanocytes were obtained from Cascade Biologics (Portland, OR) and cultured in DermaLife Basal Medium supplemented with DermaLife M LifeFactors (Kurabo, Osaka, Japan). Cells were harvested by incubation in 0.25% trypsin-EDTA (Life Technologies Japan) for 5 min at 37°C.

Apoptosis

Apoptotic cell death was quantitatively assessed by double-staining with fluorescein isothiocyanate (FITC)-conjugated annexin V and propidium iodide (PI), as previously described [12]. Briefly, cells (2 × 10^5/well) in 24-well plates were incubated with the agents to be tested for 24 h in FBS/DMEM at 37°C. Subsequently, the cells were stained with FITC-conjugated annexin V and PI using a commercially available kit (Annexin V FITC Apoptosis Detection Kit I; BD Biosciences Japan). The stained cells were evaluated on the FACSCalibur (BD Biosciences Japan) and analyzed using CellQuest software (BD Biosciences). Four cellular subpopulations were evaluated: viable cells (annexin V−/PI−); early apoptotic cells (annexin V+/PI−); late apoptotic cells (annexin V+/PI+); and necrotic/damaged cells (annexin V−/PI+). Annexin V+ cells were considered to be apoptotic cells.
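As a rough illustration of the quadrant classification described above, the Python sketch below assigns synthetic two-channel events to the four subpopulations. The thresholds and data are hypothetical stand-ins (the actual gating was performed in CellQuest), so this reproduces only the logic, not the original analysis.

```python
import numpy as np

# Synthetic two-channel flow cytometry events; both the log-normal
# intensities and the gating thresholds below are illustrative only.
rng = np.random.default_rng(0)
annexin_v = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)  # FITC channel
pi = rng.lognormal(mean=0.8, sigma=1.2, size=10_000)         # PI channel

ANNEXIN_CUT, PI_CUT = 10.0, 10.0  # assumed positivity thresholds

quadrants = {
    "viable (annexin V-/PI-)":           (annexin_v < ANNEXIN_CUT) & (pi < PI_CUT),
    "early apoptotic (annexin V+/PI-)":  (annexin_v >= ANNEXIN_CUT) & (pi < PI_CUT),
    "late apoptotic (annexin V+/PI+)":   (annexin_v >= ANNEXIN_CUT) & (pi >= PI_CUT),
    "necrotic/damaged (annexin V-/PI+)": (annexin_v < ANNEXIN_CUT) & (pi >= PI_CUT),
}

# Report each subpopulation as a percentage of all counted events
for label, mask in quadrants.items():
    print(f"{label}: {100 * mask.mean():.1f}%")

# Annexin V+ events (early + late apoptotic) are scored as apoptotic cells
apoptotic = annexin_v >= ANNEXIN_CUT
print(f"apoptotic (annexin V+): {100 * apoptotic.mean():.1f}%")
```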
Mitochondrial network imaging acquisition and length measurements

The mitochondrial network was analyzed by staining with the mitochondria-targeting dye MitoTracker Red CMXRos (Life Technologies Japan), as previously described [26] with minor modifications. Briefly, cells in FBS/DMEM were plated at a density of 5 × 10^4/300 μl/well on an 8-well chambered coverglass (Thermo Fisher Scientific, Rochester, NY) and treated with the agents to be tested for 24 h at 37°C in a 5% CO2 incubator. After removing the medium by aspiration, the cells were washed with Hank's balanced salt solution (HBSS) and stained with 20 nM MitoTracker Red CMXRos and Hoechst 33342 (Dojindo) in HBSS for 1 h at 37°C in the dark in a 5% CO2 incubator. The cells were then washed with and immersed in FluoroBrite DMEM. Images were obtained and analyzed using an EVOS FL Cell Imaging System (Life Technologies Japan) equipped with a digital inverted microscope at ×1200 magnification. MitoTracker Red and Hoechst 33342 signals were captured using Texas Red and DAPI light cubes, respectively. For confocal imaging, samples were observed using laser scanning microscopes with Airyscan (LSM 700 and 880; Carl Zeiss Microscopy Japan, Tokyo, Japan) equipped with an oil-immersion objective (Plan Apochromat 63x/1.4; Carl Zeiss) and 555/561 and 405 nm lasers. Images were enlarged and analyzed for mitochondrial length using the free NIH ImageJ software (NIH, USA) and ZEN lite 2012 (Carl Zeiss).

Depolarization analysis

Depolarization was measured by flow cytometry using bis-oxonol, an anionic dye whose fluorescence intensity increases upon membrane depolarization, as previously described [12]. Briefly, cells (4 × 10^5 cells/500 μl) suspended in HBSS were incubated with 100 nM dye for 15 min at 37°C, and then incubated with the agents to be tested for 4 h at 37°C in a 5% CO2-containing atmosphere. Subsequently, 1 × 10^4 cells were counted for their fluorescence using the FL-2 channel of a FACSCalibur and analyzed using the CellQuest software. The data were expressed as F/F0, where F0 is the fluorescence in unstimulated cells and F is the fluorescence in stimulated cells.

Caspase-3/7 activation and MMP collapse

Activation of caspase-3/7 and MMP collapse were simultaneously measured by flow cytometry, as previously described [12]. Briefly, cells (2 × 10^5/ml) in 24-well plates were treated with the agents to be tested for 24 h in 10% FBS/DMEM at 37°C, and then stained with the dual-sensor MitoCasp kit (Cell Technology, Mountain View, CA). Caspase-3/7 activation and MMP were evaluated using the FACSCalibur according to the manufacturer's instructions, and the data were analyzed using CellQuest software.

mROS

mROS levels were measured using MitoSOX Red (Life Technologies Japan) by flow cytometry, and the signals were calibrated as previously described [28]. Briefly, cells (5 × 10^5/500 μl) suspended in HBSS were incubated with the agents to be tested for 4 h at 37°C and then incubated with 5 μM MitoSOX for 15 min at 37°C for loading. The cells were washed, resuspended in HBSS on ice, centrifuged at 4°C, and then analyzed for their fluorescence using the FACSCalibur. The data were analyzed using CellQuest software and expressed as F/F0, where F0 is the fluorescence of unstimulated cells and F is the fluorescence of stimulated cells.
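Both the bis-oxonol and MitoSOX readouts above are reported as F/F0 ratios. The short Python sketch below shows one plausible way to compute such a ratio from per-cell fluorescence values; the use of the median and the synthetic numbers are assumptions made here for illustration, not the paper's actual CellQuest workflow.

```python
import numpy as np

def f_over_f0(stimulated: np.ndarray, unstimulated: np.ndarray) -> float:
    """F/F0: fluorescence of stimulated cells relative to unstimulated cells.
    The choice of the median as the summary statistic is an assumption."""
    return float(np.median(stimulated) / np.median(unstimulated))

# Illustrative per-cell fluorescence intensities (arbitrary units)
rng = np.random.default_rng(1)
f0_cells = rng.lognormal(2.0, 0.4, size=5_000)  # unstimulated control
f_cells = rng.lognormal(2.6, 0.4, size=5_000)   # e.g. TRAIL-treated

print(f"F/F0 = {f_over_f0(f_cells, f0_cells):.2f}")  # > 1: depolarization / mROS rise
```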
Drp1 knockdown

Mitochondrial fission was inhibited by downregulating Drp1 expression using small interfering RNA (siRNA), as previously described [26]. Cells (2.5 × 10^5/well) were plated in six-well plates, transfected with 20 nM of either Drp1-targeting siRNA (#sc-43732, Santa Cruz Biotechnology, Santa Cruz, CA) or scrambled control siRNA (#sc-37007, Santa Cruz) using the Lipofectamine RNAiMAX kit (Life Technologies Japan) according to the manufacturer's instructions, and cultured for 72 h at 37°C in a 5% CO2 incubator.

Immunoblotting

The level of Drp1 was determined by immunoblot analysis, as previously described [26]. The phosphorylation of Drp1 was assessed by immunoblotting. Briefly, cells (1.5 × 10^5/ml) were washed twice with ice-cold PBS, lysed with RIPA buffer (Nacalai Tesque, Kyoto, Japan) containing protease inhibitors, and homogenized by sonication using a Bioruptor UCD-250 (Cosmo Bio, Tokyo, Japan). After centrifugation, the protein content of the resulting supernatant was measured using a Pierce BCA Protein Assay Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. After heating at 70°C for 10 min, samples (15-20 μg protein) were subjected to reducing sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) using a 10% separation gel (Life Technologies) and transferred onto polyvinylidene difluoride membranes (Life Technologies). The membranes were blocked with Blocking One (Nacalai Tesque) for 1 h at room temperature, washed with Tris-buffered saline containing 0.1% Tween 20 (TBS-T), and then incubated with primary antibody, phospho-Drp1 (Ser616; #3455) or phospho-Drp1 (Ser637; #4867) (Cell Signaling Technology Japan, Tokyo, Japan), overnight at 4°C. After washing with TBS-T, the membranes were incubated with secondary antibody for 1 h at room temperature. The signal was detected with the ECL Prime Western Blotting Detection Reagent (GE Healthcare, Little Chalfont, UK), with GAPDH (Abcam, Cambridge, UK) used as a loading control.

Statistical analysis

Data were analyzed by one-way analysis of variance followed by the post-hoc Tukey test using add-in software for Excel 2012 for Windows (SSRI, Tokyo, Japan). All values are expressed as mean ± SE, and P < 0.05 was considered significant.
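To illustrate the statistical workflow just described, the following Python sketch runs a one-way ANOVA followed by a Tukey HSD post-hoc test on synthetic replicate data. The group names, sample sizes, and values are invented for demonstration, and the original computation used an Excel add-in rather than Python.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate measurements (e.g., average mitochondrial length, um)
rng = np.random.default_rng(2)
groups = {
    "control": rng.normal(12.0, 1.0, size=6),
    "TRAIL_25": rng.normal(9.0, 1.0, size=6),
    "TRAIL_100": rng.normal(3.0, 1.0, size=6),
}

# One-way ANOVA across the three groups
f_stat, p_val = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Post-hoc Tukey HSD for all pairwise comparisons (alpha = 0.05)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```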
Reshaping the educational landscape: During and after the COVID-19 pandemic

The aim of this paper is to describe and analyze the response to COVID-19 and the evolution through different models of online instruction during the pandemic at a large Canadian university. This paper primarily focuses on the approach taken by the Faculty of Education, including the necessary restructuring of processes, organization of the workforce, support configurations, and institutional constraints. The factors that impacted changes in the curriculum are examined. Three distinct phases were identified and compared: 1) remote teaching, 2) fully online instruction using a combination of synchronous and asynchronous approaches, and 3) a diversity of hybrid approaches. The paper highlights a number of challenges experienced with online education during the pandemic. Each one of them presents both barriers and opportunities. The process has made way for a potential transformation of educational practice at North American universities. This will likely come as a combination of increased knowledge and practice of online learning during the pandemic, and a need to reshape traditional institutional structures to reflect the shifted landscape of education. It has opened discussions on equity and accessibility, learner-centered design, and the potential for change in the classroom and educational programming.

Introduction

In 2020, many universities were required to quickly move to remote teaching due to the COVID-19 pandemic. The shock of the events was followed by an attempt to buffer the damage by rapidly replacing in-person teaching with an online classroom (Bryson & Andres, 2020). Some universities were better prepared and had systems in place to support online teaching and learning, while others struggled due to a lack of support, infrastructure, or political ideologies resistant to online or blended learning (Ali, 2020). Most instructors had to change their teaching approach to meet student needs (Núñez & Leeuwner, 2020). These changes came with both positive and negative experiences (Adedoyin & Soykan, 2020; Walsh et al., 2021). According to UNESCO's report on the impact of COVID-19 on higher education institutions, the disruption of education "affected more than 220 million tertiary-level students around the world" (UNESCO, 2021, p. 1). As the pandemic continued, many institutions remained online for over a year. This created a situation of ongoing adjustments and development of online classrooms.

The University of British Columbia (UBC), with twelve Faculties, is one of the world's top research universities, attracting more than $650 million in research funding each year (UBC, 2021). Its Faculty of Education (FoE) is the largest Faculty of Education in the province of British Columbia and ranks within the top 10 faculties in the world. It consists of six academic units, a number of administrative and support offices, as well as different centres and institutes. The two major groups in the student population are those studying in the BEd program (obtaining their teaching certificate) and those in a variety of graduate programs (diplomas, master's, or doctoral studies). In both cases, the students are either preparing to teach or are educational practitioners already in the field. With scholars who have the highest success rate at UBC in applying for research grants, the FoE is a considerable contributor to the university's research activity.
During the first half of the pandemic year, there was a pause in research fieldwork, largely because it was being done with human subjects. This forced the FoE leadership to rethink faculty members' daily work and engagement. The leadership team was committed to the promise that no full-time employee or tenure-track faculty member would lose their job due to the pandemic, similar to the majority of countries in Europe and North America. According to UNESCO's survey (2021), eighteen out of twenty-seven countries reported no reduction in academic and administrative staff employment despite the closure of universities. Still, some radical shifts had to be made in relation to workloads, appointments, and the teaching/research ratio in the time of COVID.

The FoE at UBC has a long history of remote learning, from correspondence courses to the first fully online offerings in 2002. It has established a reputation in the field as a leading Faculty in this area, with fully online master's degree programs offered across Canada and internationally. The main unit responsible for assisting with online delivery in the Faculty is Educational Technology Support (ETS). The ETS team is a small group of professionals dedicated to the Faculty to provide learning design and educational technology support. Because of the Faculty's history with online courses, the sudden switch to fully online delivery might have come more easily than in the rest of the University, but it did not happen with less effort and investment. Despite the good in-house expertise, ETS was not able to handle the fourfold increase in requests. As the landscape during the pandemic kept changing with every academic term, due to newly obtained knowledge, approaches, and skills, the leadership and the ETS unit continued to make modifications and accommodations. Three distinctive phases were identified in this journey through the pandemic: 1) remote teaching, 2) a fully online combination of synchronous and asynchronous instruction, and 3) a diversity of blended models.

Background

UBC has a long history of using learning management systems (LMS), starting with WebCT, which was created in 1996 by Murray Goldberg, a faculty member in computer science at UBC. All credit courses are automatically created in the LMS approximately two weeks after the UBC course calendar has been published. In the Faculty of Education, pre-Covid adoption of today's centrally supported LMS, Canvas, was in the range of 35-40%. Many course shells stayed empty. The majority of the ones used were for fully online courses. This relatively low rate was partially a result of the BC Ministry of Education's requirement that all BEd Teacher Education Certificate courses be offered exclusively in the classroom. No online or blended instruction was allowed. As a result, the majority of the fully online courses that existed were graduate courses. There was no incentive for instructors teaching in the BEd program to learn or use educational technology, despite the obvious demand in the field. Predominantly, the use of learning technologies depended on the enthusiasm, interest, and personal digital literacy skills of individual faculty members. Distance education courses, despite their quality and a consistent increase in student enrollment over the years, were not necessarily seen as on par with in-class courses. In alignment with this, teaching online was also not seen as equivalent to in-person teaching, so it was mainly left to sessional lecturers.
This was not a unique perception of online learning, and thus the expectation that educational professionals would learn digital literacy skills was low. It would be safe to assume that the situation in the FoE, and at UBC in general, was similar to that at other institutions across the globe, where the pre-Covid level of digital literacy skills was concerning. According to a Times Higher Education report (2021) that looked into the situation in the UK, the pandemic-accelerated use of technology exposed the need for a greater level of digital literacy.

The university reacted to the new pandemic situation by making financial support from the government available. It was left to the individual Faculties to identify their needs and decide on the priorities and where the funds were spent. Some Faculties decided to invest in recording equipment and tech-enhanced classrooms, some in hiring graduate students to help out with content upload into the LMS, Canvas. The Faculty of Education, however, took a unique approach to this challenge and opportunity, and decided to hire five learning designers (part-time and full-time contract positions), all graduates of its own Master of Educational Technology (MET) program. The strategic idea was to invest in building internal capacity for the future by developing digital literacy skills among all its members: faculty, students, and staff. Financially, it was more costly to hire highly educated professionals than undergraduate student help. This decision was founded on the premise that it was more important to spend time with instructors demonstrating what good online course design was, and discussing how it could help improve the student learning experience, than to teach them how to upload a file in Canvas or open a discussion forum. Once the "why" was there, the "how" was much easier to accomplish. The assumption was that the focus on this paradigm shift would have a long-lasting effect. With this specific goal in mind and this specific hiring target, the FoE was interested in tracking and measuring how successful this approach to the transition of all courses to online delivery would be and, as a sub-goal, how successful the MET program was in producing learning design and educational technology experts.

Aims

This paper aims to describe the processes and shifts of an Educational Technology Support unit in assisting faculty to transition to online learning during the COVID-19 pandemic. We set out to answer the questions: In what ways, if any, did the ETS unit adapt to the ongoing needs during the COVID-19 pandemic? From the institutional perspective, what lessons does this teach us for future events?

Methods

The paper examines and describes the praxis of an educational technology support team during the period between March 2020 and September 2021, from the first lockdown of the University of British Columbia campus in Vancouver, Canada, to the resumption of in-class instruction after 18 months of remote teaching and learning. We take a pragmatic approach to analyzing the shift through different phases of the pandemic and how they led to today's changed reality in the educational landscape. To review the practice of the unit, we examined how its structure changed (based on adjustments to hiring, who was hired, and the organization of teams) and how this impacted the continuation of instruction. In addition, we analyzed the activities of the unit in relation to the ongoing demands (e.g., emails, workshops, design adjustments to meet the changing challenges). This included the unit's response to student and faculty feedback. The data was organized into semesters (which make up the phases). These were examined to accurately describe the modifications between the phases and compare the differences. Graphs were created for emails, workshops, and the number of courses supported at different points during the pandemic to get a visual representation of the shifts. In addition, the organizational structure of the team was recorded and analyzed to compare the differences from one semester to the next. To explore the findings in depth, the authors also provide the contextual background for the shifts (such as student and staff feedback, funding, and university factors). A narrative approach was adopted for the writing of this paper, as its authors' lived experiences as members of the unit (director and learning designer) make valuable contributions to the understanding of the results.
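To make the semester-based organization of the activity data concrete, the Python sketch below groups hypothetical monthly counts into academic terms and plots them. The numbers, column names, and term boundaries are all assumptions introduced for illustration and do not reproduce the unit's actual records.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly support-activity counts (illustrative only)
records = pd.DataFrame({
    "month": pd.to_datetime(["2020-05", "2020-09", "2021-01", "2021-05", "2021-09"]),
    "support_emails": [320, 910, 700, 450, 520],
    "workshops": [4, 12, 9, 6, 8],
})

def semester(ts: pd.Timestamp) -> str:
    """Map a month to an academic term (assumed term starts: Jan, May, Sep)."""
    starts = {1: "Winter", 5: "Summer", 9: "Fall"}
    start = max(m for m in starts if m <= ts.month)
    return f"{starts[start]} {ts.year}"

records["semester"] = records["month"].map(semester)
by_semester = records.groupby("semester", sort=False).sum(numeric_only=True)
print(by_semester)

# Bar charts per activity type, mirroring the shift-over-time figures
by_semester.plot(kind="bar", subplots=True, layout=(1, 2), figsize=(8, 3), legend=False)
plt.tight_layout()
plt.show()
```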
Results

From the initial pandemic response to moving back to campus, there was a distinct shift between semesters, taking us on a journey from substitution to redefinition of teaching and learning (Puentedura, 2010).

Phase 1: Remote teaching

When the pandemic and the restrictions on face-to-face gathering began, most instructors were halfway through the winter semester (mid-March) and preparing for summer courses. The easiest way to continue teaching was to move their lectures to an online synchronous session. Many simply took their current program and used a web conferencing system to replicate the way they taught in the classroom. The main system available in the Canvas environment was the Collaborate Ultra web conferencing tool. UBC IT made a quick environmental scan and decided to introduce Zoom to its learning technology ecosystem. In record time, staff members were provisioned with Zoom accounts. A number of administrative staff who had lost some of their duties due to the pandemic were tasked with helping ETS. Most of them were quickly trained to provide basic, tier 1 learning technology assistance. They were paired with experienced learning designers and then assigned to a number of programmatic units (Departments, Schools, Institutes, and similar), which included individual instructors and courses (Fig. 1). The remaining members of the support team focused on supporting the existing fully online courses and programs developed pre-Covid. ETS staff increased substantially due to the decisions to re-organize the unit and spend the resource funds on contract expertise. It grew from a five-person unit to a team of 15-20 people at different times of the year. The development of tech skills was accomplished in a number of different ways: by providing ample self-guided instructions (video and text tutorials), by using examples of good course design and good online assessment strategies, by organizing opportunities to learn about tools at workshops and events, but also by strengthening the tech support part of the ETS team. The newly hired learning designers worked together with the experienced members of the team to provide consultation on pedagogical approaches and online teaching. At the same time, they were introduced to the established processes and had to get accustomed to the culture of the workplace. Learning how to operate in a virtual environment and mastering the functionality of the tools while concurrently meeting curriculum requirements was very challenging for instructors.
The idea that "the way I teach in the classroom, I can just do online" soon proved to be a large misconception. The lack of required digital literacy skills during hiring for teaching positions, and thus neg-lecting those skills, showed to be a weakness in the process of moving to a remote classroom. The initial invitations for consultations with learning designers were only marginally taken upon. The replication of classroom practice into a synchronous Zoom session, that was the most frequent way of "accelerated migration to online, " was a major disaster. The students felt lost and overwhelmed, and they expressed this in their feedback. However, being a new reality for everyone, there was a lot of flexibility and a lot of understanding of each other's mistakes, both on the instructor and student side. We were all learning, and learning quickly. Phase 2: Evolution to designing for online learning Moving into preparation for the winter courses (September 2020 semester) there were a few significant adjustments. Initial student feedback from the remote learning sessions, requested increased asyn-chronous activities, less synchronous time, standardization of the environment, online engagement tools with more time allotted for activities, etc. A couple of months into the pandemic, the FoE leadership team provided some strong recommendations: • The workload of faculty members who ended up with reduced research activities was to be shifted to authoring online courses. • Adoption and implementation of guidelines for course development, which were based on the criteria that took into account the program enrolments, tuition revenue, the nature of the course (required or elective), the likelihood that the course could be offered online post-covid, the fact that the online version already existed, and so on. The guidelines were to help Departments and Schools look at their courses and their needs differently and prioritize appropriately. • Each course author was to be connected with a learning designer for support. • The enhancement of skills and competences of the staff members who temporarily changed their role to be continued, and further adjustments to their tasks be made in order to increase the efficiency of the support provided. There was a common resolve by the academic and administrative staff to work collaboratively and respond to the challenges of the pandemic in a way that would demonstrate the institutional care for its student's learning. ETS also made changes in its internal organization. After a few months of being frenzied, where the experienced support team spent many hours on training staff from different roles, such as program and administrative assistants and members of the marketing team to become tech support, the challenge of this approach became apparent. Those whose background or interest was not in learning technologies had a very hard time with this partial change of their duties. They were expected to learn in a couple of weeks what others were learning in months or years and this was not viable. The team had to be re-organized again. Mini-teams were created, where a few learning designers were partnered with strong and experienced learning technologists (LTs) in order to respond to the demands more efficiently. 
Based on their personal strengths and capacity, the administrative staff members either stayed with the learning technology support team or their skills were used in a different way, such as support involving specialized tasks more in line with their personal strengths. For example, a marketing and communication coordinator focused on providing frequent updates on the ETS website, which became the centre for critical information. She was in charge of coordinating with the LD and LT teams around upcoming professional development offerings, taking care of publishing, administering, and promoting the sessions. The student employees served as runners, the first-line responders to inquiries and requests. They were able to quickly triage the questions between mini-teams and members. They were also tasked with learning and mastering either new tools that UBC IT was provisioning to help with online learning, such as MS Teams, or those that had a sudden increase in use, like collaborative or interactive tools such as H5P (see Fig. 2). The team meetings turned into internal professional development sessions where experience and issues were shared and new knowledge was presented. The use of Mattermost (an open-source online chat service) became a critical lifeline for communication among ETS team members. Messages were flying from one channel to another, and probably as many, if not more, through private direct messaging. These replaced casual conversations and provided a space for instant and ongoing communication, including checking on tasks and knowledge exchange. It also enabled the team to stay tightly connected and collaborate throughout the day despite the remote locations. The pandemic turned out to have a longer life than summer 2020, and the promise that everyone would be back in the classrooms by September did not materialize. However, preparation for September looked much better organized and unified. The number of sections for the Fall semester (more than 400) still seemed daunting in comparison to the number of learning designers (equivalent to 6 full-time employees). A number of strategies were implemented to assist with creating a quick but effective design of the courses that was also conducive to online learning. A "course starter" was created by the LDs and applied to most sections. This allowed for a basic format that could be adjusted and personalized to the course needs. Each course had a designated learning designer and learning technologist who would work with the course author to create the online course. This involved an initial meeting where the LD could consider the syllabus and discuss the course with the author to understand its unique needs, such as the learning outcomes, the instructor's digital literacy skills, student needs, and assessments. From an initial conversation and examination of the course syllabus, the modules, assignments, activities, and other aspects of the course could be developed. LDs were assigned based on their previous experience working with different Departments and faculty members. This was important, as relationship building was found to be one of the most crucial aspects of supporting faculty in developing their online courses and adopting new approaches. These adjustments led to a more efficient process with courses that could include a variety of online styles. Thus, despite an increase in the number of courses for the September term (Fig. 3), the quality of the courses and the process of design and development improved.
As observed in the study by Nworie (2021), this may also be due to faculty members recognizing that they can teach online, that such teaching can be of good quality, and that in some instances it can be better than in-person teaching. At the same time, there was a general recognition that designing a good online learning experience required time and investment.

Fig. 3. Courses and sections supported over the pandemic.

In addition to the one-on-one interactions, LTs also addressed ongoing issues through the common unit email, responding to requests. This became increasingly important, as there was a sharp rise in emails from the start of the pandemic, peaking at the start of the winter term (Fig. 4). Over this time, there was a shift to increasingly more complex requests and a greater consideration for pedagogy. The team began to explore more interactive technologies and designs (such as using H5P) and gave greater consideration to tools that could enhance collaboration (various peer review technologies and multiple options for online discussions and interaction, such as Padlet). In total, ETS supported almost 700 courses and over 1300 sections from May 2020 to the end of April 2021, compared to much smaller numbers in the previous year. Over this time, most support activities increased extensively (Fig. 5).

Phase 3: Transformation to hybrid opportunities

After 18 months of remote work and remote instruction, some institutions, including UBC, announced going back to "normal" in-person teaching in September 2021. Equipped with new skills, instructors, staff, and other members of the university community started to shift their thinking to how these online spaces could be used in conjunction with in-person teaching. These considerations led to increased discussion and implementation of hybrid courses. Five different hybrid approaches were identified at the university (CTLT, 2020):
• Concurrent Hybrid: On-campus and remote students attend class synchronously. Instruction and class interactions are livestreamed to allow two-way interaction.
• Asynchronous Hybrid: On-campus instruction is recorded and made available for remote students to access asynchronously at another time (no livestreaming).
• Sequential Hybrid: On-campus and remote students meet in separate, consecutive sessions where instruction is repeated. When students are not in a scheduled class meeting, they are assigned asynchronous work online.
• Multi-Section Hybrid: Online and on-campus instruction occur in separate sections, potentially taught by different instructors.
• Alternating Hybrid: All students are required to attend some on-campus instruction but attend in smaller groups to comply with health guidelines. When not on campus, students engage in learning activities online.
During summer 2021, the AV team worked on enhancing the classrooms, where possible, by adding audio and video devices, building mobile recording kits, and assembling AV Zoom carts that could be booked by instructors. There were still a number of students and faculty members who could not attend classes for various reasons (e.g., international students with visa issues, vaccination status, immuno-compromised instructors or students), so in preparation for September 2021, ETS had to take into consideration the possibility of all five hybrid models being implemented. The final government financial support for COVID-related expenses was used in the FoE to support the increased hybridity and experimentation with all types of teaching and learning.
Graduate students were hired and assigned to hybrid courses to help with the hardware and software available in the classrooms and with booking and handling the Zoom carts and mobile recording kits, to facilitate the participation of remote students, and to work with the ETS team to deliver the synchronous/asynchronous, mostly graduate, programs. The number of Teaching Assistants (TAs) increased, as instructors needed help with the "classroom management" of hybrid courses and with differentiated instruction. The application of hybrid instruction is not without its own challenges. Not having time to prepare for this new type of teaching, and declaring courses "hybrid" ad hoc, left students not always knowing where their classes were held (online or in the classroom), in what way (synchronous or asynchronous), and how to manage their time between and across the modalities. With the re-opening of face-to-face courses at a post-COVID-19 university, there has been a continued need to rethink education and instruction in the aftermath of a year of online teaching and learning. Most courses within the Faculty had a variation of an online course prepared during phases 1 and 2. This created a unique opportunity to take advantage of these materials to design personalized courses that use some form of hybrid/blended learning. The BC Ministry of Education went back to the in-person teaching requirements for the Teacher Education program, which reduced the flexibility of instruction for this part of the student population. Despite this, the use of the LMS continued, even in its basic, limited form, as a space where students go to find their readings and submit assignments. The commitment for other face-to-face courses to have a partial integration of online components has been increasing. These range from simple use of the LMS, as described for the Teacher Education program, to creating additional opportunities for engagement, such as by using discussions, announcements, and other tools for communication and interaction. In addition, the change in both student and staff perceptions of online learning, its affordances, and its possibilities now plays a role in future course design.

Discussion

Over the year and a half of the initial pandemic response, there was a noticeable shift between different phases in the faculty: remote teaching, designing for online learning, and then re-envisioning instruction to include diverse hybridized models as the faculty moved back to in-person courses. Looking at the transformations in the way technology was used to support teaching and learning over this time, the phases that emerged align well with the SAMR (Substitution, Augmentation, Modification and Redefinition) model (Puentedura, 2010). Substitution occurred at the start of the pandemic through the use of online conferencing tools to simulate face-to-face teaching practice. This form of technology integration is at a very basic level and can often be improved, since it does not bring additional benefits to the learning. Substitution was fine for an emergency response. During the second phase of pandemic learning design, a process of augmentation and modification occurred through thoughtful consideration of the tools and their affordances to support learning. It is at this juncture that there is a possibility of moving forward into redefinition, where technology allows for the creation of new tasks and new ways of thinking about the relationship between teaching, learning, and technological tools (Puentedura, 2010).
There were certain advantages to online learning that were noticed by educators and administration, such as flexibility, accessibility, global reach, and equity (Xie et al., 2020). With most higher education instructors now having experience in creating or adapting a course for an online environment and teaching online, the standard of what good online teaching is has risen. Student expectations of excellence will be different (Nworie, 2021). What we can expect to see in the near future is a change in institutions' strategic plans and policies in order to respond to those expectations. As Nworie states, "universities should develop plans that will guarantee students' readiness to learn online not only in normal times but also in the event of disruptions to classroom instruction, ensuring that there are no roadblocks to synchronous and asynchronous online learning" (2021). The choice of investing in quality design and enhancing faculty digital literacy increases the opportunities for digital transformation (Hodges & McCullough, 2021).

Challenges and opportunities

A number of the challenges experienced with online education during the pandemic still remain and should be considered as universities move forward. Each of them presents both barriers and opportunities. This past year highlighted the importance of digital literacy for faculty, staff, and students. Teachers with previous training in online teaching were better equipped to meet the emergency change to online (Walsh et al., 2021; Times Higher Education, 2021). Universities could take this opportunity to create ongoing training to increase digital literacy skills across faculties and departments. Educational technology support units within universities can play an important role in these professional development programs. On the other hand, there were a number of systemic problems surrounding technology. While online teaching provided increased access for students in remote locations or those for whom being in class physically was problematic, limited access to the Internet and technology by students in lower-income schools or districts during the pandemic widened the existing first-level digital divide (Goldstein, 2020; Scheerder, van Deursen, & van Dijk, 2017; Stellmann, Song, & Tucker, 2021; UNESCO, 2021). How can universities and other educational institutions increase access and accessibility and reduce this growing divide? Social isolation and disconnection from in-person engagement had a strong impact on students' well-being (Grajek & Sobczyk, 2021; Lukács, 2021; Schlesselman, Cain, & DiVall, 2020). The pandemic had negative effects on studies, relationships with family and friends, physical activity, financial situation, health, life satisfaction, and so on. As a consequence, universities and their academic units, such as the FoE, invested in building resources to support students and instructors and rapidly adopted and implemented tools that support networking and communication (Crawford, 2020; Henrich, 2020). A number of instructors, in informal conversations with ETS team members, reported higher satisfaction with teaching online when they spent more time getting to know their students. The pandemic has brought to the forefront the values and priorities we, as humans, tend to forget, driven by work, curriculum, and deadlines. It has highlighted the need to reshape, once again, the notions of learner-centred design.
The intensive months of remote Zoom classes in summer 2020 demonstrated that synchronous online learning could be extremely exhausting. Zoom fatigue became a concern, and with it came approaches to creating a better experience, as well as questions about what these environments mean for learning (Bailenson, 2021; Hausknecht & Lim, 2021). The fall and winter semesters of 2020 and 2021 revealed new possibilities for using virtual spaces for learning, and the amount of work needed for their design. The mixture of both in-person and online classes in Fall 2021 brought confusion and disorganization, demonstrating that we have yet to learn how to prepare for hybrid instruction. Instructors who are willing to apply the advantages of learning technologies and include remote participants together with those in the classroom are challenged by inadequate infrastructure and a lack of appropriate technology to support differentiated instruction, and they face institutional constraints, such as policies that do not recognize the changed landscape. The traditional 13-week-per-semester classroom course structure is difficult to change despite existing alternative models, such as block teaching at Victoria University in Australia, where Trish McCluskey, the Associate Provost, Learning and Teaching, has been working on building sustainable programmes, applying agile approaches to a radical reconceptualization of the traditional university curriculum since 2018 (Ambler, Solomonides & Smallridge, 2021). It will take time for institutions to catch up with the new, more fluid and flexible reality. What was learned quickly has to be re-examined, including institutional policies and structures, our own pedagogical paradigms, redefining how we work and learn, and discovering what is sustainable in the long run and what we want our future to be. The changes need to be holistic and systemic.

Limitations and future directions

This paper outlines the process of a specific educational technology support unit at one university. Thus, the findings are not applicable to every institution worldwide. In addition, the paper is written through the lens of two members of the team. This provided a contextual perspective and in-depth knowledge of the process, but it also reflects their views. Future studies could conduct surveys amongst various educational technology support units within the university or across universities to compare their experiences of adapting to the pandemic.

Conclusion

As we move into a changed reality, it is difficult to determine how the lessons learned over this time will unfold. It is apparent that universities need to be prepared for uncertain events that disrupt traditional approaches and structures. The pandemic has highlighted the importance of having solid systems to support online and hybrid learning. Similar to Adedoyin and Soykan's (2020) suggestion, we believe that emergency remote teaching was not ideal (although it was necessary), and that universities will consider the lessons learned and move towards more well-designed online courses and hybrid approaches. There is a difference between a quick transition to synchronous learning, or designing a course in an extremely limited time with limited readily available resources, and a well-planned, full course development process, working with learning designers and tech support staff, with adequate preparation of students for a different way of learning (Nworie, 2021).
During the last year and a half, many institutions have been thrown into a world of online teaching and learning. With the struggles and pitfalls also come opportunities to re-envision and re-define the role of technology in enhancing teaching and learning. Moving forward, it is up to faculties and institutions to consider what this reshaping will look like for their needs. This is a time when there is an opportunity to embrace innovation and the promotion of digital literacy for all members of the education environment.
The reactor antineutrino anomaly and low energy threshold neutrino experiments

Short distance reactor antineutrino experiments measure an antineutrino spectrum a few percent lower than expected from theoretical predictions. In this work we study the potential of low energy threshold reactor experiments in the context of a light sterile neutrino signal. We discuss the perspectives of the recently detected coherent elastic neutrino-nucleus scattering in future reactor antineutrino experiments. We find that the prospects of improving the current constraints on the mixing with sterile neutrinos are promising. We also analyse the measurements of antineutrino scattering off electrons from short distance reactor experiments. In this case, although the statistics are not competitive with inverse beta decay experiments, the restrictions play an important role when compared with the Gallium anomaly. At the same time, we also discuss in more detail the case of a different prescription for the reactor antineutrino flux as a solution to the so-called reactor anomaly.

After the recent evaluation of the antineutrino spectrum by Daya Bay [36], the need for a better understanding of the spectrum has been pointed out. Moreover, the possibility that the reactor anomaly can be solved by a re-evaluation of the antineutrino flux has also been considered [37]. Since the data in the reactor signal for sterile neutrinos come from IBD experiments, it is interesting to consider alternative detection technologies as a complementary test of this anomaly. For this reason we study here the current data from neutrino electron scattering, as well as the prospects of CENNS.

II. ANTINEUTRINO ELECTRON SCATTERING MEASUREMENT

In this section we concentrate our study on experiments that use antineutrino scattering off electrons as the detection process. For this purpose, we have reanalyzed the experimental results, using the current prescription for the reactor antineutrino flux [34], to obtain a restriction on the mixing parameters of a sterile neutrino. Following this approach, the effective survival probability for short baseline antineutrino experiments in the 3+1 mixing scheme can be written as [43]

P^{SBL}_{\bar{\nu}_e \to \bar{\nu}_e} = 1 - \sin^2 2\theta_{ee} \, \sin^2\!\left( \frac{\Delta m^2_{41} L}{4 E_\nu} \right),

where \Delta m^2_{41} = m_4^2 - m_1^2, L is the reactor-detector distance, and E_\nu is the antineutrino energy. The expected number of events, in the presence of a fourth, sterile, neutrino state, will be given in this case as

N = \mathcal{N} \int dE_\nu \, \lambda(E_\nu) \, P^{SBL}_{\bar{\nu}_e \to \bar{\nu}_e}(E_\nu) \int dT' \, R(T, T') \, \frac{d\sigma}{dT'}(E_\nu, T'),

where \mathcal{N} is a normalization containing the number of target electrons, the total antineutrino flux, and the exposure time, and \lambda(E_\nu) stands for the antineutrino spectrum; for energies above 2 MeV, this spectrum has been taken according to Ref. [34], while when we need to include energies below 2 MeV, we have used the spectrum computed in Ref. [44]. R(T, T') is the resolution function for the given experiment, P^{SBL}_{\bar{\nu}_\alpha \to \bar{\nu}_\alpha} is the effective survival probability as given in Eq. (1), and d\sigma/dT is the differential cross section for antineutrino scattering off electrons, given as [45]

\frac{d\sigma}{dT} = \frac{2 G_F^2 m_e}{\pi} \left[ g_R^2 + g_L^2 \left( 1 - \frac{T}{E_\nu} \right)^2 - g_L g_R \frac{m_e T}{E_\nu^2} \right],

where m_e stands for the electron mass and G_F is the Fermi constant. In this expression, g_L = 1/2 + sin^2 θ_W and g_R = sin^2 θ_W are the usual Standard Model couplings, with their roles interchanged with respect to the neutrino case because we deal here with antineutrinos.
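To make these expressions concrete, the following minimal Python sketch (our own illustration, not code from any of the cited analyses; the function names are ours) evaluates the antineutrino-electron differential cross section with the couplings quoted above, together with the 3+1 short-baseline survival probability:

```python
import numpy as np

G_F = 1.1663787e-5         # Fermi constant [GeV^-2]
M_E = 0.51099895e-3        # electron mass [GeV]
S2W = 0.23126              # sin^2(theta_W)
G_L, G_R = 0.5 + S2W, S2W  # couplings as defined in the text
HBARC2 = 0.3894e-27        # (hbar c)^2 [cm^2 GeV^2]: converts GeV^-3 -> cm^2/GeV

def dsigma_dT(E_nu, T):
    """dsigma/dT for nubar_e + e- elastic scattering [cm^2/GeV];
    antineutrino case, hence g_L <-> g_R relative to neutrinos.
    E_nu and T in GeV."""
    return (2.0 * G_F**2 * M_E / np.pi) * (
        G_R**2 + G_L**2 * (1.0 - T / E_nu)**2 - G_L * G_R * M_E * T / E_nu**2
    ) * HBARC2

def p_surv(E_nu_MeV, L_m, sin2_2theta, dm2_eV2):
    """3+1 short-baseline survival probability (standard 1.267 prefactor
    for dm2 in eV^2, L in m, and E in MeV)."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_m / E_nu_MeV)**2
```

Folding these two functions with a reactor spectrum and the resolution function of a given experiment over its recoil-energy window reproduces the structure of the event-rate integral above.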
Several experiments using neutrino electron scattering as the detection reaction have been performed over the years; some of them have searched for a non-zero neutrino magnetic moment [46]. The experiments for our analysis are TEXONO, MUNU, Rovno, and Krasnoyarsk. The most recent experimental result has been given by the TEXONO Collaboration [9], which has reported the measurement of ten bins with an electron recoil energy between 3 and 8 MeV. The energy resolution for this experiment was σ(T) = 0.0325 √T [47]. A previous experiment, with a lower threshold, was performed by the MUNU Collaboration [48]. In this case, the error in the electron recoil energy was considered to be σ(T) = 0.08 T^0.7 [49]. We also considered the Rovno [7] and Krasnoyarsk [6] results. For these experiments, the fuel proportions, as well as the electron recoil energy windows, are shown in Table I. We have performed a goodness-of-fit analysis for the experiments quoted above. After performing the combined fit using the four reactor experiments, we have obtained the restriction on the sterile oscillation parameters, sin^2 2θ_ee and Δm^2_41, shown in Fig. (1). We also show in this figure the allowed regions for the Gallium anomaly, recomputed from Ref. [50], taking into account the recent measurements of the Gamow-Teller transitions represented by the Gallium-FF, Gallium-HF, and Gallium-HK cases. It is possible to notice that, although the restriction is not competitive with the signal reported by IBD experiments, the current resolution is enough to constrain a small region of the Gallium anomaly.
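The statistical procedure behind such exclusion regions can be sketched schematically as a statistics-only χ² scan over the sterile parameters; the real fits also include systematic uncertainties, backgrounds, and the fuel composition of each reactor, all omitted here, and predict() is a hypothetical placeholder for the binned prediction built from the event-rate integral above:

```python
import numpy as np

def chi2_stat(n_obs, n_pred):
    """Statistics-only chi^2 for binned counts (Gaussian approximation)."""
    n_obs, n_pred = np.asarray(n_obs, float), np.asarray(n_pred, float)
    return np.sum((n_obs - n_pred)**2 / n_obs)

def scan(n_obs, predict, s2t_grid, dm2_grid):
    """Delta chi^2 surface on a (sin^2 2theta, dm^2) grid; predict(s2t, dm2)
    must return the binned event prediction for those parameters."""
    chi2 = np.empty((len(dm2_grid), len(s2t_grid)))
    for i, dm2 in enumerate(dm2_grid):
        for j, s2t in enumerate(s2t_grid):
            chi2[i, j] = chi2_stat(n_obs, predict(s2t, dm2))
    return chi2 - chi2.min()   # Delta chi^2 relative to the best fit
```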
III. PERSPECTIVES FOR COHERENT NEUTRINO NUCLEUS SCATTERING IN REACTOR EXPERIMENTS

CENNS is another interesting process with which to explore physics beyond the Standard Model. This interaction was proposed more than four decades ago within the SM context [22,51]. Different collaborations and experimental proposals have considered the possibility of detecting coherent neutrino-nucleus scattering [52-55]. Recently, the COHERENT Collaboration achieved the first detection of CENNS, opening a promising new era of low energy neutrino experiments. In this section we will study four different proposals that plan to use a reactor as their antineutrino source. They are the TEXONO, MINER, RED100, and CONNIE experiments, which we describe briefly in what follows.
• The TEXONO Collaboration has proposed the use of high-purity germanium-based detectors, with a threshold energy of T_thres ∼ 100 eV [52,56]. The Collaboration expects to develop a modular detector and reach a 1 kg target mass. The reactor flux would come from the Kuo-Sheng nuclear power plant, and the detector would be located 28 m away from the reactor. For a quenching factor Q_f = 0.25, the expected number of events would be 4000 kg^-1 year^-1 [52].
• The MINER Collaboration will use a detector made of 72Ge and 28Si in a 2:1 proportion, with a threshold energy T_thres ∼ 10 eV. A TRIGA-type pool reactor will deliver an antineutrino flux with an average fuel proportion of (235U : 238U : 239Pu : 241Pu) given by (0.967 : 0.013 : 0.02 : 0.001) [54]. With this special type of reactor, the detector can be located at a distance of 1-3 m from the source. An event rate of 5-20 kg^-1 day^-1 is forecast for this configuration [57]. In our simulations we will consider a 20 kg 72Ge detector with one year of data taking at an event rate of 5 events kg^-1 day^-1.
• The Kalinin power plant also has a program to detect CENNS. At least two different options appear in the literature: a germanium detector, νGeN [58], and a liquid-xenon detector, RED100 [59]. We focus on the xenon case, as this material has been of interest to different experimental groups [60] and it is a different target, with an energy threshold of T_thres ∼ 0.5 keV [61]. The expected distance to the Kalinin reactor is about 15 m, and 433 events per day are expected. The expected fiducial mass is 100 kg [59]. As in the previous proposals, we consider one year of data taking.
• The CONNIE Collaboration [53] is currently working at the Angra-2 reactor using charge-coupled devices (CCDs) as detectors, at 30 m from the reactor. The CCDs have an energy threshold of 50 eV. Although the expected number of events should be small, owing to the CCDs' low mass, the high resolution of the detectors may help in the detection of CENNS. In this work we will consider 100 events as a benchmark.

In order to calculate the number of events for any of the above proposals, we use the following expression for the cross section:

\frac{d\sigma}{dT}(E_\nu, T) = \frac{G_F^2 M}{\pi} \left( 1 - \frac{M T}{2 E_\nu^2} \right) \left[ Z g_V^p + N g_V^n \right]^2 F^2(q^2);

here, M is the mass of the nucleus, E_\nu is the neutrino energy, T is the nucleus recoil energy, F(q^2) is the nuclear form factor, and the neutral current vector couplings (including radiative corrections) are given by [24]

g_V^p = \rho^{NC}_{\nu N} \left( \tfrac{1}{2} - 2 \hat{\kappa}_{\nu N} \hat{s}^2_Z \right) + 2\lambda^{uL} + 2\lambda^{uR} + \lambda^{dL} + \lambda^{dR},
g_V^n = -\tfrac{1}{2} \rho^{NC}_{\nu N} + \lambda^{uL} + \lambda^{uR} + 2\lambda^{dL} + 2\lambda^{dR},

where ρ^{NC}_{νN} = 1.0082, ŝ^2_Z = sin^2 θ_W = 0.23126, κ̂_{νN} = 0.9972, λ^{uL} = -0.0031, λ^{dL} = -0.0025, and λ^{dR} = 2λ^{uR} = 7.5 × 10^-5 [62]. We have checked that, for a first analysis of the expected sensitivity to a sterile neutrino signal, the corresponding form factors, F(q^2), do not play a significant role, and therefore we have taken them as unity in what follows. For estimating the number of expected events (SM) in the detector, we use the expression

N_{events} = t \, \phi_0 \, \frac{M_{detector}}{M} \int dE_\nu \, \lambda(E_\nu) \int_0^{T_{max}(E_\nu)} dT \, \frac{d\sigma}{dT}(E_\nu, T),

where M_detector is the mass of the detector (so that M_detector/M counts the target nuclei), φ_0 is the total neutrino flux, t is the data-taking time period, λ(E_ν) is the neutrino spectrum, E_ν is the neutrino energy, and T is the nucleus recoil energy. The maximum recoil energy is related to the neutrino energy and the nucleus mass through the relation T_max(E_ν) = 2E_ν^2/(M + 2E_ν). In all cases we will consider one year of data taking. For the oscillation to a fourth, sterile family, we will consider the two-family case in vacuum, where the number of events is

N_{events} = t \, \phi_0 \, \frac{M_{detector}}{M} \int dE_\nu \, \lambda(E_\nu) \, P^{SBL}_{\bar{\nu}_\alpha \to \bar{\nu}_\alpha}(E_\nu) \int_0^{T_{max}(E_\nu)} dT \, \frac{d\sigma}{dT}(E_\nu, T).

In the above equation, P^{SBL}_{\bar{\nu}_\alpha \to \bar{\nu}_\alpha} represents the neutrino survival probability as expressed in Eq. (1). The differential cross section has just been discussed above, and the antineutrino flux will depend on the specific reactor under consideration. With this expression we can make a forecast for different experimental setups.
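As a rough numerical companion to the expressions above, the sketch below (an illustration under stated assumptions, not the collaborations' code) evaluates the SM CENNS differential cross section with F(q^2) = 1 and the double integral for the expected number of events; the weak charges g_V^p ≈ 0.0306 and g_V^n ≈ -0.5120 are computed from the couplings quoted in the text, and the spectrum passed in is assumed normalized to unity so that the total flux enters through φ_0:

```python
import numpy as np
from scipy import integrate

G_F = 1.1663787e-5            # [GeV^-2]
HBARC2 = 0.3894e-27           # [cm^2 GeV^2]
GV_P, GV_N = 0.0306, -0.5120  # from the couplings quoted in the text
KG_PER_GEV = 1.783e-27        # 1 GeV/c^2 in kg

def dsigma_dT_cenns(E, T, Z, N, M):
    """SM coherent nu-nucleus cross section with F(q^2) = 1 [cm^2/GeV];
    E, T, M in GeV."""
    Qw = Z * GV_P + N * GV_N
    return (G_F**2 * M / np.pi) * Qw**2 * (1.0 - M * T / (2.0 * E**2)) * HBARC2

def n_events(phi0, spectrum, m_det_kg, M, Z, N, T_thr, t_s, p_surv=lambda E: 1.0):
    """N = N_targets * phi0 * t * int dE lambda(E) P(E) int dT dsigma/dT.
    `spectrum` is assumed normalized to unity; phi0 carries the total flux."""
    n_targets = m_det_kg / (M * KG_PER_GEV)
    def inner(E):
        T_max = 2.0 * E**2 / (M + 2.0 * E)
        if T_max <= T_thr:
            return 0.0
        val, _ = integrate.quad(lambda T: dsigma_dT_cenns(E, T, Z, N, M),
                                T_thr, T_max)
        return spectrum(E) * p_surv(E) * val
    total, _ = integrate.quad(inner, 1.8e-3, 10e-3)  # illustrative reactor window [GeV]
    return n_targets * phi0 * t_s * total
```

Passing in the survival probability from Eq. (1) instead of the default gives the sterile-neutrino case.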
We will consider the cases of the MINER, RED100, and TEXONO proposals with the fluxes and thresholds mentioned above. We will assume that each experiment will measure exactly the standard prediction for the three-active-neutrino picture. With this hypothesis we will obtain an expected χ² analysis assuming only statistical errors. The result of these computations for the MINER Collaboration is shown in Fig. (2), where we have considered two different baselines of 1 m and 3 m. Since we are using only statistical errors, our analysis can be considered very optimistic. In order to consider a more realistic counterpart, we have also shown in the same figure the case where the detector can only achieve a 50% efficiency. We can notice that for a baseline of 1 m the MINER Collaboration could exclude the current best fit point of the sterile neutrino analysis [64]. A similar analysis was done for the case of the RED100 proposal, where we have considered the Kalinin nuclear power plant as the antineutrino flux source. We show in Fig. (3) the cases of two different baselines and two possible efficiencies. The prospects of improving the current constraints on the mixing with a sterile neutrino are also promising in this case, despite the relatively high detection energy threshold. We have also analyzed the case of the TEXONO proposal. The results are shown in Fig. (4). As in the previous cases, we have considered different possibilities for this proposal. In particular, we take into account different quenching factors for the detector. This factor represents the ratio of the electron recoil to the nucleus recoil energy [65], which gives an important correction, since the detector response to a nuclear recoil is different from the response obtained with electron calibration sources. The quenching factor is given by

Q_f = \frac{E_{ee}}{E_{Nr}},

where E_ee represents the electron equivalent energy and E_Nr is the nuclear recoil energy. In the case of the TEXONO experiment, we calculated the expected number of events for the quenching factors Q_f = 1 and Q_f = 0.2.
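Since the quenching factor simply rescales energies, converting an electron-equivalent analysis threshold into the true nuclear-recoil threshold used in the recoil integral is a one-line operation; a small sketch (ours, with illustrative numbers):

```python
def nuclear_recoil_threshold(E_ee_thr, Q_f):
    """Invert Q_f = E_ee / E_Nr: the nuclear-recoil threshold that
    corresponds to a given electron-equivalent threshold."""
    return E_ee_thr / Q_f

# A 0.1 keV_ee threshold read off the detector corresponds, for Q_f = 0.2,
# to a 0.5 keV_nr threshold T_thr in the recoil integral sketched above.
print(nuclear_recoil_threshold(0.1, 0.2))  # -> 0.5
```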
The regions of mixing angle and squared-mass splitting favored by different combinations of quenching factors and detector efficiencies are shown in Fig. (4). The results are in agreement with the previous work of Ref. [31] and show additional cases with a different quenching factor. The expectations for this proposal are competitive with those of the MINER and RED100 proposals, as can be seen from Figs. (2) and (3). We conclude this section by comparing the expected signal for these proposals in two very different situations. Recently, the theoretical estimates for the antineutrino flux have been under deep scrutiny (see for instance [37,66]), and the reactor anomaly might be solved by a re-evaluation of the neutrino fluxes. In this case, it is also possible that the CENNS experiments give a confirmation of this result, especially if several CENNS experiments with different baselines are performed, as seems to be the case. This situation is illustrated in Fig. (5), where we show the antineutrino rate that would be measured by these proposals if a 5% decrease in the 235U flux is considered [37] (without any sterile effect). On the other hand, we also show the expected ratio for the same experiments in the case that a sterile neutrino is responsible for the deficit. For this case we consider Δm^2 = 1.7 eV^2 and sin^2 2θ_ee = 0.062, according to the most recent fit of antineutrino disappearance data [64]. As expected, the different baselines will give different ratios for the sterile solution. The situation is different if the reactor anomaly is due to a correction in the antineutrino flux, where the expected number of events will differ from the oscillation explanation, especially for the RED100 and the MINER (1 m) cases. In this case, as expected, the complementarity of different experiments using different baselines, thresholds, and fuel proportions could be very helpful in discriminating the real explanation of the reactor anomaly.

Fig. 5. Expected ratios for the case of a sterile neutrino with sin^2 2θ_ee = 0.062 and Δm^2 = 1.7 eV^2. The blue dots give the ratio for the case of a 5% decrease in the 235U flux, as proposed in a recent article [37]. The black line represents the average probability for a mean energy of 4 MeV, and the dotted black curve corresponds to an energy of 6.5 MeV, both with an energy resolution of 15%. The error bars account for the statistical errors.

IV. CONCLUSIONS

In this work we have studied the reactor anomaly in the context of future CENNS experiments and of antineutrino-electron scattering data from short baseline reactor neutrino experiments. Concerning antineutrino-electron scattering, we conclude that this interaction can give only limited information due to the relatively poor statistics, although it is possible to constrain a small region of the Gallium anomaly. On the other hand, the recent observation of CENNS by the COHERENT Collaboration strongly motivates the further exploration of physics beyond the Standard Model in this context. We show that CENNS experiments could play an important role in the determination, or exclusion, of the sterile signal. In particular, the RED100, TEXONO, and MINER proposals could test the current best fit point of the sterile allowed parameter space. Regarding the need for a precise antineutrino flux determination, CENNS is particularly attractive, since the detection technique is different from that of IBD detectors. In this case, we obtained the ratios between predicted and expected data in two different scenarios: considering sterile neutrinos, and taking a decrease in the antineutrino flux, as suggested by some recent works. Both situations could be of interest in explaining the reactor antineutrino anomaly.
Cryptic e1a2 BCR-ABL1 Fusion With Complex Chromosomal Abnormality in de novo Myelodysplastic Syndrome

Dear Editor,

MDS is a clonal disorder marked by ineffective hematopoiesis, cytopenias, clonal chromosomal abnormalities, and a variable predilection to undergo clonal evolution to AML. Multiple genetic aberrations occur during the clonal evolution of MDS, and in the majority of cases, somatic mutations result from the deletion of all or part of a chromosome [1]. BCR-ABL1 is a hybrid of the ABL1 gene on chromosome 9 and the BCR gene on chromosome 22. The fusion protein encoded by this gene has strong tyrosine kinase activity and is involved in the pathogenesis of several hematologic disorders. BCR-ABL1 genes are categorized into three types, based on differences in the BCR gene's breakpoint, which appear to be related to the disease phenotype. The BCR-ABL1 fusion gene is found in CML and in some cases of acute lymphoblastic leukemia [2]. However, reports of BCR-ABL1-positive MDS cases are extremely rare [3-6]. Here, we report an unusual case of MDS progressing to AML with an e1a2 BCR-ABL1 fusion transcript and a complex karyotype, including monosomy 7.

A 65-yr-old man was admitted for evaluation of persistent pancytopenia with 9% blasts in the peripheral blood. He presented with general weakness and breathlessness. Evaluation of the bone marrow (BM) disclosed hypercellularity with mild erythroid hyperplasia, a blast count of 18%, and remarkable dyshematopoiesis (Fig. 1). Immunophenotyping of BM cells using flow cytometry showed that the blasts were positive (>20% of cells) for CD13 (45%), CD33 (97%), CD117 (68%), CD34 (97%), and HLA-DR (43%), and negative for other megakaryocytic and lymphoid markers. Multiplex reverse transcriptase polymerase chain reaction (RT-PCR) evaluation of total RNA isolated from BM cells (HemaVision kit; DNA-Diagnostic, Risskov, Denmark) indicated an e1a2 (p190 BCR-ABL1) rearrangement, confirmed on two independent cDNAs and by direct sequencing (Fig. 2). Quantitative RT-PCR analysis (Real-Q BCR-ABL1 quantification kit; BioSewoom, Seoul, Korea) revealed a BCR-ABL1 to ABL1 transcript ratio of 0.001339977. The fluorescence in situ hybridization signal for the BCR-ABL1 rearrangement was found in one interphase cell out of 296 analyzed cells (Fig. 1D). There was no evidence of other molecular abnormalities, such as mutations in JAK2 or calreticulin. Conventional cytogenetic analysis revealed a complex abnormality without the Philadelphia (Ph) chromosome: 44,XY,del(5)(q31),-7,del(12)(p12),-14,-16,+mar[cp20]. The final diagnosis was the MDS subtype refractory anemia with excess blasts-2 (RAEB-2). The patient died before undergoing a follow-up BM examination, owing to lung cancer, which was found shortly after the MDS diagnosis.

Fig. 1. Morphological and molecular cytogenetic analyses of bone marrow cells. A bone-marrow aspirate from the patient showed dyserythropoiesis (A), dysmegakaryopoiesis (B), and dysgranulopoiesis (C). Fluorescence in situ hybridization with a BCR-ABL1 dual-color ...

Fig. 2. Detection of the p190 BCR-ABL1 fusion transcript by multiplex reverse transcriptase-PCR and direct sequencing. Analysis of RNA samples collected from the patient's bone marrow (A, B). The screening kit produced a single band in the M8 lane (A), and the split-out ...

BCR-ABL1-positive de novo MDS is a very rare disease, and when it manifests with excess blasts, as in this case, it may be confused with the accelerated phase of CML.
However, several features of this case differentiate it from CML. First, our patient did not have organomegalies, such as splenomegaly, or any related symptoms, which are common in CML. Second, our patient did not display basophilia in the peripheral blood or BM, which is common in CML. Third, BM examination showed severe dyshematopoietic features without findings of myeloproliferative neoplasms. Fourth, monosomy 7 and del(5)(q31), identified by chromosomal analysis, are typical aberrations of MDS. It is common for BCR-ABL1-negative MDS to change to BCR-ABL1-positive MDS after disease progression [3-6]. In cases of BCR-ABL1-positive MDS, most patients show excess blasts on the verge of transformation to AML [6-8], which implies that the appearance of BCR-ABL1-positive clones is likely to be closely associated with leukemogenesis and triggers MDS to transform into AML. Jacobsen et al. [9] proposed three possible explanations for the acquisition of a late-appearing Ph chromosome: 1) it may have been present on initial presentation, but the techniques used were inadequate to detect the BCR rearrangement; 2) it may represent further evidence of multistep pathogenesis; and 3) the presence of the Ph chromosome in some cells may represent the development of a new clone of cells in leukemic relapse. In this case, we presume that the third explanation may be fitting. The BCR-ABL1-positive clone was most likely detected on emergence, because the BCR-ABL1 copy number and the BCR-ABL1 to ABL1 transcript ratio on quantitative RT-PCR analysis were found to be very low. Moreover, a typical Ph chromosome was not found in 20 metaphase cells by conventional cytogenetic analysis. To the best of our knowledge, this is the first report of de novo MDS RAEB-2 with the cryptic e1a2 subtype of BCR-ABL1 rearrangement and a complex chromosomal abnormality in Korea. Currently, there is no established treatment method distinguished from the usual MDS treatment. In the future, a more systematic study of this disease is needed with respect to serial biologic monitoring, therapy, and outcome.
Identification of a Chemotherapeutic Lead Molecule for the Potential Disruption of the FAM72A-UNG2 Interaction to Interfere with Genome Stability, Centromere Formation, and Genome Editing

Simple Summary

Pivotal factors that contribute to tumorigenesis were subjected to analysis by molecular modeling. In particular, the FAM72A-UNG2 protein-protein interaction was modeled to predict a potential solution for the treatment of cancer. We screened chemical libraries to identify withaferin B as a lead molecule capable of interfering with the FAM72A-UNG2 interaction, thus opening new therapeutic avenues for cancer.

Abstract

Family with sequence similarity 72 A (FAM72A) is a pivotal mitosis-promoting factor that is highly expressed in various types of cancer. FAM72A interacts with the uracil-DNA glycosylase UNG2 to prevent mutagenesis by eliminating uracil from DNA molecules through cleaving the N-glycosylic bond and initiating the base excision repair pathway, thus maintaining genome integrity. In the present study, we determined a specific FAM72A-UNG2 heterodimer protein interaction using molecular docking and dynamics. In addition, through in silico screening, we identified withaferin B as a molecule that can specifically prevent the FAM72A-UNG2 interaction by blocking its cell signaling pathways. Our results provide an excellent basis for possible therapeutic approaches in the clinical treatment of cancer.

Introduction

Genomic uracil bases may arise from cytosine deamination or from the misincorporation of dUMP residues during DNA replication [1]. The uracil-DNA glycosylase UNG physiologically functions in the base excision repair (BER) mechanism of the cell to replace uracil from U/G mispairs with cytosine, thus preventing genomic mutations [2-6]. It excises unwanted genomic uracil bases using an extrahelical base recognition mechanism, thus preventing possible C-to-T transition mutations that would eventually arise from cytosine deamination [7-9]. The resulting apurinic/apyrimidinic site (AP-site) is considered one of the most common DNA lesions in the genome, and a persistent AP-site can have adverse consequences, as the lesion disrupts many DNA and RNA transactions and leads to cytotoxic strand breaks, mutations, and other forms of genomic instability [1,10,11]. Human UNG exists in two different isoforms, mitochondrial UNG1 and nuclear UNG2, both encoded from the same single 13.5-kb nuclear UNG gene as a result of two separate promoters and alternative splicing [12-14]. While UNG1 and UNG2 share a common conserved catalytic domain, they contain differing N-terminal sequences responsible for their differential subcellular localization. Amino acid (AA) residues 1-92 make up the N-terminus of UNG2; they contain a nuclear localization signal of positively charged residue clusters (K and R residues), rendering UNG2 the primary uracil-DNA glycosylase enzyme in the nucleus [14,15]. Interestingly, in the absence of binding partners, the N-terminal region is, for the most part, without a fixed structure [2,16-18]. UNG2 is rapidly recruited to sites of DNA damage, where its N-terminus can interact with its catalytic site (which binds to uracil) and with chromatin. UNG2 colocalizes with CENP-A at centromeres and other sites of DNA damage in proliferating cells, implying that it is also required for chromosome segregation during mitosis [19].
Family with sequence similarity 72 A (FAM72A) is a novel gene expressed in the brain hippocampus area in proliferating neural stem cells, particularly during the G2/M phase of the cell cycle [20,21]. Most strikingly, humans have four paralogs (FAM72 A-D), whereas all other species express just one ortholog [22,23]. Under pathophysiological conditions, FAM72A is also expressed in various proliferating cancer cells [24,25]. Notably, FAM72A interacts with UNG2 [26,27]. This denotes that the cellular role of FAM72A is as a cooperative partner in genomic BER, ensuring genome integrity and impeding the formation of cancer. Recent data show that decreased levels of FAM72A lead to hyperphysiological UNG2 levels, increased uracil correction, and thus error-free DNA repair. In contrast, the binding of FAM72A to UNG2 antagonizes UNG2 activity and causes UNG2 degradation in B cells, leading to increased levels of genome-wide deoxyuracils and thereby mediating increased levels of U•G mispairs that engage in mutagenic mismatch repair, promoting the error-prone processing of activation-induced cytidine deaminase (AID)-induced deoxyuracils [27,28]. Thus, FAM72A bridges BER and mismatch repair in order to modulate antibody diversification during B cell and antibody maturation [27,28]. Overall, an increased FAM72A level could lead to reduced UNG2 levels and could thus shift the balance of appropriate mutagenic DNA repair, making cells more susceptible to mutations, with possible effects on tumor development [24,25,27-29]. To understand the possible disruption of the FAM72A-UNG2 interaction, the current investigation conducted an in silico prediction of the FAM72A-UNG2 heterodimer protein interaction and the identification of potential chemicals that interfere with the FAM72A-UNG2 heterodimer protein activity for the potential treatment of cancer.

Homology Modeling and Protein Structure Validation of UNG2 by Modeller, I-TASSER and AlphaFold

The FAM72A 3D protein structure was taken from previously designed PDB data [30]. Unfortunately, no suitable UNG2 3D protein structure was available, and the N-terminal UNG2 residues have been described as an intrinsically disordered region. Thus, the UNG2 protein sequence was checked in the National Center for Biotechnology Information (NCBI) and the PDB, and the closest suggested template for a UNG2 3D protein structure model was selected. The UNG2 3D peptide sequence was based on the UNG2 protein sequence (Gene ID: 7374, isoform-2: NP_550433.1, 313 AAs; UniProt ID: P13051), and the 1AKZ_A PDB model (DOI: 10.2210/pdb1akz/pdb) [31] was selected as the template. The obtained template for the N-terminal UNG2 3D peptide structure model (AA 1-313) was then forwarded for UNG2 3D peptide structure modeling with the I-TASSER [36,37] and Modeller v9.20 [38,39] software, and Chimera software was used as the graphical interface, as described previously [34,35,39]. For comparison, we also applied the UNG2 protein sequence to the state-of-the-art machine learning method AlphaFold (https://alphafold.ebi.ac.uk/, accessed on 3 November 2021) [40,41].

Intrinsically Disordered Region in UNG2 (AA 1-92)

The N-terminal regulatory region of UNG2 has been described as an intrinsically disordered region by several groups [16,17,42]. In continuation, JPred4 [43] was used for additional secondary structure prediction. A search algorithm and sequence weighting method was applied against the given UNG2 protein sequence with default parameters (hidden Markov model (HMM) and BLOSUM filter).
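For reproducibility, the input sequence can be pulled programmatically; the following is a small sketch of ours assuming the current UniProt REST endpoint, where the isoform identifier is an assumption and should be adjusted to whichever record carries the 313-AA UNG2 sequence used here:

```python
import urllib.request

def fetch_uniprot_fasta(accession: str) -> str:
    """Return the bare AA sequence for a UniProt accession (FASTA endpoint)."""
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"
    with urllib.request.urlopen(url) as fh:
        lines = fh.read().decode().splitlines()
    return "".join(l.strip() for l in lines if not l.startswith(">"))

ung2 = fetch_uniprot_fasta("P13051-2")  # assumed isoform id for the 313-AA UNG2
peptide_1_45 = ung2[:45]                # N-terminal peptide later used for docking
```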
The UNG2 AA composition was calculated to identify and justify the AA residues promoting structured or unstructured regions in the UNG2 protein. Modeled structures were visualized using Chimera software as the graphical interface to check the core, rim, and buried regions, as described previously [30,34,35].

Molecular Docking of FAM72A Protein and UNG2 Peptide (AA 1-45) by HPEPDOCK

Docking interactions for the FAM72A protein with the modeled UNG2 peptide (AA 1-45) as a heterodimer were performed with HPEPDOCK (default parameters were applied) [44]. The FAM72A monomer was exported as a PDB file, whereas the UNG2 peptide was submitted as a FASTA-formatted AA sequence (AA 1-45), and MODPEP and MDOCK were applied for the fine adjustment of the FAM72A-UNG2 interactions [44]. Modeled structures were visualized using Chimera software as the graphical interface, as described previously [30,34,35].

Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) Calculation

The molecular mechanics/generalized Born surface area (MM/GBSA) free energy decomposition per AA residue in protein-protein interactions was predicted for the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer [45]. HawkDock calculated the free binding energy for the key AA residues in the protein-protein interfaces based on the Amber16 force field [46].

Carbon Distribution (CARd) Analysis

The protein carbon distribution (CARd) analysis was performed to validate the specific FAM72A-UNG2 interaction sites using our recently described algorithm [35,47]. A site-specific mutagenesis approach was employed to check the hot spot residues in the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer interaction. AA modifications in FAM72A (F104A, F104R, F104N, F104G, and F104S) and their effect on the conformational stability of the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer complex were investigated with the BIOVIA Discovery Studio software (Dassault Systems; Waltham, MA, USA), as described previously [48,49].

Molecular Dynamics Simulation by GROMACS

The starting coordinates of FAM72A and UNG2 were taken from the modeled structures, as described in Section 2.2. The GROMOS96 43a1 force field was used in this study. Hydrogens were added to the protein molecules using the pdb2gmx application in GROMACS (2019.2). The protein molecules were then placed in a cubic simulation box (default parameters). A simple point charge water model was used to solvate the simulation box. To neutralize the system, Na+ and Cl- ions were added to the simulation box. The structure was relaxed through energy minimization (EM). Subsequently, the energy-minimized structures were used for system equilibration, performed under constant NVT and NPT (number, volume, temperature, and pressure) ensembles. The production run was carried out using the NPT ensemble for 50 ns with a time step of 2 fs at a constant temperature of 300 K and 1 bar pressure. Simulation trajectories were visualized using Visual Molecular Dynamics (VMD) 1.9.4a42. Analyses of features including the root mean square deviation (RMSD), root mean square fluctuation (RMSF), and radius of gyration (Rg) were performed using GROMACS (2019) tools [50].
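The quantities behind these GROMACS analyses can be written down compactly; the following minimal NumPy sketch (ours, operating on plain coordinate arrays rather than GROMACS trajectory files) shows the underlying computation, with a Kabsch superposition before the RMSD:

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Optimally rotate P onto Q (both (N, 3), centroids already removed)."""
    C = P.T @ Q
    V, S, Wt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(V @ Wt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return P @ (V @ D @ Wt)

def rmsd(P, Q):
    """Backbone RMSD between one frame P and the reference Q after superposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    P = kabsch_superpose(P, Q)
    return np.sqrt(((P - Q) ** 2).sum(axis=1).mean())

def rmsf(traj):
    """Per-atom RMSF over a trajectory of shape (n_frames, n_atoms, 3),
    assumed already superposed on a common reference frame."""
    mean = traj.mean(axis=0)
    return np.sqrt(((traj - mean) ** 2).sum(axis=2).mean(axis=0))
```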
The RMSD is the most commonly used metric; it is the root mean square distance between corresponding residues [51,52]. Since the RMSD weights the distances between all residue pairs equally, a small number of local structural deviations can result in a high RMSD even when the global topologies of the compared structures are similar. Moreover, the average RMSD of randomly related proteins depends on the length of the compared structures, rendering the absolute magnitude of the RMSD meaningless on its own [53]. The RMSD computes the average distance between the backbone atoms of the starting (reference) structure and the simulated structures (frame by frame) when superimposed. The RMSF computes the fluctuations (standard deviation) of the atomic positions of each AA residue over the trajectory. The RMSD and RMSF were calculated over 50 ns using GROMACS (2019) [50] for the FAM72A-UNG2 heterodimer built from the UNG2 (AA 1-45) peptide and the FAM72A protein (wildtype [wt], W125A, W125R, F104A, F104R, F104G, F104N, and F104S) [50]. The trajectory files resulting from the molecular dynamics simulations were processed into RMSD, RMSF, and Rg values and plotted with Grace ("GRaphing, Advanced Computation and Exploration of data"; a WYSIWYG 2D graph plotting tool for Unix operating systems).

MTiOpenScreen [54], along with Autodock Vina [55] and the iScreen [56] databases, was applied for chemical library virtual screening and de novo drug design. Additionally, pharmacophore analysis was performed using the PharmMapper server [57] to detect the basic pharmacophore groups of selected chemicals for docking analysis. The further application of the COACH, TM-SITE, S-SITE, COFACTOR, and ConCavity approaches [30,34,35] provided potential ligand-binding sites of the 3D FAM72A-UNG protein heterodimer or the FAM72A protein monomer structure model (with refinement by ModRefiner [58]), with potential molecules based on BioLiP [59]. Further molecular docking studies were undertaken in order to gain deeper insights into the possible interference with FAM72A-UNG2 binding by the molecules newly identified through protein-ligand binding site prediction and to understand their mechanisms of interaction. The molecules obtained by the virtual screening were docked onto the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer and/or the FAM72A monomer using Schrödinger to depict the binding mode and calculate the binding energy [60,61]. The FAM72A and UNG2 3D protein structures were prepared using the protein preparation wizard panel of the Schrödinger software package (Schrödinger, LLC, New York, NY, USA). The 3D protein structures of FAM72A and UNG2 were transferred to the workspace and pre-processed, and missing loops were filled [62]. Water molecules were removed from the ligand-binding domain. H-bonds were optimized using the hydrogen bond optimizer, and the FAM72A and UNG2 protein structures were subjected to energy minimization in order to obtain the lowest-energy conformational structure [63]. Default parameters were used for the molecular docking process, applying the Glide 4.0 XP extra precision module of the Schrödinger software package (Schrödinger, LLC). The binding affinity with FAM72A was calculated for each chemical compound and ranked by the scoring function. Modeled structures were visualized using the same Chimera software as the graphical interface, as described previously [30,34,35,62].

Homology Modeling and Protein Structure Validation of UNG2 by Modeller, I-TASSER and AlphaFold

The UNG2 protein can be functionally divided into two domains: an N-terminal regulatory region (AA 1-92) and a C-terminal catalytic region (AA 93-313). The disordered N-terminal region has been identified as interacting with several proteins, including proliferating cell nuclear antigen (PCNA) and replication protein A (RPA) (both found at DNA replication forks), as well as with FAM72A [2,16-18,26,27].
To further enlighten the FAM72A-UNG2 interaction, we investigated the N-terminus of UNG2 (AA 1-92), applying the Modeller, I-TASSER, and AlphaFold protein structure prediction analysis programs. Our predicted comparative structure analysis revealed a long protruding thread-like disordered N-terminal loop (AA 1-92) required to gather and catch more targets for molecular crowding (Figure 1).

Figure 1. Predicted comparative 3D protein structure analysis of the full-length UNG2 protein, including N-terminal UNG2 (AA 1-92). (Left): Modeller-modeled full-length UNG2 3D protein structure output (using the human UNG2 sequence from UniProtKB (P13051) and PDB template 1AKZ), implying a conformation suitable for N-terminal UNG2 protein binding to chromatin; (Center): I-TASSER-modeled full-length UNG2 3D protein structure output, implying a conformation suitable for the binding of the N-terminal UNG2 protein to its catalytic site; (Right): AlphaFold output of full-length UNG2 (using the human UNG2 sequence from UniProtKB (P13051)).
All three protein structure prediction approaches revealed the N-terminal loop as a protruding thread-like disordered region suitable for interactions with multiple protein binding partners and for molecular crowding. Secondary structure color code: α-helix in blue, β-sheet in green, and loop in red.

Intrinsically Disordered Region in N-Terminal UNG2 (AA 1-92)

Intrinsically disordered proteins (IDPs) execute various functions in all kinds of cellular processes [64][65][66]. The N-terminal regulatory domain of UNG2 possesses such an unstructured regional IDP motif (AA 1-92). In general, the absence of a hydrophobic core is probably the reason for an unstructured region, whereby hydrophilic AAs may dominate in number. AAs have been classified as order-promoting (Asn, Cys, Ile, Leu, Phe, Trp, Tyr, and Val) and disorder-promoting (Ala, Arg, Gln, Glu, Gly, Lys, Pro, and Ser) [65,67]. The calculated AA composition revealed an abundance of the disorder-promoting Ala, Pro, Ser, and Gly in the N-terminal regulatory region, which is very flexible in moving and orienting the N-terminus of UNG2. In contrast, the catalytic region is full of hydrophobic order-promoting AA residues, including Leu, Val, Ile, and Trp. Evidently, Cys and Trp are not available for a foldable secondary structure formation, as linker residues form in the N-terminus of UNG2. The plot shows the biases in AA composition at the N-terminal residues and explains the importance of sulfur-containing AAs and tryptophan at hydrophobic cores for protein rigidity. The absence of AAs such as Cys and Trp has been recognized by evolutionary studies of protein plasticity and disordered protein regions (Figure 2) [66,[68][69][70].

Figure 2. (Left): [modeled UNG2 structure, as in Figure 1]. (Right): AA composition was calculated for AA residue enrichment in promoting ordered or disordered regions in UNG2. The AA composition analysis of UNG2 clearly shows the abundance of the disorder-promoting AAs Ala, Pro, Ser, and Gly within the N-terminal regulatory region, while the catalytic region is enriched in hydrophobic order-promoting AA residues, including Leu, Val, Ile, and Trp. Evidently, Cys and Trp are not available for a foldable secondary structure formation, as linker residues form in the N-terminus of UNG2. The plot shows biases in AA composition at the N-terminal residues and explains the importance of sulfur-containing AAs and tryptophan at hydrophobic cores for protein rigidity. The absence of AAs such as Cys and Trp has been recognized by evolutionary studies of protein plasticity and disordered protein regions in previous studies.
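The composition analysis described above can be reproduced with a few lines of code. The sketch below is illustrative only: the region boundaries and residue classes are taken from the text, but the UNG2 sequence itself would have to be fetched from UniProtKB P13051, so a placeholder string is used here.

```python
# Fraction of order- vs disorder-promoting residues in two regions of a protein,
# using the residue classes given in the text [65,67].
ORDER_PROMOTING = set("NCILFWYV")     # Asn, Cys, Ile, Leu, Phe, Trp, Tyr, Val
DISORDER_PROMOTING = set("ARQEGKPS")  # Ala, Arg, Gln, Glu, Gly, Lys, Pro, Ser

def composition(seq: str) -> dict:
    """Return the fraction of order-/disorder-promoting residues in seq."""
    n = len(seq)
    order = sum(aa in ORDER_PROMOTING for aa in seq)
    disorder = sum(aa in DISORDER_PROMOTING for aa in seq)
    return {"order": order / n, "disorder": disorder / n}

# Placeholder: the real UNG2 sequence (313 AA) should come from UniProtKB P13051.
ung2 = "M" + "A" * 312  # hypothetical stand-in, NOT the real sequence

n_terminal = ung2[0:92]    # regulatory region, AA 1-92
catalytic = ung2[92:313]   # catalytic region, AA 93-313

print("N-terminal:", composition(n_terminal))
print("Catalytic: ", composition(catalytic))
```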
FAM72A-UNG2 Interaction and Molecular Docking Study of FAM72A Protein and UNG2 (AA 1-45) Peptide by HPEPDOCK

Our molecular docking study evaluated the molecular forces responsible for the specific biomolecular FAM72A-UNG2 interactions. The FAM72A monomer was exported as a PDB file (the FAM72A 3D protein structure was taken from previously designed PDB data [30]), whereas the UNG2 (AA 1-45) peptide was submitted as a FASTA-formatted AA sequence (AA 1-45). The UNG2 (AA 1-45) peptide was used because these AAs appeared to be the pivotal interacting AAs [2,[16][17][18]26]. The docked structure was analyzed for the specific AAs contributing to the FAM72A protein and UNG2 peptide interactions. Mostly, electrostatic forces dominate over the other forces. Hydrogen bonding is less preferred for the FAM72A-UNG2 association because the side chain (UNG2; chain B) moves along the diagonal portion of the FAM72A protein (chain A). Due to the lack of a proper quaternary structure in the N-terminal UNG2 region, the UNG2 peptide prefers surface AA residues (such as AAs 5, 7, 8, 10, 11, 12, 13, and 15) in order to make connections and to increase the prevalence rate of interactions (Figure 3).

Figure 3. [FAM72A-UNG2 docked heterodimer.] The image on the right-hand side is a 180° rotation about the y-axis of the FAM72A-UNG2 interaction, shown to illustrate the interaction clearly. Key interacting AA residues are labeled in red.
The prevalence rate of interface residues in the FAM72A-UNG2 interaction is depicted by molecular rendering (cartoon model). Hydrophobicity is the major phenomenon of the FAM72A protein and UNG2 peptide interaction.

Free Binding Energy Prediction on FAM72A Protein and UNG2 (AA 1-45) Peptide Heterodimer

An MM/GBSA prediction was imposed on the free binding energy calculation in the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer. The MM/GBSA analysis offered a breakthrough regarding the catalytic AAs in the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer. AA residue-residue contacts in the FAM72A-UNG2 heterodimer were calculated in terms of free binding energy, considering van der Waals forces, electrostatic energy, solvent-accessible surface areas, and polar and non-polar energies. The analysis identified a pivotal AA with the highest binding energy contacting the Ser12 and Pro13 AAs of UNG2. The UNG2 (AA 1-45) peptide was verified with only a few AAs accountable for the binding contribution (AAs 2, 5, 8, 11, and 15, respectively).

AA-Specific Mutations in the FWMF Motif (AA 101-104) of FAM72A Affecting the FAM72A Protein and UNG2 (AA 1-45) Peptide Heterodimer Binding

Site-directed specific mutations in the FWMF motif (AA 101-104) of FAM72A were used (F104A, F104R, F104N, F104G, and F104S) to evaluate the rigidity and flexibility of the interface in the FAM72A protein and UNG2 (AA 1-45) peptide heterodimer binding (Figures 5 and 6). We modeled the FAM72A and UNG2 (AA 1-45) peptide and the interaction of FAM72A with the UNG2 (AA 1-45) peptide. The FWMF motif (AA 101-104) appears to be key for the FAM72A protein structure and its binding to UNG2 (Figure 6). A mutation in the FWMF motif from wt F104 to F104R had the largest effect, turning the binding energy from negative (strong binding/hydrophobic core) to positive (strong binding/hydrogen bonding).

Molecular Dynamics Simulation by GROMACS Validates AA-Specific Mutations in the FWMF Motif (AA 101-104) of FAM72A Affecting FAM72A-UNG2 Heterodimer Binding

Since phenylalanine F104 appeared to be the key AA within the FWMF motif (AA 101-104) at the interface of the FAM72A-UNG2 interaction, we further investigated the effect of FAM72A mutations at the wt AA F104 phenylalanine (F104 → F104A, F104R, F104N, F104G, and F104S) within the FWMF motif (AA 101-104) on the dynamic nature of FAM72A-UNG2 binding. Dynamic conformation changes in FAM72A-UNG2 (AA 1-45) binding were simulated by GROMACS and plotted by Grace (Figures 7-9).
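As an illustration of how the RMSD, RMSF, and Rg reported below can be computed from such trajectories, here is a minimal sketch using the MDAnalysis library rather than the GROMACS command-line tools used in the paper; the file names (topol.tpr, traj.xtc) are hypothetical placeholders.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file names; in practice these come from the GROMACS production run.
u = mda.Universe("topol.tpr", "traj.xtc")
ref = mda.Universe("topol.tpr")  # starting structure as the reference

# Backbone RMSD versus the reference, frame by frame (with superposition).
rmsd = rms.RMSD(u, ref, select="backbone").run()
print(rmsd.results.rmsd[:3])  # columns: frame, time (ps), RMSD (Angstrom)

# Per-residue RMSF of the C-alpha atoms; for a rigorous RMSF the trajectory
# should first be fitted to an average structure (MDAnalysis.analysis.align).
calphas = u.select_atoms("protein and name CA")
rmsf = rms.RMSF(calphas).run()
print(list(zip(calphas.resids, rmsf.results.rmsf))[:3])

# Radius of gyration (Rg) per frame.
protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for ts in u.trajectory]
```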
We assessed the effect of the F104 mutations in FAM72A on the dynamic conformational changes, stability, and rigidity of the core and buried regions. Trajectories recorded up to 50 ns were plotted as RMSD, RMSF, and Rg, respectively. Figures 7-9 show the effect of these mutations on the protein backbone changes in mutated FAM72A and signify the pivotal role of the FWMF motif (AA 101-104) for the FAM72A-UNG2 interaction. These data confirm the FWMF motif as a suitable target to interfere with FAM72A-UNG2 signaling pathways.

Lead Discovery and Chemical Docking: Interference with FAM72A-UNG2 Interaction and Activity

We performed a virtual high-throughput screening to detect a potential lead interfering with the FAM72A-UNG2 interaction [71][72][73]. Binding scores, along with drug-likeness and pharmacophore properties, were considered [35,[74][75][76][77][78]. MTiOpenScreen uses Autodock Vina to carry out the docking of chemical libraries, scoring, and ensemble analysis [54,55]. The virtual screening suggested 100 compounds, and the predicted binding energies (kcal/mol) were considered to filter the hits and select the "best" hit for optimization in order to identify a promising lead compound. Based on the predicted binding energies, we identified withaferin B (PubChem CID: 11113907) as the "best" hit (binding energy −0.5 kcal/mol), a molecule that could potentially interfere with the FAM72A-UNG2 interaction (Figure 11). In the lead generation, the Glide XP docking analysis showed a strong binding affinity of −1.868 kcal/mol at the FAM72A-UNG2 interference site. The active AA residues contributing to the interaction were visualized by LIGPLOT (Figure 11).
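For readers who want to reproduce a single docking run of this kind, the sketch below uses the AutoDock Vina Python bindings; the receptor/ligand file names, grid center, and box size are hypothetical placeholders and would have to be derived from the prepared FAM72A-UNG2 structures.

```python
from vina import Vina  # AutoDock Vina 1.2+ Python bindings

v = Vina(sf_name="vina")  # standard Vina scoring function

# Hypothetical PDBQT files prepared from the modeled structures.
v.set_receptor("fam72a_ung2.pdbqt")
v.set_ligand_from_file("withaferin_b.pdbqt")

# Hypothetical search box around the putative FWMF-motif interface.
v.compute_vina_maps(center=[10.0, 12.0, 8.0], box_size=[24.0, 24.0, 24.0])

v.dock(exhaustiveness=8, n_poses=10)
print(v.energies(n_poses=3))          # predicted binding energies (kcal/mol)
v.write_poses("docked_poses.pdbqt", n_poses=5, overwrite=True)
```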
Interestingly, withaferin B has structural similarities with withaferin A. Both compounds are withanolide analogues (derived from Withania somnifera (Indian ginseng)) and contain an oxapentacyclo moiety. In their central moieties, however, withaferin B contains octadecan-5-yl, whereas withaferin A contains octadec-4-en-3-one. Of note, withaferin A has been reported to be a potential anti-cancer molecule that can inhibit cell proliferation, cell migration, and cell invasion [79][80][81][82][83][84][85][86]. Similarly, withaferin B, bound to the FAM72A-UNG2 heterodimer, could possibly block FAM72A-UNG2 signaling pathways in cancer cells [26,29]; thus far, however, the biological and therapeutic properties of withaferin B remain unknown.

Conclusions

Accumulating evidence indicates the involvement of FAM72A in tumorigenesis [24][25][26]29,87,88]. Elevated FAM72A causes reduced UNG2 levels, eventually leading to new mutations [24,25,[27][28][29]. Our data pave the way for new experimental approaches to validate the prevention of cancer by interfering with the FAM72A-UNG2 signaling pathways using withaferin B. Withaferin B is a potential candidate for future investigations into interference with genome stability, centromere formation, and genome editing, and into potential therapeutic strategies for the treatment of cancer. Withaferin B binds to the interface of the FAM72A-UNG2 heterodimer at the FWMF motif and interacts strongly with both FAM72A and UNG2. Our data show that withaferin B could probably bind to the FAM72A-UNG2 heterodimer through electrostatic interactions and hydrophobic contacts via the FAM72A AAs Y60, T56, C59, and M103 and hydrogen bonding with FAM72A D71. Moreover, withaferin B could probably bind to the FAM72A-UNG2 heterodimer through hydrophobic contacts via the UNG2 AA F11, disrupting the stability of the FAM72A-UNG2 chain attachment and thus inhibiting the formation of active FAM72A-UNG2 protein complexes. As a result, FAM72A-UNG2 cell signaling could be turned off.
Data Availability Statement: The data presented in this study are available on request from the corresponding authors. Conflicts of Interest: The authors have no competing financial interest.
2021-11-25T16:15:58.169Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "d0b5d7c08b187438ea43752bb41dc13a60dde14f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/13/22/5870/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3058928b695b3bd92fb31774a0b0f5e80f2d16a0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
9670358
pes2o/s2orc
v3-fos-license
Syntactic Simplification for Improving Content Selection in Multi-Document Summarization

In this paper, we explore the use of automatic syntactic simplification for improving content selection in multi-document summarization. In particular, we show how simplifying parentheticals by removing relative clauses and appositives results in improved sentence clustering, by forcing clustering based on central rather than background information. We argue that the inclusion of parenthetical information in a summary is a reference-generation task rather than a content-selection one, and implement a baseline reference rewriting module. We perform our evaluations on the test sets from the 2003 and 2004 Document Understanding Conference and report that simplifying parentheticals results in significant improvement on the automated evaluation metric Rouge.

Introduction

Syntactic simplification is an NLP task, the goal of which is to rewrite sentences to reduce their grammatical complexity while preserving their meaning and information content. Text simplification is a useful task for varied reasons. Chandrasekar et al. (1996) viewed text simplification as a preprocessing tool to improve the performance of their parser. The PSET project (Carroll et al., 1999), on the other hand, focused its research on simplifying newspaper text for aphasics, who have trouble with long sentences and complicated grammatical constructs. We have previously (Siddharthan, 2002; Siddharthan, 2003) developed a shallow and robust syntactic simplification system for news reports that simplifies relative clauses, apposition, and conjunction. In this paper, we explore the use of syntactic simplification in multi-document summarization.

Sentence Shortening for Summarization

It is interesting to survey the literature on sentence shortening, a task related to syntactic simplification. Grefenstette (1998) proposed the use of sentence shortening to generate telegraphic texts that would help a blind reader (with text-to-speech software) skim a page in a manner similar to sighted readers. He provided eight levels of telegraphic reduction. The first (the most drastic) generated a stream of all the proper nouns in the text. The second generated all nouns in subject or object position. The third, in addition, included the head verbs. The least drastic reduction generated all subjects, head verbs, objects, subclauses, and prepositions and dependent noun heads. Reproducing from an example in his paper, the sentence:

Former Democratic National Committee finance director Richard Sullivan faced more pointed questioning from Republicans during his second day on the witness stand in the Senate's fund-raising investigation.

got shortened (with different levels of reduction) to:

- Richard Sullivan Republicans Senate.
- Richard Sullivan faced pointed questioning.
- Richard Sullivan faced pointed questioning from Republicans during day on stand in Senate fundraising investigation.

Grefenstette (1998) provided a rule-based approach to telegraphic reduction of the kind illustrated above. Since then, Jing (2000), Riezler et al. (2003), and Knight and Marcu (2000) have explored statistical models for sentence shortening that, in addition, aim at ensuring grammaticality of the shortened sentences. These sentence-shortening approaches have been evaluated by comparison with human-shortened sentences and have been shown to compare favorably.
However, the use of sentence shortening for the multi-document summarization task has been largely unexplored, even though intuitively it appears that sentence shortening can allow more important information to be included in a summary. Recently, Lin (2003) showed that statistical sentence-shortening approaches like Knight and Marcu (2000) do not improve content selection in summaries. Indeed, he reported that syntax-based sentence shortening resulted in significantly worse content selection by their extractive summarizer NeATS. Lin (2003) concluded that pure syntax-based compression does not improve overall summarizer performance, even though the compression algorithm performs well at the sentence level.

Simplifying Syntax for Summarization

A problem with using statistical sentence shortening for summarization is that syntactic form does not always correlate with the importance of the information contained within. As a result, syntactic sentence shortening might get rid of important information that should be included in the summary. In contrast, the syntactic simplification literature deals with syntactic constructs that can be interpreted from a rhetorical perspective. In particular, appositives and non-restrictive relative clauses are considered parentheticals in RST (Mann and Thompson, 1988). Their role is to provide background information on entities, and to relate the entity to the discourse. Along with restrictive relative clauses, their inclusion in a summary should ideally be determined by a reference generating module, not a content selector. It is thus more likely that the removal of appositives and relative clauses will impact content selection than the removal of adjectives and prepositional phrases, as attempted by sentence shortening. It is precisely this hypothesis that we explore in this paper.

Outline

We describe our sentence-clustering based summarizer in the next section, including our experiments on using simplification of parentheticals to improve clustering in §2.1. We evaluate our summarizer in §3 and then describe our reference regenerator in §4. We present a discussion of our approach in §5 and conclude in §6.

The Summarizer

We use a sentence-clustering approach to multi-document summarization (similar to multigen (Barzilay, 2003)), where sentences in the input documents are clustered according to their similarity. Larger clusters represent information that is repeated more often across input documents; hence the size of a cluster is indicative of the importance of that information. For our current implementation, a representative (simplified) sentence is selected from each cluster and these are incorporated into the summary in the order of decreasing cluster size. A problem with this approach is that the clustering is not always accurate. Clusters can contain spurious sentences, and a cluster's size might then exaggerate its importance. Improving the quality of the clustering can thus be expected to improve the content of the summary. We now describe our experiments on syntactic simplification and sentence clustering. Our hypothesis is that simplifying parenthetical units (relative clauses and appositives) will improve the performance of our clustering algorithm, by preventing it from clustering on the basis of background information.
Simplification and Clustering

We use SimFinder (Hatzivassiloglou et al., 1999) for sentence clustering and its similarity metric to evaluate cluster quality; SimFinder outputs similarity values (simvals) between 0 and 1 for pairs of sentences, based on word overlap, synonymy, and n-gram matches. We use the average of the simvals for each pair of sentences in a cluster to evaluate a quality-score for the cluster. Table 1 below shows the quality-scores averaged over all clusters when the original document set is and is not preprocessed using our syntactic simplification software (described in §2.2). We use 30 document sets from the 2003 Document Understanding Conference (see §3.1 for a description). For each of the experiments in Table 1, SimFinder produced around 1500 clusters, with an average cluster size between 3.6 and 3.8.

Table 1: Syntactic Simplification and Clustering

                       Orig    Simp-Paren   Simp-Conj
  Av. quality-score    0.687   0.722        0.686
  Std. deviation (σ)   0.130   0.112        0.126

Table 1 shows that removing parentheticals results in a 5% relative improvement in clustering. This improvement is statistically significant, as determined by the difference in proportions test (Snedecor and Cochran, 1989). Further, the standard deviation of the performance of the clustering decreases by around 2%. This suggests that removing parentheticals results in better and more robust clustering. As an example of how clustering improves, our simplification routine simplifies:

PAL, which has been unable to make payments on dlrs 2.1 billion in debt, was devastated by a pilots' strike in June and by the region's currency crisis, which reduced passenger numbers and inflated costs.

to:

PAL was devastated by a pilots' strike in June and by the region's currency crisis.

Three other sentences also simplify to the extent that they represent PAL being hit by the June strike. The resulting cluster (with quality score = 0.94) contains sentences such as: "In June, PAL was embroiled in a crippling three-week pilots' strike."

On the other hand, splitting conjoined clauses does not appear to aid clustering. This indicates that the improvement from removing parentheticals is not because shorter sentences might cluster better (as SimFinder controls for sentence length, this is anyway unlikely). For confirmation, we performed one more experiment: we deleted words at random, so that the average sentence length for the modified input documents was the same as for the inputs with parentheticals removed. This actually made the clustering worse (av. quality score of 0.637), confirming that the improvement from removing parentheticals was not due to reduced sentence length. These results demonstrate that the parenthetical nature of relative clauses and appositives makes their removal useful. Improved clustering, however, need not necessarily translate to improved content selection in summaries. We therefore also need to evaluate our summarizer. We do this in §3, but first we describe the summarizer in more detail.

Description of our Summarizer

Our summarizer has four stages: preprocessing of the original documents to remove parentheticals, clustering of the simplified sentences, selecting one representative sentence from each cluster, and deciding which of these selected sentences to incorporate in the summary. We use our syntactic simplification software (Siddharthan, 2002; Siddharthan, 2003) to remove parentheticals. It uses the LT TTT (Grover et al., 2000) for POS-tagging and simple noun-chunking.
It then performs apposition and relative clause identification and attachment using shallow techniques based on local context and animacy information obtained from WordNet (Miller et al., 1993). We then cluster the simplified sentences with SimFinder (Hatzivassiloglou et al., 1999). To further tighten the clusters and ensure that their size is representative of their importance, we post-process them as follows. SimFinder implements an incremental approach to clustering. At each incremental step, the similarity of a new sentence to an existing cluster is computed. If this is higher than a threshold, the sentence is added to the cluster. There is no backtracking; once a sentence is added to a cluster, it cannot be removed, even if it is dissimilar to all the sentences added to the cluster in the future. Hence, there are often one or two sentences that have low similarity with the final cluster. We remove these with a post-process that can be considered equivalent to a back-tracking step. We redefine the criteria for a sentence to be part of the final cluster such that it has to be similar (simval above the threshold) to all other sentences in the final cluster, and we prune the cluster to remove sentences that do not satisfy this criterion. Consider, for example, a cluster with a threshold of 0.65, where each pair of sentence ids (P[sent id]) has an associated simval. We mark all the pairs with similarity values below the threshold and then remove as few sentences as possible such that these pairs are excluded. In one such example, it is sufficient to remove a single sentence, and the result is a much tighter cluster with one sentence less than the original. This pruning operation leads to even higher similarity scores than those presented in Table 1.
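A minimal sketch of the quality score and the pruning step as described above follows; the similarity function is assumed to be given (e.g., by SimFinder's output), and the greedy removal is one simple reading of "remove as few sentences as possible", not necessarily the authors' exact procedure.

```python
from itertools import combinations

def quality_score(cluster, sim):
    """Average pairwise simval over all sentence pairs in the cluster."""
    pairs = list(combinations(cluster, 2))
    return sum(sim(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def prune(cluster, sim, threshold=0.65):
    """Drop sentences until every remaining pair has simval >= threshold.
    Greedy heuristic: repeatedly remove the sentence involved in the most
    below-threshold pairs."""
    cluster = list(cluster)
    while len(cluster) >= 2:
        bad = {s: 0 for s in cluster}
        for a, b in combinations(cluster, 2):
            if sim(a, b) < threshold:
                bad[a] += 1
                bad[b] += 1
        worst = max(cluster, key=lambda s: bad[s])
        if bad[worst] == 0:
            break  # all remaining pairs are above the threshold
        cluster.remove(worst)
    return cluster
```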
Having pruned the clusters, we select a representative sentence from each cluster based on tf*idf. We then incorporate these representative sentences into the summary in decreasing order of their cluster size. For clusters with the same size, we incorporate sentences in decreasing order of tf*idf. Unlike multigen (Barzilay, 2003), which is generative and constructs a sentence from each cluster using information fusion, we implement extractive summarization and select one (simplified) sentence from each cluster. We discuss the scope for generation in our summarizer in §4 and §6.

Evaluation

We present two evaluations in this section. Our system, as described in the previous section, was entered for the DUC'04 competition. We describe how it fared in §3.3. We also present an evaluation over a larger data set to show that syntactic simplification of parenthetical units significantly improves content selection (§3.4). But first, we describe our data (§3.1) and the evaluation metric Rouge (§3.2).

Data

The Document Understanding Conference (DUC) has been run annually since 2001 and is the biggest summarization evaluation effort, with participants from all over the world. In 2003, DUC put special emphasis on the development of automatic evaluation methods and also started providing participants with the multiple human-written models needed for reliable evaluation. Participating generic multi-document summarizers were tested on 30 event-based sets in 2003 and 50 sets in 2004, all 80 containing roughly 10 newswire articles each. There were four human-written summaries for each set, created for evaluation purposes. In DUC'03, the task was to generate 100-word summaries, while in DUC'04, the limit was changed to 665 bytes.

Evaluation Metric

We evaluated our summarizer on the DUC test sets using the Rouge automatic scoring metric (Lin and Hovy, 2003). The experiments in Lin and Hovy (2003) show that among n-gram approaches to scoring, Rouge-1 (based on unigrams) has the highest correlation with human scores. In 2004, an additional automatic metric based on the longest common subsequence was included (Rouge-L), which aims to overcome some deficiencies of Rouge-1, such as its susceptibility to ungrammatical keyword packing by dishonest summarizers. For our evaluations, we use the Rouge settings from DUC'04: stop words are included, words are Porter-stemmed, and all four human model summaries are used.

DUC'04 Evaluation

We entered our system as described above for the DUC'04 competition. There were 35 entries for the generic summary task, including ours. At 95% confidence levels, our system was significantly superior to 23 systems and indistinguishable from the other 11 (using Rouge-L). Using Rouge-1, there was one system that was significantly superior to ours, 10 that were indistinguishable, and 23 that were significantly inferior. We give a few Rouge scores from DUC'04 in Table 2 below for comparison purposes. The 95% confidence intervals for our summarizer are ±0.0123 (Rouge-1) and ±0.0130 (Rouge-L).

Table 3 below shows the Rouge-1 and Rouge-L scores for our summarizer when the text is and is not simplified to remove parentheticals. The data for this evaluation consists of the 80 document sets from DUC'03 and DUC'04. We did not use data from previous years, as these included only one human model summary and Rouge requires multiple models to be reliable. The improvement in performance when the text is preprocessed to remove parenthetical units is significant at 95% confidence limits. When compared to the 34 other participants of DUC'04, the simplification step raises our clustering-based summarizer from languishing in the bottom half to being in the top third and statistically indistinguishable from the top system at 95% confidence (using Rouge-L).

Reference Regeneration

As the evaluations above show, preprocessing text with syntactic simplification significantly improves content selection for our summarizer. This is encouraging; however, our summarizer, as described so far, generates summaries that contain no parentheticals (appositives or relative clauses), as these are removed from the original texts prior to summarization. We believe that the inclusion of parenthetical information about entities should be treated as a reference generation task, rather than a content selection one. Our analysis of human summaries suggests that people select parentheticals to improve coherence and to aid the hearer in identifying referents and relating them to the discourse. A complete treatment of parentheticals in reference regeneration in summaries is beyond the scope of this paper, the emphasis of which is content selection rather than coherence. We plan to address this issue elsewhere; in this paper, we restrict ourselves to describing a baseline approach to incorporating parentheticals in regenerated references to people in summaries.

Including Parentheticals

Our text-simplification system (Siddharthan, 2003) provides us with a list of all relative clauses, appositives, and pronouns that attach to/co-refer with every entity. We used a named entity tagger (Wacholder et al., 1997) to collect all such information for every person.
The processed references to the same people across documents were aligned using the named entity tagger's canonic name, resulting in tables similar to those shown in Figure 1.

Figure 1: Example information collected for entities in the input. The canonic form of the named entity is shown in bold and the input article id in italic. IR stands for "initial reference", CO for subsequent noun co-reference, PR for pronoun reference, AP for apposition, and RC for relative clause.

We automatically post-edited our summaries using a modified version of the module described in Nenkova and McKeown (2003). This module normalizes references to people in the summary by introducing them in detail when they are first mentioned and using a short reference for subsequent mentions; these operations were shown to improve the readability of the resulting summaries. Nenkova and McKeown (2003) avoided including parentheticals due to both the unavailability of fast and reliable identification and attachment of appositives and relative clauses, and theoretical issues relating to the selection of the most suitable parenthetical unit in the new summary context. In order to ensure a balanced inclusion of parenthetical information in our summaries, we modified their initial approach to allow for including relative clauses and appositives in initial references. We made use of two empirical observations made by Nenkova and McKeown (2003) based on human summaries: a first mention is very likely to be modified in some way (probability of 0.76), and subsequent mentions are very unlikely to be post-modified (probability of 0.01-0.04). We therefore only considered incorporating parentheticals in first mentions. We constructed a set consisting of appositives and relative clauses from initial references in the input documents and an empty-string option (for the example in Figure 1, the set would be {"leader of the outlawed Kurdistan Worker's Party", "who is wanted in Turkey on charges of heading a terrorist organization", "leader of Kurdish insurgents", "who has been sought for years by Turkey", ""}). We then selected one member of the set randomly for inclusion in the initial reference. A more sophisticated approach to the treatment of parentheticals in reference regeneration, based on lexical cohesion constraints, is currently underway.

Evaluation

We repeated the evaluations on the 80 document sets from DUC'03 and DUC'04, using our simplification+clustering based summarizer with the reference regeneration component included. The results are shown in the table below. At 95% confidence, the difference in performance is not significant. This is an interesting result because it suggests that rewriting references does not adversely affect content selection. This might be because the extra words added to initial references are partly compensated for by words removed from subsequent references. In any case, the reference rewriting can significantly improve readability, as shown in the examples in Figures 2 and 3. We are also optimistic that a more focused reference rewriting process based on lexical-cohesive constraints and information-theoretic measures can improve Rouge content-evaluation scores as well as summary readability.
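A minimal sketch of the baseline selection step described above, assuming the simplification system has already extracted the candidate appositives and relative clauses for each person (the data structures here are illustrative, not the authors' actual implementation):

```python
import random

def initial_reference(name, candidates, rng=random):
    """Build a first-mention reference by attaching one randomly chosen
    parenthetical (appositive or relative clause), or none at all via the
    empty-string option."""
    options = list(candidates) + [""]  # "" = omit the parenthetical
    chosen = rng.choice(options)
    return f"{name}, {chosen}," if chosen else name

# Candidate set from the Figure 1 example (initial references in the input).
candidates = [
    "leader of the outlawed Kurdistan Worker's Party",
    "who is wanted in Turkey on charges of heading a terrorist organization",
    "leader of Kurdish insurgents",
    "who has been sought for years by Turkey",
]
name = "Person X"  # canonic name from the named entity tagger (placeholder)
print(initial_reference(name, candidates))
```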
Table 5 compares the average sentence lengths of our summaries (after reference rewriting) with those of the original news reports, the human (model) summaries, and the machine summaries generated by the participating summarizers at DUC'03 and '04.

Surface Analysis of Summaries

These figures confirm various intuitions about human vs machine-generated summaries: machine summaries tend to be based on sentence extraction; many have an explicitly encoded preference for long sentences (assumed to be more informative); humans tend to select information at a sub-sentential level. As a result, human summaries contain on average shorter sentences than the original, while machine summaries contain on average longer sentences than the original. Interestingly, our summarizer, like human summarizers, generates shorter sentences than the original news text.

Before: Pinochet was placed under arrest in London Friday by British police acting on a warrant issued by a Spanish judge. Pinochet has immunity from prosecution in Chile as a senator-for-life under a new constitution that his government crafted. Pinochet was detained in the London clinic while recovering from back surgery.

After: Gen. Augusto Pinochet, the former Chilean dictator, was placed under arrest in London Friday by British police acting on a warrant issued by a Spanish judge. Pinochet has immunity from prosecution in Chile as a senator-for-life under a new constitution that his government crafted. Pinochet was detained in the London clinic while recovering from back surgery.

Figure 2: First three sentences from a machine generated summary before/after reference regeneration.

Equally interesting is the distribution of parentheticals. The original news reports contain on average one parenthetical unit (appositive or relative clause) every 3.9 sentences. The machine summaries contain on average one parenthetical every 3.3 sentences. On the other hand, human summaries contain only one parenthetical unit per 8.9 sentences on average. In other words, human summaries contain fewer parenthetical units per sentence than the original reports; this appears to be a deliberate attempt at including more events and less background information in a summary. Machine summaries tend to contain on average more parentheticals than the original reports. This is possibly an artifact of the preference for longer sentences, but the data suggests that 100-word machine summaries use up valuable space by presenting unnecessary background information. Our summaries contain one parenthetical unit every 10.0 sentences. This is closer to human summaries than to the average machine summary, again suggesting that our approach of treating the inclusion of parentheticals as a reference generation task is justified.

Before: Turkey has been trying to form a new government since a coalition government led by Yilmaz collapsed last month over allegations that he rigged the sale of a bank. Ecevit refused even to consult with the leader of the Virtue Party during his efforts to form a government. Ecevit must now try to build a government. Demirel consulted Turkey's party leaders immediately after Ecevit gave up.

After: Turkey has been trying to form a new government since a coalition government led by Prime Minister Mesut Yilmaz collapsed last month over allegations that he rigged the sale of a bank. Premier-designate Bulent Ecevit refused even to consult with the leader of the Virtue Party during his efforts to form a government. Ecevit must now try to build a government. President Suleyman Demirel consulted Turkey's party leaders immediately after Ecevit gave up.
Conclusions and Future Work

We have demonstrated that simplifying news reports by removing parenthetical information results in better sentence clustering and consequently better summarization. We have further demonstrated that using a reference rewriting module to introduce parentheticals as a post-process does not significantly affect the score on an automated content-evaluation metric; indeed, we believe that a more sophisticated rewriting module might improve performance on content selection. In addition, the summaries produced by our summarizer closely resemble human summaries in surface features such as average sentence length and the distribution of relative clauses and appositives. The results in this paper might be useful to generative approaches to summarization. It is likely that the improved clustering will make operations like information fusion (Barzilay, 2003; Dalianis and Hovy, 1996) within clusters more reliable. We plan to examine whether this is indeed the case. We feel that the performance of our summarizer is encouraging (it performs at 90% of human performance as measured by Rouge), as it is conceptually very simple: it selects informative sentences from the largest clusters and does not contain any theoretically inelegant optimizations, such as excluding overly long or short sentences. Our approach of extracting out parentheticals as a pre-process also provides a framework for reference rewriting, by allowing the summarizer to select background information independently of the main content. We believe that there is a lot of research left to be carried out in generating references in open domains and will address this issue in future work.
2014-07-01T00:00:00.000Z
2004-08-23T00:00:00.000
{ "year": 2004, "sha1": "0d8e3db6fcd99313773fcaa16074f2cc76ce1fef", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1220484&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "3108dea32ebff3f1142d875c54cb860741950b91", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
269834793
pes2o/s2orc
v3-fos-license
On an equation arising by reduction of the Drinfeld-Sokolov hierarchy

A seventh order ordinary differential equation (ODE) arising by reduction of the Drinfeld-Sokolov hierarchy is shown to be identical to a similarity reduction of an equation in the hierarchy of Sawada-Kotera. We also exhibit its link with a particular F-VI, a fourth order ODE isolated by Cosgrove which is likely to define a higher order Painlevé function.

Introduction

In a recent article [6], the authors consider the tau cover of the Drinfeld-Sokolov hierarchy and, in order to obtain explicit solutions, perform a similarity reduction [6, Eq. (5.1)] which defines a system of nonlinear ODEs in the independent variable x. By construction, this system possesses a Lax pair (L, M) [6, Eq. (5.5)] whose zero-curvature condition is an equation in which z is the spectral parameter. For all their choices but one of the underlying affine Kac-Moody algebra g, the authors succeeded in explicitly integrating the nonlinear ODE system in terms of various elliptic or Painlevé or higher Painlevé functions. The only system which could not be integrated results from the choice $g = A_2^{(2)}$; this is the seventh order nonautonomous system for u(x), ω(x) [6, Example 5.5 page 1487], which can be viewed as a birational transformation between u(x) and ω(x), each variable obeying a seventh order ODE.

The purpose of this work is to explicitly integrate this system, i.e. to map it either to Painlevé equations (second order), or to one of the five "higher Painlevé equations" (fourth and fifth order) isolated by Cosgrove [2,3], or to higher order (six and above) equations in the hierarchy of the previous ones.

The method, developed in the next sections, is classical and relies on three pieces of information: (i) the Lax pair, (ii) the singularity structure, (iii) exhaustive lists of ODEs possessing the Painlevé property.

In Section 2, by considering the invariants of the matrix Lax pair, we obtain a unique first integral, thus lowering the differential order only to six. This is an indication (not a proof) that the equations of Painlevé and Cosgrove should be insufficient to perform the integration.

In Section 3, we therefore investigate the singularity structure of the system (2). The three families of movable simple poles are then compared with sixth or seventh order members of various, already classified, hierarchies. This allows one to integrate (2) in terms of a higher member of the Sawada-Kotera hierarchy.

First integral

The system (2) admits a three-dimensional zero-curvature representation [courtesy of Wu Chao-Zhong], involving the expression

$$\left(20u^4 + 60u^2u'' + 84u\,(u')^2 + 24u\,u^{(4)} + 33\,(u'')^2 + 60u'u^{(3)} + 3u^{(6)} + 108xu - 162\omega\right),$$

in which the eight constant operators can be represented by third order matrices. The trace of $M^2$ is an affine function of $z^4$, and the coefficient of $z^0$ in $\operatorname{tr}M^2$ is a single first integral $K$ of the system (2). As to the traces of higher powers of M, all nonzero, they do not generate other first integrals.
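For context, the usual mechanism by which a matrix Lax representation produces trace invariants can be sketched as follows; this is a generic illustration, assuming that at fixed $z$ the zero-curvature condition reduces to an isospectral deformation $M_x=[N,M]$ for some matrix $N$ (an assumption about the structure of the system, not a computation taken from [6]):

$$\frac{d}{dx}\,\operatorname{tr}M^{k}=k\,\operatorname{tr}\!\left(M^{k-1}M_x\right)=k\,\operatorname{tr}\!\left(M^{k-1}[N,M]\right)=0,$$

by the cyclic invariance of the trace. Every coefficient of $\operatorname{tr}M^{k}$ as a polynomial in $z$ is then $x$-independent; here only the $z^{0}$ coefficient of $\operatorname{tr}M^{2}$ yields a nontrivial first integral $K$.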
2024-05-18T15:35:20.101Z
2024-05-14T00:00:00.000
{ "year": 2024, "sha1": "943546a967697a3e9694d41c07f23f7b95ff6b18", "oa_license": "CCBYNC", "oa_url": "https://ocnmp.episciences.org/13583/pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "679f8f602d8adfdac5cf5e9540094e70fe3e71d6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
246748130
pes2o/s2orc
v3-fos-license
Awareness and perception of malaria and dengue at school and college level in the district of Multan

The purpose of this study is to examine the awareness and perception of malaria and dengue fever in Multan, Punjab, Pakistan, while taking into account the important role of government policies and other variables. The goal of this study is to examine the awareness of students in Multan, Pakistan, of malaria and dengue. This study is based on a quantitative approach using secondary evidence from scientific journals and questionnaire surveys. It is also based on observational evidence gathered in Multan, Punjab, Pakistan, in a field study. The surveys with school children, teachers, and healthcare professionals were both formal and semi-structured. Studies have found that malaria and dengue mainly affect children's schooling through their absence, but can also induce brain damage and cognitive disability. In the questionnaires, students were seen to have differing understandings of the illnesses, but also to be able to serve as agents of health reform only through teachers. A sample size of 500 respondents was selected from different colleges of district Multan, Punjab, Pakistan, and the correlation technique was used for the data analysis. According to our results, it is concluded that students at college level are aware of malaria and dengue, but they are not capable of engaging and serving as agents for health reform. On the basis of the results, it is recommended that students be taught how to handle epidemic diseases.

Introduction

Both malaria and dengue are mosquito-borne illnesses that are considered to be growing exponentially, in regard to both prevalence and mortality rates, and they pose a global public health issue because of the ease with which they spread. Malaria is caused by Plasmodium spp. One of the most significant outbreaks of dengue occurred in India (99,913 cases reported and 220 deaths) in 2015, with Delhi worst affected (15,867 cases reported; 60 dead). Besides dengue, India is afflicted with malaria, a separate VBD. South-East Asia is heavily afflicted, and 77% of the disease prevalence is in India [16]. Without widely available vaccines, the risk of these diseases may nevertheless be minimized successfully with environmental protection policies paired with personal prevention measures [17]. Therefore, active community involvement through better awareness and health promotion activities is necessary to produce better outcomes in vector management [18,19]. As with many health issues in society, the population's knowledge, attitudes, and practices (KAPs) play a significant role in enforcing VBD control steps. The WHO has advocated the use of lay people as health educators in the war against common diseases. Schools offer children a vital chance to learn about emerging health problems and endemic disorders and how to avoid them. Teachers can play a vital role in transmitting key preventive education strategies to children, aiming at a significant health predictor: health behavior. While significant, the function of teachers as school health educators has not received much attention. There is little analysis of which aspects of intervention at the community level can be strengthened by supporting teachers' health education in schools [20]. A limited number of studies have shown the role of education providers in the battle against diseases such as AIDS and oral illness [21].
Consequently, the aspects of intervention and community health that can be turned into effective prevention initiatives at the educational level by teachers need to be identified. The level of awareness, attitudes, and activities of the population surrounding mosquito-borne diseases is unavoidable in establishing a sound and successful health education policy. It was with this context in mind that it was agreed to conduct this analysis in the city of Rajkot.

Dengue fever (DF) is an infectious disease, prevalent in the Asian subcontinent, that is spread by A. aegypti [22]. In recent years, the mortality and morbidity connected with it have arisen as a prominent public health issue. According to the World Health Organization (WHO), the incidence of dengue has increased 30-fold in the last 50 years. Globally, 50 to 100 million dengue infections have been reported to occur annually [23]. South-East Asia, comprising 52 percent of the global population at risk, is among the regions at greatest risk of DF/DHF. Patients with DHF and dengue shock syndrome (DSS) can have a case-fatality rate as high as 44 percent. Indeed, in many urban, peri-urban, and rural regions, the disease is becoming hyperendemic, with recurrent epidemics. Dengue is endemic in several parts of India, and epidemics in several parts of India are regularly recorded [24]. Because there is no antidote, vector control is the best way to combat dengue. Since community engagement is essential, the community's understanding of the disease, its mode of propagation, and its breeding sites is essential to the effectiveness of a community-based initiative. Knowledge, attitude, and practice studies function as an educational diagnostic of the population. This knowledge helps initiatives set communication goals in line with increased audience involvement and demand for resources, as well as establish customized methods suited to risky socioeconomic, political, and cultural circumstances. With about 2.5 billion people at risk of infection, the global prevalence of dengue infection is steadily growing. The WHO reports that up to 50 million dengue illnesses occur every year, resulting in 500,000 hospitalizations annually. About 70 percent of these cases are recorded in the Asia-Pacific region.

Methodology

The study used an ex post facto design, which covers variables that have already occurred and cannot be influenced by the investigator. The goal of this analysis was to explore the awareness and perception of malaria and dengue at school and college level. The study analyzed the relationship between awareness and perception of malaria and dengue, and explored the interaction between demographics (age, gender, qualifications, etc.) and awareness and perception of malaria and dengue. This study followed a sample methodology: a questionnaire survey was used to generalize the features, behaviors, or actions of a group from a sample to a population [25].

Study design

As indicated [26], the study process consisted of six main stages. The first stage describes the study topic. The rationale for this study stems from several factors, such as the absence of detailed study into the perception and awareness of malaria and dengue among school and college students. The second stage defined the study problem: the theoretical and applied literature on the awareness and perception of school and college children regarding malaria and dengue was thoroughly analyzed. The third stage involves study planning.
In order to facilitate the necessary fieldwork, the schools, academics, and college institutions in the Multan area were contacted. The fourth stage involves collecting study information and data; the data were collected from the schools and colleges. The fifth stage involves the analysis of the data gathered in stage four. Stage six includes conclusions that will hopefully provide policymakers with convincing evidence in their efforts to improve the awareness and perception of malaria and dengue in the schools and colleges of Pakistan. The study method is a selection of tools to be used for data collection and analysis. According to [26], researchers and learners need to choose suitable methods in order to show their ability to understand and acknowledge their subject matter. In the quantitative study methodology, questionnaire surveys are carried out to identify the opinions of students, teachers, and staff, and their controlling factors in the country. This approach also includes the collection of statistical data for the analysis of dengue and malaria awareness in schools and colleges. Statistical results are likewise used to evaluate students' awareness and perception of malaria and dengue. One of the methods used to collect information for this study was the questionnaire survey. To assess the research problems regarding awareness of malaria and dengue, a survey entitled "Awareness and perception of malaria and dengue in schools and colleges" was carried out. The survey provides relevant evidence of awareness of malaria and dengue in schools and colleges, and it attempts to identify the challenges and difficulties facing students in their effort to build good awareness of malaria and dengue. A questionnaire survey was chosen to gather all the information and data required because of its advantages and suitability for the study questions. Questionnaire format. Close-ended question formats were used for the design of the questions in this survey. In addition, a five-category Likert-scale format was employed (strongly disagree, disagree, neutral, agree, and strongly agree). Sample & sample size The study population is composed of students in various schools and colleges in Multan, where the total population of students was around 1000+. Taking into account the nature and aims of the study, the stratified random sampling method was used (a code sketch of such proportional sampling is given after this section). The survey included both boys and girls. The ethics committee of Ghazi University, Dera Ghazi Khan approved the study, subject to the condition that before completing the questionnaire, the scholars obtained consent from each participant. (Because the participants were final-year students and the most senior in their respective institutes, they were not minors and were able to provide consent themselves; there was no need to obtain permission from their parents.) Consent was recorded in the form of voice recordings, which will be submitted to the office of the ethics committee, with the coordination of the Director of Colleges Multan, Punjab, Pakistan, for study purposes only. Instrument The method used to gather data was a mix of questions approved for use in the review of path-goal theory. The questionnaire for this study, which evaluated various units of all the variables (dependent, independent, and moderating), was merged to construct a detailed questionnaire.
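The stratified random sampling described above can be illustrated with a short sketch. This is a minimal illustration rather than the study's actual sampling frame: the college names, strata sizes, and the proportional-allocation rule are assumptions for demonstration only.

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_key, total_n, seed=42):
    """Draw a proportionally allocated stratified random sample.

    population : list of dicts describing students
    strata_key : field used to form strata (e.g. institution or gender)
    total_n    : desired overall sample size
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in population:
        strata[person[strata_key]].append(person)

    sample = []
    for members in strata.values():
        # allocate sample size proportionally to stratum size
        n_stratum = round(total_n * len(members) / len(population))
        sample.extend(rng.sample(members, min(n_stratum, len(members))))
    return sample

# Hypothetical sampling frame of ~1000 students across several colleges
population = [{"id": i, "college": f"college_{i % 5}", "gender": "M" if i % 2 else "F"}
              for i in range(1000)]
sample = stratified_sample(population, strata_key="college", total_n=500)
print(len(sample))  # ~500 respondents, drawn per college in proportion
```

Proportional allocation keeps each stratum's share of the sample equal to its share of the population, the usual default when no stratum-specific variance information is available.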
The final questionnaire administered comprised two sections: the first section covered demographic and personal information; the second section covered variables such as awareness of malaria and dengue and perception of malaria and dengue. Sources of data The study centered on students studying in the schools and colleges of Multan. Collection of data Through a range of approaches, the analysis and planned utilization of the data were communicated to all administrators of the schools and colleges. The questionnaires provided written material about the nature of the analysis sample in addition to descriptive guidance. Participants were instructed not to mention specific names in the report. Participants were also told that results would be measured as composite scores to protect confidentiality within the data collection, so that no personally identifying details would be shared. The sharing of knowledge on the methods and applications of data collection included presentations at various departments to explain the leadership and analysis to all managers, as well as emails and telephone calls. As the study team operated in the schools and colleges themselves during the period of data collection, professional connections were often used for gathering knowledge and making effective use of participants, and the data collection process was implemented unobtrusively. Analysis of data In addition to the personal observations of the scholars, the obtained data were tabulated in an Excel sheet and analysed. The analysis was correlational, as it aimed to establish associations between distinct study variables. The interpretation of study participants was based on all types of evidence. The Pearson product-moment correlation is ideally suited for investigating the relationship among such variables [27]; a minimal sketch of this computation appears at the end of this passage. All hypotheses were tested at the 0.05 significance level. Data were evaluated using the Statistical Package for the Social Sciences (SPSS-26). Results Table 1 shows students' awareness of dengue fever. 48.6% (243) of students had had dengue fever before, 47.0% indicated that they had not had dengue fever before, and 4.4% were unsure. The table also shows that the students who had had dengue fever outnumber the others. Table 2 shows students' awareness of malaria fever. 50.6% of students had had malaria fever before, 48.8% indicated that they had not had malaria fever before, and 0.6% (3) were unsure. The table also shows that the students who had had malaria fever outnumber the others. The next table shows the students who know anyone who has had malaria fever in their respective area. 50.4% of students knew someone who had had malaria fever, 44.6% indicated that they did not, and 5.0% (Table 3) were unsure. The next table shows the students who know anyone who has had dengue fever in their respective area. 46.0% of students knew someone who had had dengue fever, 46.2% indicated that they did not, and 7.8% (Table 4) were unsure. Correlation analysis Table 5 shows the analysis outcomes; a strong correlation indicates strong agreement between the respondents. Where significance (α) > 0.05, there is no correlation between the respondents, and where α < 0.05 there is a substantial relation between the respondents.
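The Pearson product-moment analysis described above, which the authors report computing in SPSS-26 at the 0.05 significance level, can be mirrored with SciPy. The sketch below uses invented 5-point Likert responses, not the study's data, and the variable names are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
rng = np.random.default_rng(0)
awareness = rng.integers(1, 6, size=500)  # awareness of malaria/dengue
# perception deliberately generated to correlate with awareness
perception = np.clip(awareness + rng.integers(-1, 2, size=500), 1, 5)

r, p_value = pearsonr(awareness, perception)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant correlation between awareness and perception (alpha = 0.05)")
else:
    print("No significant correlation at alpha = 0.05")
```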
The correlation coefficients are shown in the table, along with the factors that cause malaria and dengue fever. The correlation is significant at the one percent level (two-tailed). Results and discussion Malaria is a chronic public health epidemic which has not been resolved, though it is regulated by several initiatives. Globally, malaria has been suppressed in 113 countries; it has been eliminated in 34 middle-income countries and in several low-income countries. Most low-income countries worldwide continue to track malaria in malaria-endemic areas. The effect of malaria has been considerable in low-income countries, particularly in Sub-Saharan Africa, where 47 out of 54 countries are endemic for malaria and most of them run malaria prevention programs. Despite intermittent preventive treatment in pregnancy (IPTp), Plasmodium falciparum infection during pregnancy has been a public health concern for more than 20 years [28], especially throughout sub-Saharan Africa. The region records 75,000 to 200,000 infant deaths, 900,000 low-birth-weight (LBW) deliveries, and 10,000 maternal deaths every single year [29]. Between 25 and 30 million pregnant women in Sub-Saharan Africa are at risk. Sub-Saharan Africa's high malaria incidence is well known to public health and global leaders, particularly among vulnerable populations such as pregnant women and babies. Several trials have been performed to investigate risk factors for the greater incidence of malaria morbidity and mortality in pregnant women and babies [30]. A comprehensive study was carried out among pregnant Tanzanian women on the interaction between (a) SES (defined by age, education level, residence, and wealth index), (b) exposure to malaria media, (c) knowledge of signs or symptoms of malaria, (d) perceived seriousness of malaria, and (e) knowledge of malaria preventive measures during pregnancy. Controlling for travel, family duty, and age, these variables were important predictors of membership in the high-dose population. The likelihood of receiving a high dose of SP/Fansidar differed with malaria presence in the media and with the identification of malaria signs and symptoms, and hence with knowledge of malaria prevention measures: the greater the exposure to and knowledge of malaria in the media or of its preventive measures, the greater the likelihood of belonging to the high-dose group. The perceived severity of malaria was also able to estimate the likelihood of being among the high-dose community, controlling for travel, family obligation, and age. The result obtained for the perceived malaria seriousness variable, however, was in contrast to expectations: pregnant women who viewed malaria as a dire health danger were less likely to belong to the initially expected high-dose class. I interpret the results of this chapter, address the shortcomings of the analysis, prescribe future studies, propose social improvements, and conclude [31]. The health belief model (HBM) was used as a basis to analyze the combination of women's care-seeking behavior with the independent variables: SES (in terms of age, schooling, residence, and wealth), malaria media consumption, knowledge of malaria signs and symptoms, perceived malaria severity, and malaria prevention awareness. These variables were significant predictors of the probability of women pursuing care with SP/Fansidar doses at antenatal care (ANC) to avoid malaria in pregnancy, controlling for transportation, family liability, and age.
Six constructs form the HBM system: (a) perceived susceptibility to the disease, (b) perceived disease severity, (c) the perceived benefits of health interventions, (d) the perceived obstacles to intervention, (e) cues to action, and (f) self-efficacy. The HBM is founded on the conviction that individuals are more likely to prevent disease if they consider that their particular interventions will prevent disease [32]. In my study, the HBM describes the clinical behavior of pregnant women in the prevention of malaria during pregnancy. My study results are compatible with most HBM postulations. Consequently, I found that the understanding of malaria's severity does not immediately affect pregnant women's uptake of ANC prescriptions to avoid malaria, after controls for travel, family duty, and age. Rather, this study found that the prescribed high dose of 2+ SP/Fansidar was less usual among those who felt malaria to be a severe health threat. Other aspects, such as SES and the availability of malaria-related communications, contributed greatly to raising awareness of malaria and to the prevention and treatment behavior of the women who embarked on treatment. Pregnant women were well trained to avoid malaria as well as in its diagnosis and management techniques. As previous studies have shown, my study also found that awareness of malaria infection among pregnant women was a major factor in their ability to pursue prevention and avoid malaria [33]. The interviews and secondary evidence showed that malaria has multiple impacts on the schooling of children. Malaria mostly affects schooling through absence, but it also impairs children's ability to learn. The teachers interviewed underlined absence due to malaria as a potential outcome. This can be explained by the easier identification of absence than of physiological or cognitive injury. If a student is missing, it is natural to inquire why; it is much harder for a teacher to know the real causes if it is not clear that a child who has trouble recalling things, or who simply has learning disorders, is suffering from malaria. Large class sizes make it much harder for a teacher to monitor all of their students. The teachers interviewed, who see the students nearly every day, claimed that malaria in Multan was not really common. They saw the illness as a concern, but only when someone was afflicted (which they said was very unusual). That may be partly because they found it difficult to monitor whether absent students were sick. In a single class, it was not rare for many students to voice their opinions on this disease. The teachers said that the students were acquainted with malaria, but this was contradicted by the children's interviews. Those incorrect statements could be because the teachers assumed the children understood, rather than because they actually failed to acknowledge the children's indifference [34]. The study shows a varied awareness of malaria among school children as well as some apparent shortcomings, among them awareness of symptoms, disease effects, and preventive measures. It is significant that the children's direct encounters with malaria did not contribute to a greater understanding of the disease. The primary source of awareness for children was school.
Since primary education in Tanzania is compulsory and most children attend, schools could serve as a valuable tool to raise understanding of malaria. The study also indicates that schools alone do not suffice as an information source and that other outlets, like mass media, may be significant in distributing knowledge alongside schools [35]. Just from examining children's awareness, it is obvious that the prevalence of malaria in Multan is high. At the same time, the incidence of malaria depends not only on misinformation but also on other causes, such as insecurity and inadequate healthcare coverage. Poverty and bad health are associated and hard to solve. The Ministry of Health relies heavily on global assistance to provide healthcare to the whole population, as the country's healthcare facilities are insufficient. Including children and schools in the battle against malaria would be not only beneficial but also affordable [36]. In order to make children agents of health change, awareness is important. If teachers are willing to take part in delivering this education, they will acquire the necessary expertise. Some assistance from the state, the hospitals, and NGOs will be needed. All the participants regarded malaria as a big issue in society, except for some of the students. The teachers should explain that the disorder is pervasive, to help the children understand the benefits of prevention, so that the children can act as agents of health improvement. Someone must take responsibility for the innovation in order to persuade the students. In the current crisis, an NGO is the most likely body to provide all the required assistance for this project, as there are inadequate resources available to the government. Child participation in the war against malaria is possible in Multan; however, this requires the dedication of the community. It is essential to note what has been achieved in the past in order to demonstrate the benefits of the innovation, to enhance children's knowledge of malaria, and to involve them as agents of health improvement. It is therefore necessary to study whether the innovation meets human needs or violates old customs and regulations, or is deemed non-functional (compatibility). It must also be studied whether the innovation is too complicated or easy enough to comprehend and use (complexity), in order to decide whether children can serve as health improvement agents. There are two dimensions here: whether the children are able to access knowledge about malaria and to communicate it to the community. The innovation must also be tested to see whether it progresses and whether its outcomes can be observed (trialability). Encouraging a school to train its students to become agents of health improvement, and observing the impact, could be a means of piloting the innovation [37]. The fact that no more respondents felt malaria to be a prevalent disorder in Multan may well be related to the normalization of the disease in the culture: the disease is seen as an ordinary part of people's daily life. This may be one of the reasons for the lack of awareness and the high incidence among children. When people do not see malaria as a threat, prevention steps are unlikely to be adopted, and the infection cannot be minimized or prevented without intervention [38]. The findings of the analysis were significant at the 99 percent confidence level.
This study supports the use of illustration as a means to increase awareness, improve attitudes, and increase the ability to reduce contact with mosquitoes and to minimize mosquito-breeding sites, in the form of dengue campaigns. This evidence is, however, inadequate to infer an effect on the frequency of dengue fever [39]. Identifying such procedures for dengue campaigns could be a successful means of reaching this audience. Recruitment provided a study sample divided equally between men and women. There were more fifth-graders, followed by fourth-graders, and then third-graders. This might reflect the reticence of the third-graders: although the procedures were explained, they may still have struggled with the reading required by the ratings. In reality, the material could also have been circulated as a whole; furthermore, some data were not compiled and remain uncertain, and the students at camp were not included in the compilation. The survey of free-time hobbies found that more respondents preferred homework than, more surprisingly, computer games or television. Other high-profile occasions would include participating in films, churches, diving, and athletics. The dengue cartoon, for instance, could be played in the film theatre, a screening could be assigned as homework, or a church initiative could be carried out. After all, in Costa Rica a video game named Pueblo Pitanga: Enemigos Silenciosos (Pitanga Village: Silent Enemies), emphasizing safe water-storage methods to avoid dengue, has been adopted by the World Health Organization and the Pan American Health Organization [40]. The respondents were asked whether they knew of dengue fever. Among the 54 participants, just five had heard of dengue fever. For those five, the knowledge sources were as follows: one respondent, school; one respondent, TV/radio; one respondent, a health worker; one respondent did not report where they had learned about DF; and one respondent had read about it on the Web. However, only two interviewees indicated that mosquitoes were known to be a concern. These details suggest a well-established vector presence but nearly no dengue fever awareness. The HBM notes that the respondents are susceptible to mosquitoes. In combination with this instructional illustration, they may weigh "perceived severity," "perceived benefits," and "perceived barriers" once they understand what measures to take. They may also feel that the whole environment offers a great opportunity for constructive preparation. The question on preferred contact channels also showed that this population tends to receive information through the internet. That method of contact had almost been omitted from the options list, as it was perceived to be an unusual option in this age group. Social networking should be used as an important method of targeting this population in future promotions. Such findings cannot, in fact, be generalized to other communities. Several considerations should be weighed in determining the accuracy of the knowledge portion of the study. First, a one-time sample was planned for [41], instead of a pretest/posttest design. In the "side effects" and "breeding" sections, every survey question counted as incorrect when left unmarked, while in the "transmission" segment an unmarked reply counted as correct. That was most likely responsible for an exceptionally large pre-test/post-test disparity in symptoms and a comparatively minor disparity in transmission.
The transmission category also showed substantial pre-to-post-test improvement, following these structural shortcomings. In future experiments, this may be dealt with by adjusting the items so that correct and incorrect answers correspond to equal numbers of unmarked responses. Alternatively, the simple inclusion of an "I don't know" option may be necessary to improve the precision of the ratings. In the Attitude and Practice areas, validity was improved because the responses were not simply right or wrong; in the Attitude portion respondents clearly showed what they thought, as they did in the Practice portion [42]. Secondly, a study published for adults in [40] was here conducted with adolescents. While certain changes were made to the content and language, the material remains nuanced and mature. Nevertheless, the children were able to achieve high scores, enhancing the general reliability of the instrument. This can be attributed to the effectiveness of the mechanism in general and/or the consistency of the graphics in particular [43]. The findings revealed that respondents' conduct regarding DF changed substantially through an understanding that the disorder was severe and that action should be taken to help avoid it. The respondents' attitude toward their efforts to help deter DF underwent a dramatic shift. The HBM notes that risk assessment is key to behavioral change. In addition, behavioral changes are driven by "cues to action" that must be present in the environment, in addition to improvement. All such criteria are addressed in tandem with explicitly specified and defined protections. Furthermore, leadership theory suggests that children would like to collaborate with adults, and multiple studies affirm that children are capable agents of reform [44]. While most people (> 50 percent) in the sample had sufficient knowledge of DF, the breeding areas of the dengue vector were not correctly identified. People associated dengue with 'dirty' sites, like drains and trash, in which larvae of other mosquitoes were observed. People's views about breeding sites vary markedly. There are actually three clinical presentations of dengue: DF, DHF, and dengue shock syndrome, but in all of them fever is perhaps the most frequent feature. The lack of awareness of DF observed in this study is comparable to that observed in related KAP studies in India: except for a few who identified fever as an apparent sign, most participants were not able to precisely identify DF's classic signs. These were also the findings most commonly documented in related studies in Jamaica, Pakistan, and Thailand, as in India [45]. The study participants who were unable to state the standard DF signs presumably had not encountered the illness directly or observed a case in a nearby associate or a member of the community. The low understanding of DF signs in the study population can also be explained by the other common causes of fever, including measles, typhoid, and so on. The knowledge gap was not statistically significant between the rural and urban regions. There was insufficient knowledge of vector reproduction and biting patterns. Most interviewees stated that the mosquitoes transmitting DF breed in drains and waste (67%), while fewer than half mentioned stagnant water.
Over half of the participants reported mosquito bites mostly in the morning (58 percent) and in the evening (44 percent) [46]. This is consistent with some recent studies, which found that most people were aware that dengue vectors can bite after sunrise or before sunset. In almost all homes, people stored water for bathing and drinking in large containers such as metal/plastic buckets, concrete tanks, and cisterns. Many small containers, for example metal/plastic containers, were most often used to collect and store water when the water supply was insufficient. These containers become suitable breeding grounds for the Aedes mosquito when kept without a proper lid for an extended time. In this study, almost all (81 percent) of the households stored water in tanks, and around 40% stored water in small metal and plastic containers. Waste was collected by the municipal squad daily or on alternate days in most places, and yet people indiscriminately threw waste out of their homes [47]. In short, most people in Pondicherry have inadequate knowledge of dengue fever: how it is spread, the habitat of vector breeding, and the biting behavior of the mosquito. Preventive activities against Aedes mosquito breeding in household containers and common areas were poor. Another major explanation for the rising trend of dengue in this densely populated urban area may be the lack of basic population awareness of dengue epidemiology and vector bionomics [48]. Conclusion In Multan, children are aware of malaria today, but they are not capable of engaging and serving as agents of health reform. If teachers are able to educate pupils, with the support of the government, the hospitals, or an NGO, children have the ability to help deter the spread of malaria in their communities. This is plausible because hospital workers saw the children as key players in fighting malaria and almost all respondents thought that malaria was a significant social concern. Supporting information S1 File. Questionnaire of the study.
2022-02-12T05:17:43.316Z
2022-02-10T00:00:00.000
{ "year": 2022, "sha1": "77028e4f90e0da97a744efe5444aaf50258937a3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "77028e4f90e0da97a744efe5444aaf50258937a3", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
36801619
pes2o/s2orc
v3-fos-license
Two-step melting in two dimensions: First-order liquid-hexatic transition Melting in two spatial dimensions, as realized in thin films or at interfaces, represents one of the most fascinating phase transitions in nature, but it remains poorly understood. Even for the fundamental hard-disk model, the melting mechanism has not been agreed on after fifty years of studies. A recent Monte Carlo algorithm allows us to thermalize systems large enough to access the thermodynamic regime. We show that melting in hard disks proceeds in two steps with a liquid phase, a hexatic phase, and a solid. The hexatic-solid transition is continuous while, surprisingly, the liquid-hexatic transition is of first order. This melting scenario solves one of the fundamental statistical-physics models, which is at the root of a large body of theoretical, computational and experimental research. Generic two-dimensional particle systems cannot crystallize at finite temperature [1][2][3] because of the importance of fluctuations, yet they may form solids [4]. This paradox has provided the motivation for elucidating the fundamental melting transition in two spatial dimensions. A crystal is characterized by particle positions which fluctuate about the sites of an infinite regular lattice. It has long-range positional order. Bond orientations are also the same throughout the lattice. A crystal thus possesses long-range orientational order. The positional correlations of a two-dimensional solid decay to zero as a power law at large distances. Because of the absence of a scale, one speaks of "quasi-long range" order. In a two-dimensional solid, the lattice distortions preserve long-range orientational order [5], while in a liquid, both the positional and the orientational correlations decay exponentially. Besides the solid and the liquid, a third phase, called "hexatic", has been discussed but never clearly identified in particle systems. The hexatic phase is characterized by exponential positional but quasi-long range orientational correlations. It has long been discussed whether the melting transition follows a one-step first-order scenario between the liquid and the solid (without the hexatic) as in three spatial dimensions [6], or whether it agrees with the celebrated Kosterlitz, Thouless [7], Halperin, Nelson [8] and Young [9] (KTHNY) two-step scenario with a hexatic phase separated by continuous transitions from the liquid and the solid [10][11][12][13][14][15][16][17][18]. Two-dimensional melting was discovered [4] in the simplest particle system, the hard-disk model. Hard disks (of radius σ) are structureless and all configurations of nonoverlapping disks have zero potential energy. Two isolated disks only feel the hard-core repulsion, but the other disks mediate an entropic "depletion" interaction (see, e.g., [19]). Phase transitions result from an "order from disorder" phenomenon: At high density, ordered configurations can allow for larger local fluctuations, thus higher entropy, than the disordered liquid. For hard disks, no difference exists between the liquid and the gas. At fixed density η, the phase diagram is independent of temperature T = 1/(k_B β), and the pressure is proportional to T, as discovered by D. Bernoulli in 1738. Even for this basic model, the nature of the melting transition has not been agreed on. The hard-disk model has been simulated with the local Monte Carlo algorithm since the original work by Metropolis et al. [20].
A faster collective-move "event-chain" Monte Carlo algorithm was developed only recently [21] (see [22]). We will use it to show that the melting transition neither follows the one-step first-order nor the two-step continuous KTHNY scenario. To quantify orientational order, we express the local orientation of disk k through the complex vector Ψ_k = ⟨exp(6iφ_kl)⟩, where the average runs over all the neighbors l of k (a code sketch of this order parameter is given below). The angle φ_kl describes the orientation of the bond kl with respect to a fixed axis. The sample orientation is defined as Ψ = (1/N) Σ_k Ψ_k. For a perfect triangular lattice, all the angles 6φ_kl are the same and |Ψ_k| = |Ψ| = 1 (see [22]). In Fig. 1, the local orientations of a configuration with N = 1024² disks at density η = Nπσ²/V = 0.708 in a square box of volume V are projected onto the sample orientation and represented using a color code (see [22]). Inside this configuration, a vertical stripe with density ∼ 0.716 preserving orientational order over long distances coexists with a stripe of disordered liquid of lower density ∼ 0.700. Each stripe corresponds to a different phase. The two interfaces of length ≃ √N close on themselves via the periodic boundary conditions. Stripe-shaped phases as in Fig. 1a are found in the center of a coexistence interval η ∈ [0.700, 0.716], whereas close to its endpoints, a "bubble" of the minority phase is present inside the majority phase for η ≳ 0.700 and η ≲ 0.716 (see Fig. 2). This phase coexistence is the hallmark of a first-order transition. The first-order transition shows up in the equilibrium equation of state P(V) (see Fig. 2): in a finite system, the free energy is not necessarily convex (as it would be in an infinite system), and the equilibrium pressure P(V) = −∂F/∂V can form a thermodynamically stable loop due to the interface free energy. The pressure loop in the coexistence window of a finite system is caused by the curved interface between a bubble of minority phase and the surrounding majority phase (see Fig. 2b,d). In a system with periodic boundary conditions, the pressure loop contains a horizontal piece corresponding to the "stripe" regime, where the interfaces are flat. This is visible near η ∼ 0.708 for the largest systems in Fig. 2. In a finite system, the Maxwell construction suppresses the interface effects. For the equation of state of Fig. 2a, this construction confirms the boundary densities η = 0.700 and η = 0.716 of Fig. 1 for the coexistence interval, with very small finite-size effects. The interface free energy per disk, the hatched area in Fig. 2, depends on the length ∝ √N of the interface in the "stripe" regime so that ∆f = ∆F/N ∝ 1/√N (see Fig. 2f). The first-order nature of the transition involving the liquid is thus established by i): the visual evidence of phase coexistence in Fig. 1, ii): the ∝ 1/√N scaling of the interface free energy per disk [23], and iii): the characteristic shape of the equation of state in a finite periodic system [24][25][26]. We stress that the system size is larger than the physical length scales so that the results hold in the thermodynamic limit (see [22]). In the coexistence interval, the individual phases are difficult to analyze at large length scales because of the fluctuating interface, and only the low-density coexisting phase is identified as a liquid with orientational correlations below a scale of ∼ 100σ (see Fig. 1a,d).
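A minimal sketch of the local orientational order parameter Ψ_k defined above follows. It assumes a periodic square box and takes the six nearest neighbours of each disk as the neighbour set; the original analysis may define neighbourhoods differently (e.g. via a Voronoi construction), so this is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def hexatic_order(positions, box, k=6):
    """Local orientational order Psi_k = <exp(6 i phi_kl)> over neighbours l,
    and the sample orientation Psi = (1/N) sum_k Psi_k, in a periodic box."""
    tree = cKDTree(positions, boxsize=box)
    # k+1 because the closest point to each disk is the disk itself
    _, idx = tree.query(positions, k=k + 1)
    psi = np.zeros(len(positions), dtype=complex)
    for i, neigh in enumerate(idx):
        d = positions[neigh[1:]] - positions[i]
        d -= box * np.round(d / box)           # minimum-image convention
        angles = np.arctan2(d[:, 1], d[:, 0])  # bond angle phi_kl
        psi[i] = np.exp(6j * angles).mean()
    return psi, psi.mean()

# Illustrative random configuration; a dense hard-disk sample would come
# from an actual simulation
box = np.array([100.0, 100.0])
pos = np.random.default_rng(1).uniform(0, 100, size=(4096, 2))
psi_k, psi = hexatic_order(pos, box)
print(abs(psi))  # ~0 for a disordered liquid, ~1 for a perfect triangular lattice
```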
Unlike constant-NV simulations, Gibbs ensemble simulations can have phase coexistence without interfaces, but these simulations are very slow at large N (see [22]). The single-phase system at density η = 0.718 is above the coexistence window for all N (see Fig. 2), and it allows us to characterize the high-density coexisting phase. Positional order can be studied in the two-dimensional pair correlation g(∆r), the high-resolution histogram of periodic pair distances ∆r_ij = r_i − r_j sampled from all N(N − 1)/2 pairs i, j of disks (a sketch of this computation is given after this passage). To average this two-dimensional histogram over configurations (as in Fig. 3), the latter are oriented such that the ∆x axis points in the direction of the sample orientation Ψ. At short distances, hexagonal order is evident at η = 0.718 (see Fig. 3a). The excellent contrast between peaks and valleys of g(∆r) at small |∆r| ≲ 2σ underlines the single-phase nature of the system at this density. The cut of the histogram along the positive ∆x axis leaves no doubt that the system has exponentially decaying positional order on a length scale of ∼ 100σ and cannot be a solid. The (one-dimensional) positional correlation function c_k(r), computed by Fourier transform of g(∆r), fully confirms these statements (see [22]). The orientational correlations at density η = 0.718 decay extremely slowly and do not allow us to distinguish between quasi-long range and long-range order (see [22]). However, short-ranged positional correlation is inconsistent with long-ranged orientational order. It follows that the orientational order must be quasi-long-range with a small exponent close to 0, and that the system at η = 0.718 and the high-density coexisting phase are both hexatic. (Caption of Fig. 2: the pressure is plotted vs. volume per particle (v = V/N, lower scale) and density η (upper scale). In the coexistence region, the strong system-size dependence stems from the interface free energy. The Maxwell constructions (horizontal lines) suppress the interface effects (with a convex free energy) for each N. "Stripe" (c, for N = 1024²) and "bubble" configurations (b,d) are shown in the coexistence region, together with two single-phase configurations (a,e). The interface free energy per disk β∆f (hatched area) scales as 1/√N (f).) The two-dimensional pair correlation g(∆r) − 1 of Fig. 3b allows us to follow the transition from the hexatic to the solid: the positional order increases continuously with density and crosses over into power-law behavior at density η ∼ 0.720, with an exponent ≃ −1/3 which corresponds to the stability limit of the solid phase in the KTHNY scenario. The hexatic-solid transition thus takes place at η ≃ 0.720. At this density, the positional correlation function at large distances r displays the finite-size effects characteristic of a continuous transition, but up to a few hundred σ, c_k is well stabilized with system size (see [22]). Moreover, no pressure loop is observed in the equation of state, and the compressibility remains very small. The system is clearly in a single phase. Unlike the liquid-hexatic transition, the hexatic-solid transition therefore follows the KTHNY scenario, and is continuous. The single-phase hexatic regime is confined to a density interval η ∈ [0.716, 0.720]. Although narrow, it is an order of magnitude larger than the scale set by density fluctuations for our largest systems and can be easily resolved (see [22]). In the hexatic phase, the orientational correlations decay extremely slowly.
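The two-dimensional pair-correlation histogram g(∆r) described above can be sketched as follows. This is a direct O(N²) illustration with invented binning parameters; the rotation angle that aligns ∆x with the sample orientation is passed in as an argument (for a six-fold order parameter one would typically use arg(Ψ)/6), and production code would use neighbour lists to reach |∆r| ≤ rmax efficiently.

```python
import numpy as np

def pair_histogram(positions, box, bins=256, rmax=20.0, sample_angle=0.0):
    """Two-dimensional histogram of periodic pair separations, with the
    configuration rotated so the sample orientation points along +x."""
    # rotate by -sample_angle so Delta-x aligns with the sample orientation
    c, s = np.cos(-sample_angle), np.sin(-sample_angle)
    R = np.array([[c, -s], [s, c]])
    edges = np.linspace(-rmax, rmax, bins + 1)
    hist = np.zeros((bins, bins))
    n = len(positions)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)           # periodic minimum image
        d = d @ R.T
        keep = (np.abs(d) < rmax).all(axis=1)
        h, _, _ = np.histogram2d(d[keep, 0], d[keep, 1], bins=[edges, edges])
        hist += h + h[::-1, ::-1]              # count both (i,j) and (j,i)
    return hist, edges
```

Normalizing this histogram by the ideal-gas expectation per bin would then give g(∆r) itself.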
The exponent of the orientational correlations is close to zero and negative. It remains far from the lower limit of −1/4 at the continuous KTHNY transition, as this transition is preempted by a first-order instability. The event-chain algorithm is about two orders of magnitude faster than the local Monte Carlo used up to now, allowing us to thermalize for the first time dense systems with up to 1024² disks (a bare-bones sketch of one event-chain move is given at the end of this paper). To illustrate convergence toward thermal equilibrium and to check that hard disks in the window of densities η ∈ [0.700, 0.716] are indeed phase-separated, we show in Fig. 4 two one-week simulations of our largest systems after quenches from radically different initial conditions, namely the (unstable) crystal, with |Ψ| = 1, and the liquid, for which |Ψ| ≃ 0. For both initial conditions, a slow process of coarsening takes place (see Fig. 4a,b). Phase separation is observed after ∼ 10⁶ displacements per disk, and the sample orientation takes on similar absolute values (see Fig. 4c). Effective simulation times of many earlier calculations were much shorter [14,15], and the simulations remained in an out-of-equilibrium state which is homogeneous on large length scales, whereas the thermalized system is phase-separated and therefore inhomogeneous. The production runs for N = 1024² were obtained from Markov chains with running times of nine months, 30 times larger than those of Fig. 4a,b. The solution of the melting problem presented in this work provides the starting point for the understanding of melting in films, suspensions, and other soft-condensed-matter systems. The insights obtained combine thermodynamic reasoning with powerful tools: advanced simulation algorithms, direct visualization, and a failsafe analysis of correlations. These tools will all be widely applicable, for example to study the cross-over from two to three-dimensional melting as it is realized experimentally with spheres under different confinement conditions [17]. In simple systems such as hard disks and spheres, entropic and elastic effects have the same origin: elastic forces are entropically induced. For general interaction potentials, entropy and elasticity are no longer strictly linked and order-disorder transitions, which can then take place as a function of temperature or of density, might realize other melting scenarios [27]. Theoretical, computational and experimental research on more complex microscopic models will build on the hard-disk solution obtained in this work. We are indebted to K. Binder and D. R. Nelson for helpful discussions and correspondence. We thank J. … manuscript. (Caption of Fig. 4: Approach to thermal equilibrium from different initial conditions. a,b: 1024² hard disks at density η = 0.708, after a quench from a high-density crystal (a) and from a low-density liquid (b), showing coarsening leading to phase separation (color code for Ψ_k as in Fig. 1b, see also [22]). Each of the runs takes about one week of CPU time. c: absolute value of the sample orientation for the simulations in a,b, compared to runs with the local Monte Carlo algorithm from the same initial conditions (time in attempted displacements per disk). The correlation time of the event-chain algorithm, on the order of 10⁶ displacements per disk, estimated from c, agrees with the correlation time estimated in our production runs with 6 × 10⁷ total displacements per disk.)
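A bare-bones sketch of one straight event-chain move for hard disks follows, assuming a periodic square box and a chain direction fixed along +x; the production algorithm of [21] additionally alternates chain directions and uses cell lists for the collision search, so this conveys the idea rather than the performance. All parameters in the usage comment are illustrative.

```python
import numpy as np

def event_chain_move(pos, radius, box, chain_length, rng):
    """One straight event chain along +x for hard disks of the given radius
    (contact distance 2*radius) in a periodic box. The struck disk inherits
    the remaining displacement budget. O(N) collision search per event."""
    n = len(pos)
    k = int(rng.integers(n))
    budget = chain_length
    while budget > 0.0:
        d = pos - pos[k]
        d -= box * np.round(d / box)                  # minimum-image vectors
        dy = d[:, 1]
        mask = (np.abs(dy) < 2 * radius) & (np.arange(n) != k)
        # distance disk k can travel along +x before touching each candidate
        gap = d[mask, 0] - np.sqrt(4.0 * radius**2 - dy[mask] ** 2)
        gap = np.where(gap > 0.0, gap, gap + box[0])  # wrap disks behind k
        if gap.size and gap.min() < budget:
            j = int(np.argmin(gap))
            travel = gap[j]
            nxt = int(np.flatnonzero(mask)[j])        # lifted-disk transfer
        else:
            travel, nxt = budget, k                   # budget spent in free flight
        pos[k, 0] = (pos[k, 0] + travel) % box[0]
        budget -= travel
        k = nxt
    return pos

# Usage sketch (illustrative parameters):
# pos = event_chain_move(pos, radius=0.5, box=np.array([50.0, 50.0]),
#                        chain_length=5.0, rng=np.random.default_rng(0))
```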
2011-08-30T15:07:46.000Z
2011-02-20T00:00:00.000
{ "year": 2011, "sha1": "3f03a8d6c70ea5d44efef0d9dd7882623714f4fa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1102.4094", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3f03a8d6c70ea5d44efef0d9dd7882623714f4fa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
18245223
pes2o/s2orc
v3-fos-license
Abdominal Subcutaneous Fat Thickness Measured by Ultrasonography Correlates with Hyperlipidemia and Steatohepatitis in Obese Children Purpose The aim of this study is to evaluate the relationship between abdominal subcutaneous fat thickness measured by ultrasonography (US) and serum lipid profile and liver transaminases in obese children. Methods One hundred and sixty-six children diagnosed with obesity from May 2001 to December 2013 were included in this study. Data on serum lipid profile and liver transaminases were collected from clinical records. Abdominal subcutaneous fat thickness and grade of hepatic steatosis were evaluated by US. Results Of the 166 children, 107 were diagnosed with hepatic steatosis by US, 46 with grade I, 56 with grade II, and five children with grade III. According to the grade of hepatic steatosis, the average values of midline abdominal subcutaneous fat thickness and right flank abdominal subcutaneous fat thickness measured 2.9±0.8 cm and 1.9±0.7 cm in the normal group, 3.3±0.8 cm and 2.0±0.7 cm in grade I, 3.8±0.8 cm and 2.3±0.8 cm in grade II, and 4.1±0.8 cm and 2.8±1.4 cm in grade III, respectively. Abdominal subcutaneous fat thickness correlated with grade of hepatic steatosis (p<0.01). In addition, abdominal subcutaneous fat thickness correlated with concentration of serum lipids and liver transaminases in the age group of 12-14 years (p<0.01). Conclusion Abdominal subcutaneous fat thickness measured by US can be used as a reliable predictor of possible hyperlipidemia and steatohepatitis in children, especially during the adolescent stage. INTRODUCTION According to a report from the World Health Organization in 2011, the prevalence of obesity has doubled in the last 30 years [1], and the population of overweight children and adolescents under 18 years of age is 17 million with an annual increase of 0.5-1 percent [2]. Childhood obesity can cause many complications such as hyperlipidemia and steatohepatitis at an early age. It may also lead to obesity in adulthood and thereby result in many lifestyle diseases [3,4] and affect quality of life [5,6]. Lee et al. [7] reported that 28.3% and 16.7% of obese children have above normal levels of serum triglycerides and total cholesterol, respectively. Fatty liver, as a result of obesity, can progress to liver cirrhosis [8], and in a report by Zou et al. [9], 55.7% of obese children were found to have non-alcoholic fatty liver disease. To measure the thickness of visceral and subcutaneous fat, various modalities of imaging have been tried on adults, such as computed tomography, dual-energy X-ray absorptiometry, and abdominal ultrasonography (US) [10][11][12]. In children, there have been attempts to measure adipose tissue distribution using magnetic resonance imaging and to correlate the measurements with the body mass index (BMI) or elevated serum aminotransferases [13]. However, abdominal US, which is a relatively easy and safe method of imaging for children, has yet to be evaluated for the ability to measure the abdominal subcutaneous fat thickness and to estimate the degree of hyperlipidemia or steatohepatitis based on the measurements. The aim of this study is to measure abdominal subcutaneous fat thickness using abdominal US in children with obesity and to evaluate the relationship between abdominal subcutaneous fat thickness and hyperlipidemia or steatohepatitis.
MATERIALS AND METHODS The BMI of all the children brought to Gachon University Gil Medical Center in Incheon, Korea from May 2001 to December 2013 was calculated using their weight and height. Using the growth chart issued by the Korea Centers for Disease Control and Prevention and the Korean Pediatric Society in 2007, BMI above the 95th percentile was defined as obese. To be included in this study, a child diagnosed with obesity had to have undergone abdominal US, with serum lipid and liver aminotransferase levels measured within two weeks before or after the US examination. A total of 166 children were included, and the data were collected retrospectively. The children were divided into four groups: toddlers and preschool age children (group A, 2-5 years old), early elementary school age children (group B, 6-8 years old), late elementary school age children (group C, 9-11 years old), and adolescents in middle school (group D, 12-14 years old). Clinical characteristics of the different age groups are available in Table 1. Diagnosis of hepatic steatosis Hepatic steatosis was diagnosed by four experienced pediatric radiologists, using US (ultrasound system, Acuson Sequoia-512 [Siemens, Berlin, Germany] and ultrasound system, iU22 [Philips, Amsterdam, Netherlands]). A 6.0 MHz probe was used to evaluate the echogenicity of the liver in the diagnosis of hepatic steatosis. If the echogenicity of the liver was greater than that of the right kidney, hepatic steatosis was suspected and graded as follows [14]: Grade I (mild): slight diffuse increase in the fine echoes in the hepatic parenchyma with normal visualization of the diaphragm and intrahepatic vessel borders. Grade II (moderate): moderate diffuse increase in the fine echoes with slightly impaired visualization of the intrahepatic vessels and diaphragm. Grade III (severe): marked increase in fine echoes with poor or no visualization of the intrahepatic vessel borders, diaphragm, and posterior portion of the right lobe of the liver. Measurements of abdominal subcutaneous fat Abdominal subcutaneous fat thickness was measured in the supine position during normal respiration with minimal pressure applied by the US probe. Midline abdominal subcutaneous fat thickness (MASFT) was measured transversely at one centimeter caudal to the umbilicus level and the right flank abdominal subcutaneous fat thickness (RFASFT) was measured coronally at two locations of the right flank, with the average value being recorded (Fig. 1). Measurements of MASFT and RFASFT of all 166 children were correlated with other parameters. Statistical analysis All statistical analysis was carried out using SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA), and all data were expressed as mean±standard deviation. Correlations between parameters were analyzed using Spearman's rank correlation coefficient, and statistical significance was defined as p<0.05. Correlation between abdominal subcutaneous fat thickness and grade of hepatic steatosis was analyzed using one-way ANOVA, and a post-hoc test was carried out using Duncan's multiple range test (a sketch of analogous computations with open-source tools is given after this section). The research process was approved by Gachon University Gil Medical Center Institutional Review Board (GCIRB2014-247). Clinical characteristics Of the 166 children included in this study, 110 were male and 56 were female. The average age was 9.4±2.7 years (9.6±2.6 years for boys and 9.1±2.9 years for girls). The average weight of all the children was 54.4±16.8 kg (Table 1).
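The statistical analysis described above (Spearman rank correlation and one-way ANOVA across steatosis grades, with post-hoc comparisons) can be mirrored in open-source software as sketched below. The arrays are placeholders generated to resemble the reported group means, not the study's data, and since Duncan's multiple range test is not available in SciPy, Tukey's HSD is shown as a stand-in.

```python
import numpy as np
from scipy.stats import spearmanr, f_oneway, tukey_hsd

rng = np.random.default_rng(7)
# Hypothetical MASFT (cm) for the four steatosis groups (normal, I, II, III)
groups = [rng.normal(mu, 0.8, size=n) for mu, n in
          [(2.9, 59), (3.3, 46), (3.8, 56), (4.1, 5)]]

# One-way ANOVA: does MASFT differ across steatosis grades?
F, p = f_oneway(*groups)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey HSD as a stand-in for the paper's Duncan post-hoc comparisons
print(tukey_hsd(*groups))

# Spearman correlation between BMI and MASFT (placeholder data)
bmi = rng.normal(25, 3, size=166)
masft = 0.1 * bmi + rng.normal(0, 0.5, size=166)
rho, p_rho = spearmanr(bmi, masft)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")
```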
Serum laboratory results The average values of triglyceride (138.0±71), total cholesterol, high-density lipoprotein (HDL)-cholesterol, and low-density lipoprotein (LDL)-cholesterol are given in Table 1. Abdominal US Of the 166 children who underwent abdominal US, 107 children were diagnosed with hepatic steatosis; 46 children with grade I (mild), 56 with grade II (moderate), and 5 with grade III (severe) (Table 2). According to the grade of hepatic steatosis, the average values of MASFT and RFASFT were 2.9±0.8 cm and 1.9±0.7 cm in the normal group, 3.3±0.8 cm and 2.0±0.7 cm in grade I, 3.8±0.8 cm and 2.3±0.8 cm in grade II, and 4.1±0.8 cm and 2.8±1.4 cm in grade III, respectively (Table 3). Correlation between abdominal subcutaneous fat thickness and the grade of hepatic steatosis observed on abdominal US MASFT and the grade of hepatic steatosis showed a statistically significant correlation (p=0.000). Post-hoc comparison showed the average value of MASFT in the normal group was not significantly thinner than that of grade I, but was significantly thinner than that of grades II and III. The average value of MASFT of grade I was significantly thinner than that of grade III (Table 3, Fig. 2). RFASFT also had a statistically significant correlation with the grade of hepatic steatosis (p=0.007). Post-hoc comparison showed the average values of RFASFT in the normal group, grade I, and grade II were significantly thinner than that of grade III (Table 3, Fig. 3). DISCUSSION There have been studies that have attempted to measure the abdominal fatty tissues in children through computed tomography or magnetic resonance imaging and to observe the relationship between the abdominal fatty tissues, BMI, and liver aminotransferases [13,15]. Even though abdominal US provides safe and easy imaging, there are no studies that address the relationship between abdominal subcutaneous fat thickness measured by abdominal US and hyperlipidemia or steatohepatitis in children. This study addressed this issue and found that abdominal subcutaneous fat thickness (MASFT and RFASFT) correlated with BMI and hepatic steatosis. In addition, hyperlipidemia and steatohepatitis correlated with the abdominal subcutaneous fat thickness (MASFT and RFASFT) in the age group of 12-14 years (group D). This implies that measuring the abdominal subcutaneous fat thickness will aid in the estimation of complications that result from obesity in the adolescent population. To ensure a more accurate measurement of abdominal subcutaneous fat thickness, both transverse measurement (MASFT) and coronal measurement (RFASFT) of the abdominal subcutaneous fat thickness were used. Prior studies have found a higher degree of obesity to be related to higher serum levels of lipids [7,16,17]. This study also found that higher BMI was significantly related to higher levels of total cholesterol (ρ=0.163, p=0.036) and LDL-cholesterol (ρ=0.155, p=0.046). Prolonged obesity in children leads to chronic complications, such as type 2 diabetes, hypertension, dyslipidemia, and carotid-artery sclerosis [4]. Fatty liver disease has been associated with obesity in children [9,18]. Zou et al. [9] and Boyraz et al. [18] reported that 55.7% and 48.1% of obese children have non-alcoholic fatty liver disease, respectively. This study also found that 64% of obese children have hepatic steatosis. Fatty liver disease caused by obesity is more likely to follow a benign course; however, some report various forms of liver damage accompanying obesity [19][20][21].
Andersen and Gluud [22] reviewed 41 papers and reported that amongst the 1,515 obese patients who underwent liver biopsy, 80% had signs of fatty degeneration and 3% had fatty liver cirrhosis. The grade of hepatic steatosis has been correlated with the degree of obesity [23]. This study also found that higher BMI was correlated with thicker MASFT (ρ=0.569, p=0.000) and RFASFT (ρ=0.452, p=0.000), and that abdominal subcutaneous fat thickness differed significantly between the grades of hepatic steatosis. Therefore, measurement of abdominal subcutaneous fat thickness can aid in the estimation of the grade of hepatic steatosis and can have clinical implications. Especially in the age group of 12-14 years, thicker abdominal subcutaneous fat was correlated with higher levels of serum total cholesterol (ρ=0.466, p=0.006), LDL-cholesterol (ρ=0.563, p=0.001), AST (ρ=0.477, p=0.004), and ALT (ρ=0.564, p=0.001). Therefore, in children over 12 years of age undergoing abdominal US, abdominal subcutaneous fat thickness measured by US will aid in the estimation of the degree of hyperlipidemia and steatohepatitis, in addition to measuring serum lipid profile and liver transaminases. The limitations of this study are as follows: four radiologists participated in the measurement of the MASFT, RFASFT, and the grade of hepatic steatosis, and inter-observer differences may have existed but were not corrected for. In addition, biopsies were not performed in any of our patients, thus histological confirmation of the grade of hepatic steatosis could not be carried out. In conclusion, abdominal subcutaneous fat thickness measured by US can be used as a reliable predictor of possible hyperlipidemia and steatohepatitis in children, especially during the adolescent stage.
2016-05-04T20:20:58.661Z
2015-06-01T00:00:00.000
{ "year": 2015, "sha1": "8871b0848594149ebfd31931109d80fcc211665c", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4493243?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8871b0848594149ebfd31931109d80fcc211665c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125161987
pes2o/s2orc
v3-fos-license
On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics When developing a particle accelerator for generating high-precision beams, the injection system design is of importance, because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires carrying out simulation of beam dynamics in electrostatic fields. For the external field simulation we use the new approach proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for solving the problems of beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Both beam dynamics and electric field simulations in the injection system, using the analytical approach and the finite difference method, have been carried out, and the results are presented in this paper. Introduction The paper mostly focuses on exploring ways of algorithmic and software realization of the optimal design methodology in the beam dynamics area, which is proposed in [1-3] and intended to be applied in injection systems producing high-precision beams. The development of the optimal design methodology for beam dynamics is considered to be a rather complex and laborious problem. It has given rise to a large and growing body of research [4][5][6][7][8][9][10][11][12][13][14]. The main issue in the optimal design techniques that we try to address in this paper consists in finding an analytical expression for the control potential function defined over the domain contour, together with developing an algorithm that could compute the electrostatic field potential U inside the working domain using this control function. The control function is to be used in further optimization procedures. The paper starts with an algorithmic study relating to the numerical solution of the integral of Cauchy type that is applied in the given problem to calculate the electrostatic field (a generic quadrature sketch is given after this section). Then we consider a case study model of the axially symmetric field in an injection system. Some numerical data obtained in a C++ computer simulation are presented. A possible basic algorithm of finding the integral of Cauchy type Let us consider a three-dimensional simply connected bounded domain having axial symmetry, and let G be its diametric cross section. This two-dimensional domain G is bounded by a contour L that is supposed to be a smooth closed curve. Hereinafter, the real plane R² containing the domain G will be identified with the complex plane. The complex potential H of the three-dimensional external field can then be represented as a contour integral of Cauchy type over L, equation (1) (see [3], p. 97). The real part of H, i.e. the function U = Re H, will be considered as the function determining the electrostatic field in the three-dimensional domain. The complex contour integral (1) can be written in component form; therefore, the potential function U can be defined as a real line integral. In a similar manner, we obtain the other constituents of the potential. So, at a point ζ, the potential U(ζ) is given by the sum of these contributions, equation (8). This expression evidently contains pairs of terms in which the arguments of the arctan function are mutually inverse. Taking into account the trigonometric identity arctan x + arctan(1/x) = (π/2) sgn x, we can represent (8) as a sum of binomials, each formed by such a pair of terms.
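Since the equation bodies were lost in extraction, the following sketch shows only the generic numerical idea: approximating a Cauchy-type contour integral H(ζ) = (1/(2πi)) ∮_L φ(z)/(z − ζ) dz by trapezoidal quadrature over a discretized contour. The kernel actually used in [3] for the axially symmetric field may differ from this textbook form, and the function names, test contour, and boundary potential below are illustrative.

```python
import numpy as np

def cauchy_integral(zeta, z_nodes, phi_nodes):
    """Trapezoid-rule approximation of the Cauchy-type integral
        H(zeta) = (1/(2*pi*i)) * closed-contour integral of phi(z)/(z - zeta) dz
    over a contour sampled at z_nodes (complex, ordered, closed by wrap-around)
    carrying boundary-potential samples phi_nodes."""
    dz = np.roll(z_nodes, -1) - z_nodes       # chord vectors along the contour
    f = phi_nodes / (z_nodes - zeta)
    f_next = np.roll(phi_nodes, -1) / (np.roll(z_nodes, -1) - zeta)
    return np.sum(0.5 * (f + f_next) * dz) / (2j * np.pi)

# Illustrative test: unit circle with phi(z) = Re z; evaluate inside the contour
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
z = np.exp(1j * t)
phi = z.real
H = cauchy_integral(0.3 + 0.1j, z, phi)
print(H.real)  # the real part plays the role of the potential U = Re H
               # (analytically H(zeta) = zeta/2 here, so Re H ~ 0.15)
```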
For instance, taking the first term from (5) and the second from (6), and similarly summing the first term from (4) and the second from (5), each such pair collapses by the identity above; the summation of all four pairs in (8) then yields the closed-form expression (9). Note that the formula (4) can be thought of as a special case of (9) when the boundary potential is constant, i.e. when k₁ = 0.

Algorithm description and a case study simulation
The algorithm considered in the paper includes the following stages: computing the electrostatic field in the working domain of an injection system under the given initial configuration and potentials of the electrodes. The computation is carried out using, e.g., the iterative Liebmann procedure on a square grid [15], as is done in this paper (a minimal sketch of this relaxation procedure is given below). Let us consider these stages in greater detail. As an example in the case study we take an axially symmetric working domain for an injection system with three electrodes, whose cross-section is shown in Fig. 2; the corresponding profile of the boundary potential is shown in Fig. 3 (curve 1). To approximate the profile of the contour potential, we have chosen a rational-fraction approximation. Calculations were performed using both stepwise and piecewise linear approximations of the boundary potential. The corresponding values of the potential function on the axis, calculated on the basis of these approximations, differ very slightly (less than 0.1%), apparently owing to the fine mesh.

Conclusions
The numerical technique using the integral of Cauchy type, intended for calculating the electrostatic field inside an axially symmetric domain, has been considered. The simulation results have shown that the given approach is practicable but requires further theoretical and algorithmic study to achieve better accuracy. The algorithm proposed in the paper can be used in computing and optimizing the beam dynamics in injection systems (see, e.g., [16]).
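Since the text names Liebmann's iterative procedure for the field-computation stage, a minimal Python sketch of that relaxation is given here. All names and the toy boundary values are hypothetical, and the stencil shown is the plain two-dimensional Laplacian; the axially symmetric domain of the paper would need the r-weighted axisymmetric operator instead.

import numpy as np

def liebmann_laplace(potential, fixed_mask, tol=1e-6, max_iter=20000):
    # Liebmann (Gauss-Seidel) relaxation of Laplace's equation on a square
    # grid: `potential` holds the prescribed electrode potentials where
    # `fixed_mask` is True; interior nodes are iterated to convergence.
    u = potential.astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                if fixed_mask[i, j]:
                    continue
                new = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new  # in-place update (Gauss-Seidel)
        if max_change < tol:
            break
    return u

# Toy example: 50x50 box, left wall held at 1 kV, other walls grounded.
u0 = np.zeros((50, 50))
mask = np.zeros_like(u0, dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
u0[:, 0] = 1000.0
field = liebmann_laplace(u0, mask)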
2019-04-22T13:08:55.268Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "4608f0f14f76b70037d9edf26460eb9133b28f39", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/941/1/012093", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9422a16eeffc2fd8072934b9fb4c3cfefc181b1e", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
669435
pes2o/s2orc
v3-fos-license
New Synthetic Receptors for Molecular Recognition of Anions and Their Practical Applications
Awarding Institute: National Institute of Technology Karnataka (India) Date Awarded: September 11th, 2014 Supervisors: Dr. Darshak R. Trivedi, National Institute of Technology Karnataka (India)

Schiff bases as fluoride ion receptors
A receptor based on the Schiff base 1-naphthohydrazide was synthesised for the selective detection of fluoride ions. (E)-N'-(4-Nitrobenzylidene)-1-naphthohydrazide (S1R1) was found to be selective towards fluoride ions over other anions in organic media (Figure 1A). The presence of a carbonyl group in the receptor makes the proton of the binding site more acidic, and therefore, the receptor can become deprotonated with addition of a basic anion such as fluoride, giving rise to an observable colour change. The mechanism involved in the colour change was determined to be deprotonation of the acidic proton followed by stabilization of the complex through an intramolecular charge-transfer (ICT) transition, as evidenced by the formation of an HF2− peak in 1H NMR titration. However, the acidic proton is easily solvated even with trace amounts of water, and because of this, S1R1 is unable to detect fluoride in organo-aqueous media. An alternative receptor, (E)-N'-(2-hydroxy-3-methoxybenzylidene)-1-naphthohydrazide (S1R2), with a hydroxy functionality containing a highly base-labile hydroxy group, was synthesised that detects basic fluoride ions via a deprotonation mechanism, not only in organic solutions (Figure 1B) but also in organo-aqueous media (Figure 1C). In organo-aqueous solution, addition of the basic fluoride ion leads to deprotonation of the base-labile hydroxy proton, giving rise to a colour change that allows colorimetric detection.

Detection of inorganic fluoride ion in aqueous media
New receptors were designed and synthesized for the colorimetric detection of fluoride ion based on the benzohydrazide scaffold. N'-Benzoyl-4-nitrobenzohydrazide (S2R1) was found to be highly selective towards fluoride ion over other anions. This receptor was able to detect inorganic fluoride, such as NaF, in aqueous solution (Figure 2). The presence of two carbonyl groups in the receptor makes the NH proton highly acidic; therefore, these receptors are capable of competing with water molecules to bind fluoride ions. In the presence of NaF, S2R1 in aqueous solutions underwent a significant colour change from colourless to yellow with Δλmax of 149 nm. The mechanism involved in the colour change was determined to be deprotonation, formation of an imidic acid intermediate, followed by stabilization of the complex through ICT. This was confirmed by 1H NMR titrations, where the formation of the imidic acid tautomer was observed. In addition, S2R1 successfully detected fluoride ions in sea water and commercially available mouth wash (Figure 2), and the amount of fluoride present in the samples could be quantified using UV/vis spectroscopy.

Figure 1. Change in colour of A) S1R1 and B) S1R2: a) free receptor, b) F−, c) Cl−, d) Br−, e) I−, f) AcO−, g) HSO4− and h) H2PO4− ions; C) Colour change of S1R2 in MeCN/H2O (9:1) after adding F− ions: a) free receptor, b) S1R2 + NaF (3 equiv) and c) S1R2 + TBAF (3 equiv). Reproduced with permission from Ref.

This property was applied to determine the percentage composition of binary solvent mixtures. S3R1 was able to detect Cu2+ ions colorimetrically, where it exhibited a colour change from pale yellow to orange-red.
Using this dual detection property, the receptor was subjected to molecular logic-gate applications wherein it showed on-off switching operations, where the receptor gave output signals corresponding to the INHIBIT circuit with input signals.

Colorimetric discrimination of isomeric dicarboxylate anions
A series of new receptors were synthesised to demonstrate geometrical isomeric discrimination of dicarboxylate anions, in particular, maleate and fumarate ions. Among these receptors, 2,2'-{(1E,1'E)-[1,4- (Figure 4A). The colour change arises due to a bathochromic shift of 133 nm in the UV-vis spectrum. This shift occurs because of the formation of a charge-transfer complex between the receptors and the maleate ion. The maleate ion binds to the receptor through hydrogen bonding, as confirmed by 1H NMR titrations. The selectivity of the receptors for the maleate ion and the notable colour change can be correlated with the change in receptor orientation upon binding with the maleate ion. In addition, these receptors were shown to detect fluoride ions in a colorimetric manner by a colour change from pale yellow to blood red (Figure 4B). This colorimetric detection was made possible by the intermolecular proton-transfer interaction established between the phenolic oxygen and the fluoride ions, which further leads to intramolecular charge transfer between maleate ions and the receptors.

Discrimination of maleate over fumarate and ratiometric fluoride ion detection
(N',N'''E,N',]bis(4-nitrobenzohydrazide) (S5R1) was synthesised for the colorimetric discrimination of maleate over fumarate ions. S5R1, with a benzohydrazide functional group as a binding site, exhibited a significant colour change from colourless to orange-red only in the presence of maleate ions, whereas S5R1 in the presence of fumarate ions failed to exhibit any colour change (Figure 5A). The colour change arises due to the formation of an intermolecular hydrogen-bond complex between the maleate ion and the receptor, as confirmed by 1H NMR titrations. In contrast, a receptor that does not contain carbonyl groups has restricted flexibility and steric hindrance, and therefore does not show any response either with maleate ions or with fumarate ions. S5R1 was examined for colorimetric detection of fluoride ions, wherein the change in colour was observed along with the concentration of fluoride ions. S5R1 displayed a colour change from colourless to orange upon adding one equivalent of fluoride ions. Further, at higher concentrations of fluoride ions, the orange colour transformed to blood red (Figure 5B). In addition, these receptors were able to extract the fluoride ions from aqueous media to organic solutions, which resulted in a colour change. The practical application of these receptors was evaluated by extracting fluoride ions from sea water. Though S6R1 failed to extract fluoride ions from sea water, S6R2 extracted fluoride ions from sea water with 99% efficiency (Figure 6B). In addition, S6R2 was able to quantify the amount of fluoride ions present in the sea water, and the level was found to be 1.4 ppm, which is in good agreement with earlier reports.

Keywords: anion receptors · charge transfer · colorimetric detection · extraction · solvatochromism
Publications arising from this work:
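The text states that fluoride levels (e.g., the 1.4 ppm found in sea water) were quantified by UV/vis spectroscopy but gives no numbers for the calibration itself. A minimal sketch of such a quantification via a linear (Beer-Lambert) calibration curve follows; every concentration and absorbance value below is invented for illustration and is not taken from the thesis.

import numpy as np

# Hypothetical calibration: absorbance of the receptor-fluoride complex at
# its lambda_max for standards of known fluoride concentration (ppm).
standards_ppm = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.02, 0.11, 0.21, 0.42, 0.83])

# Beer-Lambert behaviour in the linear range: A = m*c + b.
m, b = np.polyfit(standards_ppm, absorbance, 1)

# Back-calculate the fluoride level of an unknown (e.g., sea water) sample.
a_sample = 0.31
c_sample = (a_sample - b) / m
print(f"estimated fluoride: {c_sample:.2f} ppm")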
2016-05-12T22:15:10.714Z
2015-07-06T00:00:00.000
{ "year": 2015, "sha1": "efb9d587f8b160224dd8981f3a52fd4c5315d989", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/open.201500119", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "efb9d587f8b160224dd8981f3a52fd4c5315d989", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
16696846
pes2o/s2orc
v3-fos-license
VS-501: a novel, nonabsorbed, calcium- and aluminum-free, highly effective phosphate binder derived from natural plant polymer
Inadequate control of serum phosphate in chronic kidney disease can lead to pathologies of clinical importance. Effectiveness of on-market phosphate binders is limited by safety concerns and low compliance due to high pill size/burden and gastrointestinal (GI) discomfort. VS-501 is a nonabsorbed, calcium- and aluminum-free, chemically modified, plant-derived polymer. In vitro studies show that VS-501 has a high density and a low swell volume when exposed to simulated gastric fluid (vs. sevelamer). When male Sprague–Dawley (SD) rats on normal diet were treated with VS-501 or sevelamer, serum phosphate was not significantly altered, but urinary phosphate levels decreased by >90%. VS-501 had no effect on serum calcium (Ca) or urinary Ca, while 3% sevelamer significantly increased serum and urine Ca. In 5/6 nephrectomized (NX) uremic SD rats on high-phosphate diet, increasing dietary phosphate led to an increase in serum and urine phosphate, which was prevented in rats treated with VS-501 or sevelamer (0.2–5% in food). High-phosphate diet also increased serum fibroblast growth factor-23 and parathyroid hormone in 5/6 NX rats, which was prevented by VS-501 or sevelamer. VS-501 or sevelamer increased fecal phosphate in a dose-dependent manner. More aortic calcification was observed in 5/6 NX rats treated with 5% sevelamer, while VS-501 and sevelamer did not show significant effects on cardiac parameters, fibrosis, intestine histology, and intestinal sodium-dependent phosphate cotransporter gene expression. These results suggest that VS-501 is effective in binding phosphate with no effects on calcium homeostasis, and may reduce pill burden and GI side effects.

Introduction
Chronic kidney disease (CKD) is a serious public health problem. According to the National Kidney Foundation, 26 million people in America (~13% of the US population) have CKD and millions more are at an increased risk. CKD progresses through five stages; Stage 5 CKD requires renal replacement therapy (dialysis or transplantation). From information provided by the National Kidney Foundation, CKD triggers many other health care issues such as anemia, cardiovascular diseases, hyperphosphatemia, secondary hyperparathyroidism, and other complications. The 5-year average mortality rate is ~33% (Tonelli et al. 2006), and the mortality risk increases with disease progression (Go et al. 2004). Inadequate control of serum phosphate levels in CKD can lead to various pathologies of clinical importance such as further deterioration of kidney function, cardiovascular complications, renal osteodystrophy, and increased mortality. Numerous studies have shown that there is a robust association between serum phosphorus levels and all-cause mortality in dialysis-dependent individuals (Block et al. 2004; Kalantar-Zadeh et al. 2006). In predialysis CKD, clinical evidence also demonstrates that elevated serum phosphate is linked to an adverse effect on renal/cardiovascular function and an increased mortality risk, independent of other traditional risk factors (Foley et al. 2005; Eddington et al. 2010; Bellasi et al. 2011). Eddington et al.
(2010) further showed that Stage 3/4 predialysis CKD patients who had serum phosphate below the targets recommended in the K/DOQI (2003) guidelines had the best survival, even though guidelines for serum phosphate in CKD were devised using only studies involving dialysis patients (Eddington et al. 2010). A majority of currently available oral phosphate binders, that is, calcium-containing and calcium-free phosphate binders, work by binding phosphate in the gastrointestinal (GI) tract, leading to less phosphate being absorbed into the body. Current therapies (Calciumacetat-Nefro, Renagel, PhosLo, Fosrenol, etc.) have the following shortcomings: (1) suboptimal and inefficient phosphate binding; (2) high pill burden (a large number of pills per day and a large pill size) and unpalatability, and hence low compliance; (3) side effects in the GI tract; and (4) safety concerns such as hypercalcemia, aluminum toxicity, negative influence on other medication, and accumulation in organs (Chiu et al. 2009; Wang et al. 2013). Patient compliance with on-market drugs is a significant clinical management issue because of GI tolerability and pill burden (size and number). Considering the importance of controlling phosphate metabolism in CKD patients, there is a need for improved phosphate-controlling drugs. To this end, we have discovered VS-501, which is derived from a natural polymer commonly used in the food industry. VS-501 is nonabsorbed, calcium- and aluminum-free, and effectively binds phosphate in the GI tract. The preclinical studies summarized in this report show that VS-501 is effective in binding phosphate with no effects on calcium homeostasis, and may have potential in reducing pill burden and GI side effects.

Materials and Methods
Materials
VS-501 was made by Vidasym (Chicago, IL). The synthesis has been published previously (Wu-Wong 2013). Other reagents were of analytical grade.

In vitro polymer characterization
The density of the compressed powder was determined by a helium pycnometer. For swelling volume determination, VS-501 or sevelamer at 0.1 g (dry powder) was incubated with 5 mL of simulated gastric fluid (0.2% [w/v] NaCl, 0.7% [v/v] hydrochloric acid [HCl], without pepsin) at 37°C for different periods of time. To determine phosphate-binding capacity in vitro, VS-501 at 0.1 g was incubated with 10 mL of a 20 mmol/L phosphate solution (1.37 mL of 85% phosphoric acid, 3.18 g of sodium carbonate and 4.68 g of NaCl in 1 L of water) at room temperature at different pH (as indicated) for 24 h. In separate studies, VS-501 at 0.1 g was incubated with a phosphate solution containing different phosphate concentrations (as indicated) and sodium carbonate and NaCl as described above at neutral pH for 24 h at room temperature. The samples were centrifuged and the supernatant collected for phosphate determination using a phosphate colorimetric assay (Catalog #K410-500; BioVision, Milpitas, CA); a minimal sketch of converting such supernatant readings into a binding isotherm is given below.

Normal rat studies
Male Sprague-Dawley (SD) rats were fed a normal diet (containing 1% calcium and 0.7% phosphorus in powder form) containing VS-501 or sevelamer-HCl (concentrations as indicated) in food for 6 days. On the first (before dosing) and last days of treatment, rats were placed in metabolic cages with one rat per cage. Urine and/or feces samples were collected for 24 h. Blood samples were collected from each rat for serum preparation. Physiological parameters were determined as described below.
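As flagged above, this sketch shows how supernatant phosphate readings from the assay could be converted into bound amounts and fitted to a Langmuir isotherm, the saturable-binding model consistent with the Bmax/Kd values reported later in Results. SciPy is assumed, and all concentration values are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_free, b_max, kd):
    # Langmuir isotherm: bound phosphate (mmol/g) vs free phosphate (mmol/L).
    return b_max * c_free / (kd + c_free)

# Hypothetical readout: 0.1 g polymer in 10 mL; initial and supernatant
# phosphate concentrations in mmol/L.
c0 = np.array([4.2, 9.3, 16.5, 28.7, 50.4])
c_sup = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
mass_g, vol_l = 0.1, 0.010

# Phosphate removed from solution, expressed per gram of polymer.
bound = (c0 - c_sup) * vol_l / mass_g  # mmol/g

params, _ = curve_fit(langmuir, c_sup, bound, p0=(1.0, 10.0))
print(f"Bmax ~ {params[0]:.2f} mmol/g, Kd ~ {params[1]:.1f} mmol/L")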
This and all other animal studies were conducted under the auspices of the Office of Animal Care and Institutional Biosafety, University of Illinois at Chicago. The study conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication no. 85-23, revised 1996).

5/6 nephrectomized uremic rat studies
The 5/6 nephrectomized (NX) rats were prepared and handled as previously described (Wu-Wong et al. 2011, 2013a). Briefly, nephrectomy was performed on male Sprague-Dawley rats weighing ~200 g with a standard two-step surgical ablation procedure. At 6 weeks after the second surgery, when uremia was firmly established (as indicated by elevated serum creatinine and blood urea nitrogen [BUN] levels), rats were fed a high-phosphate diet (normal diet containing 1% calcium and 0.7% phosphorus in powder form plus additional KH2PO4 at 0.67% and K2HPO4 at 0.33% by dry weight in food) and treated with VS-501 or sevelamer carbonate (concentrations as indicated) in food for 4 weeks. Sevelamer carbonate was tested in these studies to gain additional information on the newer version of sevelamer. Rats were placed in metabolic cages with one rat per cage on Days 0 (predosing), 14 (Week 2), and 28 (Week 4); urine and feces samples were collected during a period of 24 h. Blood samples were collected from each rat for serum preparation. Physiological parameters were determined as described below.

Measurements of physiological parameters
Serum and urine calcium (Ca) was measured using a Stanbio LiquiColor calcium assay kit (Boerne, TX). Serum parathyroid hormone (PTH) was measured using a rat intact PTH ELISA kit obtained from Immutopics (San Clemente, CA). Serum fibroblast growth factor (FGF)-23 was determined using a rat/mouse FGF-23 (C-Term) ELISA kit obtained from Immutopics. The serum and urine phosphorus/phosphate (Pi) levels were determined using a phosphate colorimetric assay (Catalog #K410-500; BioVision). Serum creatinine and BUN concentrations were measured using a chemistry analyzer. For fecal phosphate determination, samples of 2 g from each feces sample were ashed at 800°C for 30 min. Ash was extracted with 5 mL of 12 N HCl by vortexing and shaking at room temperature for ~60 min. The supernatant was collected by centrifugation and neutralized using an equal volume of 12 N NaOH. The mixture was again centrifuged and the supernatant was collected for phosphate determination by the BioVision phosphate colorimetric assay. Total urinary and fecal phosphate levels during a 24-h period were calculated.

Tissue preparation and staining
Tissue samples were fixed in formalin for 1-3 days, and then transferred to 70% alcohol. Samples were embedded in wax and cut into 5-µm sections. Sections were stained with hematoxylin-eosin (H-E). For fibrosis, sections were stained with Masson Trichrome reagent, and imaged and analyzed using a Vectra Intelligent Multispectral Slide Analysis System (Perkin-Elmer, Waltham, MA). For calcification, aorta sections were stained by the von Kossa method and counterstained with nuclear fast red.

Echocardiographic assessment
Animals were sedated with isoflurane (1.5%, inhaled), placed in the decubitus position on a warming pad to maintain normothermia, and the chest shaved/depilated.
Transthoracic echocardiography was conducted using a 17.5 MHz high-resolution transducer plus an integrated system (Vevo 770 High-Resolution Imaging System, VisualSonics, Toronto, Canada), and B-mode, M-mode, pulsed Doppler, and tissue Doppler images were obtained. All cardiac parameters were calculated using VisualSonics Vevo 770 analysis software (v. 3.0.0) with a cardiac measurements package.

Measurement of GI calcium transport
Duodenal calcium absorption was measured ex vivo as described previously (Wu-Wong et al. 2013b). Briefly, segments of proximal small intestine were removed from each rat, everted, and filled with incubation buffer (125 mmol/L NaCl, 10 mmol/L fructose, 0.25 mmol/L CaCl2, 30 mmol/L Tris, pH 7.4 at 37°C). Gut sacs were incubated for 90 min in incubation buffer at 37°C with occasional shaking. At the end of the incubation period, the calcium concentration in the serosal and mucosal compartments was measured and the serosal/mucosal calcium ratio was calculated.

Real-time reverse transcription PCR
Real-time reverse transcription PCR (real-time RT-PCR) was performed with an ABI 7500 Fast Real-Time PCR System (Applied Biosystems, Foster City, CA). Each sample consisted of a final volume of 25 µL containing 200 ng of mRNA, 100 nmol/L (final concentration) each of the forward and reverse PCR primers and 250 nmol/L (final concentration) of the TaqMan probe (Applied Biosystems). Temperature conditions consisted of a step of 30 min at 48°C and a step of 10 min at 95°C, followed by 45 cycles of 60°C for 1 min and 95°C for 15 sec. Data were collected during each extension phase of the PCR reaction and analyzed with a software package (Applied Biosystems). Threshold cycles were determined for each gene.

Data analysis
Differences among different groups were assessed using a one-way analysis of variance (ANOVA) followed by a Dunnett's post hoc test. Statistical comparisons between two treatment groups were performed by unpaired t-test with 95% confidence intervals of difference. (A minimal code sketch of this comparison workflow is given after the efficacy results below.)

In vitro characterization
As mentioned above, large pill size/number is one of the reasons causing patient compliance issues for phosphate binders such as sevelamer to effectively control hyperphosphatemia (Chiu et al. 2009; Wang et al. 2013). The density of a polymer will have a significant impact on its pill size. Thus, we compared the density of VS-501 versus sevelamer. The density of the compressed powder was determined by a helium pycnometer to be 1.91 g/cm³ for VS-501, and 1.27 g/cm³ for sevelamer carbonate (Table 1). GI discomfort is another reason causing patient compliance issues for some on-market phosphate binders such as sevelamer. A larger swelling volume is often associated with more GI discomfort. The swell volume of VS-501 versus sevelamer was determined at different time points after exposure to simulated gastric fluid. As shown in Table 1, the swell volume of VS-501 is much less than that of sevelamer. We then determined the in vitro Pi-binding capacity of VS-501. Figure 1A shows that VS-501 binds phosphate with an estimated maximal binding capacity of 1.3 mmol/g and a Kd of 10 mmol/L. Figure 1B shows that VS-501 binds phosphate within a wide physiologically relevant pH range.

Normal rats on normal diet
Normal rats were chosen to screen compounds because they are easier to handle than the kidney disease uremic rat model (discussed below). We compared the efficacy of VS-501 versus sevelamer in normal rats on normal diet. Serum phosphate (Fig. 2A) was not significantly altered, but urinary phosphate levels (Fig. 2B) were decreased significantly in the VS-501 and sevelamer-HCl treatment groups.
VS-501 had no effect on serum/urinary Ca, while sevelamer-HCl increased serum/urinary Ca (Fig. 2C and D). Figure 2E shows that sevelamer at 3% significantly decreased serum PTH, while VS-501 had a modest effect. As an attempt to assess the impact of VS-501 on drinking and eating patterns, daily water and food consumption trends were tracked, and urine volume and fecal weight were determined in the high-dose groups. Figure 3A and B show the results from daily tracking of water and food consumption in the 3% VS-501 and sevelamer groups; water consumption was consistently higher in the sevelamer group. Figure 3C and D show that sevelamer-HCl at 3% significantly increased urine volume and fecal weight, while VS-501 had a modest effect.

Efficacy in 5/6 NX rats on high-phosphate diet
Since VS-501 is intended for treating hyperphosphatemia associated with CKD, it is important to evaluate the efficacy of VS-501 in a CKD animal model. The CKD field has the advantage of the 5/6 NX uremic rat model, which, albeit a difficult model to handle, is highly predictive of the human condition. In addition, the 5/6 NX rats, similar to human CKD patients, also develop cardiovascular complications such as left ventricular hypertrophy, which makes them useful for assessing a compound's cardiovascular protective effects. Consistent with our previous studies in 5/6 NX rats (Wu-Wong et al. 2010, 2013a), serum BUN and creatinine levels were elevated significantly in 5/6 NX rats, indicating established uremia even at 6 weeks after surgery (Table 2). Treatment with VS-501 or sevelamer carbonate had no significant effects. As shown in Figure 4, increasing dietary phosphate led to a modest increase in serum phosphate and a significant increase in urinary phosphate, which were prevented by treatment with VS-501 or sevelamer; the effects of the drugs were also observed at both time points (Week 2 and Week 4 after dosing). VS-501 or sevelamer at 5% seemed to over-suppress serum and urinary phosphate. No significant changes in serum chloride, potassium, and sodium levels were observed (data not shown). As shown in Figure 5, increasing dietary phosphate led to an increase in fecal phosphate levels in sham and in the groups treated with vehicle (unmodified polymer). The increase in phosphate levels was observed at both time points (Week 2 and Week 4 after dosing). VS-501 and sevelamer further increased fecal phosphate in a dose-dependent manner, suggesting that these polymers carried phosphate into feces. Figure 6 shows that serum PTH and FGF-23 levels were significantly higher in 5/6 NX rats (vs. sham). Increasing phosphate in the diet further elevated serum FGF-23 and PTH levels over time. VS-501 or sevelamer at 1% and 5% effectively prevented the progressive rise over time in serum PTH and FGF-23 levels induced by a high-phosphate diet. As shown in Table 3, increasing dietary phosphate had no significant effects on serum and urinary calcium levels in sham and in the groups treated with vehicle (unmodified polymer) or VS-501. Interestingly, although serum Ca levels were not affected by sevelamer at 1-5%, urinary Ca levels trended higher in a dose-dependent manner.
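As flagged in the Data analysis description, a minimal Python sketch of the stated group-comparison workflow (one-way ANOVA followed by Dunnett's post hoc test against the control) follows. It uses SciPy, whose stats.dunnett requires version 1.11 or newer; the group values are invented for illustration.

import numpy as np
from scipy import stats  # stats.dunnett requires SciPy >= 1.11

# Hypothetical serum-phosphate readings (mg/dL) for a vehicle control
# group and two treatment groups.
vehicle = np.array([9.8, 10.4, 11.1, 10.0, 10.7])
vs501 = np.array([8.1, 7.6, 8.4, 7.9, 8.8])
sevelamer = np.array([8.5, 8.0, 8.9, 8.2, 9.1])

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(vehicle, vs501, sevelamer)

# Dunnett's post hoc test: each treatment compared with the vehicle control.
dunnett = stats.dunnett(vs501, sevelamer, control=vehicle)
print(p_anova, dunnett.pvalue)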
Cardiovascular parameters in 5/6 NX rats
Since cardiovascular complication is a serious concern in CKD, it is of interest to investigate whether phosphate control by VS-501 or sevelamer affects cardiovascular parameters. Previously we have shown that, at 8 weeks after the renal ablation surgery, the left-ventricle weight (LVW) versus body weight (BW) ratio as a percentage of control was significantly higher in 5/6 NX rats (vs. sham), and treatment with vitamin D analogs reduced the LVW/BW ratio in a dose-dependent manner (Wu-Wong et al. 2011, 2013a). Consistent with our previous reports, the LVW/BW ratio was elevated in 5/6 NX rats, but VS-501 or sevelamer had no significant effects (data not shown). The E/A ratio, a parameter representative of diastolic cardiac function, was significantly reduced in NX rats at 6 weeks after the renal ablation, and further reduced after 4 weeks on high-phosphate diet. Sevelamer and VS-501 at 5% prevented the further reduction in the E/A ratio, but did not restore the parameter to the level observed in the sham group (Fig. 7A). Results from fibrosis staining by Masson Trichrome are consistent with our previous observations (Wu-Wong et al. 2011, 2013a) that, compared to sham, a significant increase in collagen deposition was observed in the heart in the 5/6 NX group. However, unlike our previous observations that vitamin D analogs significantly reduce fibrosis (Wu-Wong et al. 2011, 2013a), neither VS-501 nor sevelamer had any significant effects (data not shown). To investigate vascular calcification, aorta samples were randomly selected from each 5/6 NX group and stained for calcification. The staining was scored separately by two investigators in a blinded manner. Ultimately, the staining assessments by the two investigators were similar. Two representative samples indicating 0% and 70% positive staining of the aortic cross-sectional area are shown in Figure 7B. The averaged percentage of positive staining in each treatment group is shown in Figure 7C; the results demonstrate a significant increase in aortic calcification in the 5% sevelamer group (vs. NX-vehicle).

GI parameters in 5/6 NX rats
As a measure to investigate whether chronic dosing of phosphate binders affects intestine physiology, the intestinal integrity was first evaluated by H-E staining of intestinal samples. There was no significant difference across the different treatment groups (data not shown). To investigate further whether VS-501 and sevelamer affect intestine physiology, duodenal calcium absorption was measured ex vivo in 5/6 NX rats. Unlike our previous observations that vitamin D analogs such as calcitriol and paricalcitol increase intestinal calcium transport (Wu-Wong et al. 2013b), no significant difference was observed across different groups (data not shown). Consistent with the intestinal calcium transport results, and unlike our previous observations that vitamin D analogs such as calcitriol and paricalcitol induce the expression of intestinal Calb3 (the gene encoding calbindin D9K) and TRPV6 (the gene encoding CaT1 and ECaC2) that are involved in intestinal calcium transport (Wu-Wong et al. 2013b), no significant difference was observed across the different groups (data not shown). The expression of the intestinal type II sodium-dependent phosphate cotransporter (NPT2b) gene was also determined. Figure 8 shows that there was no significant difference across different groups.

Discussion
We have demonstrated in this report that VS-501, a novel phosphate binder derived from a natural plant polymer, has the potential to overcome some of the issues associated with current phosphate binders used for treating CKD patients. VS-501 has a significantly higher density (vs. sevelamer), suggesting its potential for reduced pill size and pill number.
Furthermore, VS-501 has a significantly lower swell volume when exposed to simulated gastric fluid, and this characteristic may lead to a lower incidence of GI discomfort than sevelamer. It is of interest to note that VS-501 binds phosphate within a wide physiologically relevant pH range, which suggests that VS-501 binds phosphate at different sites in the GI tract. These unique attributes of VS-501 may improve patient compliance, which is one of the critical factors preventing current phosphate binders from effectively controlling hyperphosphatemia in CKD. From tracking daily water and food consumption in normal rats on normal diet, we observed that water consumption was consistently higher in the sevelamer group, which may explain the higher urine volume in those rats. At the same time, a decrease in food consumption on Day 1 in the sevelamer group may be related to the taste of sevelamer. No significant change in food consumption was observed in VS-501-treated normal rats. Higher fecal weight in the sevelamer group is consistent with results from the in vitro swell volume studies. Our data show that, while VS-501 has no effects on serum or urinary Ca in normal rats on normal diet, sevelamer significantly increases both measures. It has been reported previously that sevelamer may reduce serum PTH (Fournier, 2000; Burke et al., 2003), whereas other investigators showed that switching from Ca-containing Pi binders such as CaCO3 to sevelamer decreased the serum levels of calcium, resulting in the elevation of iPTH levels (Sato et al., 2005; Iwata et al., 2007). Although both sevelamer and VS-501 prevented the additional increase in serum PTH induced by a high-phosphate diet in 5/6 NX rats, sevelamer did not suppress PTH in a manner similar to that observed in normal rats on a normal diet, which coincides with the observation that sevelamer did not affect serum calcium in 5/6 NX rats. Our results suggest that the impact of sevelamer on serum PTH in normal rats is likely linked to its effect on calcium homeostasis.

Figure 7. Cardiovascular parameters in 5/6 NX rats on high-phosphate diet. NX rats were treated with 5% vehicle (unprocessed polymer), 5% sevelamer carbonate, or 5% VS-501 in food for 4 weeks as described above. (A) E/A ratio: Cardiac function was determined at predosing (pre) and Week 4 after dosing (W4) as described in Methods. Statistical comparisons between two groups were performed by unpaired t-test with 95% confidence intervals of difference. #P < 0.05, ##P < 0.01 versus Sham-predosing. *P < 0.05 versus NX-predosing. (B) Calcification staining: Aorta samples were randomly selected from each 5/6 NX group with n = 16 (four aorta samples per rat, four rats per group) and stained by the von Kossa method. The staining was scored separately by two investigators in a blinded manner and the scores were averaged for each sample. Two representative samples indicating 0% and 70% positive staining of the aortic cross-sectional area are shown. (C) Quantification of calcification: The percentages of positive staining in each treatment group were calculated and expressed as mean ± SE. Statistical comparisons between two groups were performed by unpaired t-test with 95% confidence intervals of difference. *P < 0.05 versus Sham.
Normal rats fed a normal diet do not develop vascular calcification, but increased aortic calcification was noted in 5/6 NX uremic rats on a high-phosphate diet, and it was further increased in the high-dose sevelamer group. A report by Block et al. (2012) shows that all phosphate binders currently used in clinical settings, including sevelamer, potentially increase vascular calcification, although the mechanism of action is not known. Our observations offer a possible explanation that sevelamer may disturb calcium homeostasis, leading to increased vascular calcification. FGF-23 is a phosphorus-regulating factor (Wolf 2010). FGF-23 levels increase progressively beginning in early stages of kidney disease in order to maintain normophosphatemia despite decreased nephron mass (Gutierrez et al. 2005), and abnormal FGF-23 levels are associated with increased cardiovascular events and mortality in CKD (Gutierrez et al. 2008). The results from this study that phosphate binders prevent the increase in FGF-23 induced in uremic rats by a high-phosphate diet are in line with the clinical observations made in human CKD patients (Gupta et al. 2004; Martin and Gonzalez 2011; Karczmarewicz et al. 2012). Although no improvement in fibrosis or cardiac function was observed in uremic rats treated with VS-501 or sevelamer, the phosphate binders seem to prevent the continued deterioration of cardiac diastolic function observed in vehicle-treated rats. It may be worth noting that, while both sevelamer and VS-501 prevented the further increase in FGF-23, sevelamer caused more aortic calcification. Whether there is a direct link between FGF-23 and vascular calcification is still being debated (Moldovan et al. 2013; Ozkok et al. 2013). Our results suggest that disturbance of calcium homeostasis may contribute to vascular calcification independent of FGF-23. It is not known whether the long-term use of phosphate binder therapy alters NPT2b gene expression and/or GI physiology in humans, and whether different phosphate binders might exhibit different effects on these parameters. Although the animal data may not be directly applicable to humans, the results from this study suggest that 1 month of dosing with VS-501 or sevelamer in 5/6 NX rats had no significant effects on intestinal histology and NPT2b gene expression. In conclusion, this study suggests that VS-501 is effective in binding phosphate with a low swell volume and without an effect on calcium homeostasis. In addition, due to its high density, the pill size/number can be significantly reduced compared to sevelamer.

Figure 8. Intestinal type II sodium-dependent phosphate cotransporter (NPT2b) gene expression. NX rats were treated with 5% vehicle (unprocessed polymer), 5% sevelamer carbonate, or 5% VS-501 in food for 4 weeks as described above. RNA samples were prepared from small intestines using the standard RNA isolation procedure. The real-time RT-PCR was performed as described in Methods. NPT2b values were normalized with GAPDH. Statistical comparisons between two groups were performed by unpaired t-test with 95% confidence intervals of difference. Mean ± SE was calculated for each group.

Center for Cardiovascular Research Physiology Core facility via the Research Resources Center at the University of Illinois at Chicago. This manuscript is original work not previously published in any substantial part, and is not under consideration of publication elsewhere.
The manuscript has been read and approved for submission by all authors. The signature of the corresponding author is on behalf of all the authors.
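Returning to the gene-expression analysis above: the paper states only that NPT2b threshold cycles were normalized to GAPDH (Figure 8), without giving a formula. A minimal sketch assuming the common 2^(−ΔCt) convention follows, with invented threshold-cycle values.

import numpy as np

def relative_expression(ct_target, ct_reference):
    # Relative expression via the 2^(-dCt) convention; this exact formula
    # is an assumption, as the source only states GAPDH normalization.
    return 2.0 ** -(np.asarray(ct_target) - np.asarray(ct_reference))

# Hypothetical threshold cycles from two animals.
npt2b_ct = [24.1, 24.6]
gapdh_ct = [18.3, 18.5]
print(relative_expression(npt2b_ct, gapdh_ct))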
2016-08-09T08:50:54.084Z
2014-04-22T00:00:00.000
{ "year": 2014, "sha1": "77de0563b1dfa85f81cf4df0569f2a75e9ce37ca", "oa_license": "CCBYNCND", "oa_url": "https://bpspubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/prp2.42", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77de0563b1dfa85f81cf4df0569f2a75e9ce37ca", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
25085692
pes2o/s2orc
v3-fos-license
Liver fibrosis markers of nonalcoholic steatohepatitis. Nonalcoholic fatty liver disease (NAFLD) is one of the major causes of chronic liver injury. NAFLD includes a wide range of clinical conditions from simple steatosis to nonalcoholic steatohepatitis (NASH), advanced fibrosis, and liver cirrhosis. The histological findings of NASH indicate hepatic steatosis and inflammation with characteristic hepatocyte injury (e.g., ballooning degeneration), as is observed in the patients with alcoholic liver disease. NASH is considered to be a potentially health-threatening disease that can progress to cirrhosis. A liver biopsy remains the most reliable diagnostic method to appropriately diagnose NASH, evaluate the severity of liver fibrosis, and determine the prognosis and optimal treatment. However, this invasive technique is associated with several limitations in routine use, and a number of biomarkers have been developed in order to predict the degree of liver fibrosis. In the present article, we review the current status of noninvasive biomarkers available to estimate liver fibrosis in the patients with NASH. We also discuss our recent findings on the use of the glycated albumin-to-glycated hemoglobin ratio, which is a new index that correlates to various chronic liver diseases, including NASH.

INTRODUCTION
Hepatic steatosis indicates the accumulation of fat in excess of 5%-10% of the total liver weight [1]. Alcohol consumption is one of the main causes of liver damage, and the presence of steatosis in alcoholic liver disease is related to the progression of liver fibrosis and cirrhosis [2,3]. Although hepatic steatosis due to nonalcoholic factors was regarded as a nonprogressive benign disease, it has been noted that obese patients and those with diabetes mellitus may develop steatohepatitis that pathologically mimics alcoholic liver injury [1,4]. In 1980, Ludwig et al [5] reported 20 cases of nonalcoholic steatohepatitis in which the histological findings were nearly identical to alcoholic liver damage and which could progress to cirrhosis. In 1986, Schaffner et al [6] proposed the idea of "nonalcoholic fatty liver disease (NAFLD)" which was clinically similar to alcoholic liver disease, irrespective of the absence of an excessive alcohol intake. The definition of NAFLD requires the evidence of hepatic steatosis, either by imaging or by histology, in the absence of the typical causes for secondary hepatic fat accumulation, such as significant alcohol consumption, the use of steatogenic medication or hereditary disorders [7]. NAFLD is histologically classified into either nonalcoholic fatty liver (NAFL) or nonalcoholic steatohepatitis (NASH). The histological findings of NAFL demonstrate hepatic steatosis without the evidence of hepatocellular injury (e.g., ballooning of the hepatocytes), and NAFL usually follows a benign clinical course. Conversely, the histological findings of NASH are barely distinguishable from those of alcoholic liver disease, which are characterized by the presence of hepatic steatosis and inflammation with a distinctive hepatocyte injury (e.g., ballooning degeneration), and NASH is considered to be a potentially health-threatening disease that may progress to cirrhosis in 10%-15% of patients [8].
NAFLD typically develops based on various metabolic disorders such as obesity, diabetes mellitus, and dyslipidemia; however, the prognosis and outcome of the patients with advanced liver fibrosis are predominantly determined by the liver disease-related clinical events, including hepatic failure and hepatocellular carcinoma [9,10]. Therefore, physicians are required to accurately differentiate NASH from NAFL and evaluate the severity of liver fibrosis in order to determine the prognosis and optimal treatment [7].

BIOMARKERS
In patients with chronic liver diseases (CLDs), continuous inflammation and tissue injury cause fibrotic changes in the liver. Liver fibrosis leads to several serious problems, including disturbed metabolic functions, an increased risk of cancer development, and portal hypertension-associated symptoms such as ascites and gastroesophageal varices. Although imaging modalities are capable of detecting the presence of hepatic steatosis, it is not easy to diagnose NASH without a histological assessment. A liver biopsy therefore remains not only the most reliable diagnostic tool for confirming NASH, but also the most promising means of identifying many of the important clinical features of the patient, including the severity of hepatic inflammation and fibrosis [7]. Although a liver biopsy can histologically determine the degree of liver fibrosis, the procedure is a costly and uncomfortable technique, which is associated with a small risk of complications [11][12][13]. In addition, there is the potential for a sampling error, because only 1/50000 of the organ is available for the histological assessment [12]. Furthermore, inter- and intra-investigator variances are present in up to 20% of the clinical samples [13]. Recently, several imaging tools have been developed to estimate liver fibrosis, and the clinical utility of these new modalities has been reported [1,9,[14][15][16][17]. Unlike liver biopsy, these modalities can be repeatedly performed over a period of time with minimal invasion. However, these excellent but expensive items are not readily available in all institutions, while noninvasive biomarkers of fibrosis can be easily measured in a large number of patients. Therefore, there is a need for serum markers which can be routinely assessed via laboratory tests. To date, many noninvasive markers have been proposed to evaluate the degree of liver fibrosis [18][19][20]. During the turnover of fibrosis in the liver, the components of the extracellular matrix (ECM) are considered to be released into circulation, and some ECM-associated molecules, such as hyaluronic acid and type Ⅳ collagen, have been used as biomarkers to estimate the degree of liver fibrosis [21,22]. Additionally, a decreased platelet count was reported to correlate with the progression of liver fibrosis and therefore be a marker of the severity of liver fibrosis [23,24]. In addition to these markers, the AST-to-ALT ratio (AAR) is regarded as a well-known classical biomarker which increases with the advancement of liver fibrosis [24,25]. In 2001, Imbert-Bismut et al [26] investigated hepatitis C virus (HCV)-positive patients and proposed a novel index, the "FibroTest score," which is computed based on the patient's age, gender and levels of serum haptoglobin, α2-macroglobulin, apolipoprotein A1, γ-glutamyl transpeptidase (GGT) and bilirubin.
In 2002, Forns et al [27] developed another scoring system, the "Forns score," which involves an algorithm that includes the platelet count, the GGT, the patient's age and cholesterol level. These scores are novel and important in that they allow for the degree of liver fibrosis to be assessed using only blood tests. However, the FibroTest score is a combination of six parameters, and the Forns score is calculated using a complicated formula. Therefore, neither of these markers is easy to apply in daily practice. In 2003, Wai et al [28] investigated several combinations of clinical variables which are commonly used in daily practice, and proposed the AST-to-platelet ratio index (APRI). This index allows for the estimation of liver fibrosis using a simple formula that is calculated using only two daily clinical variables. Subsequently, the FIB-4 index, which is determined by the age, AST, ALT, and platelet count, was proposed. In addition, many other biomarkers, such as the Fibrosis Probability Index [29,30], FibroMeter [31], Lok index [32], FibroIndex [33], Original European Liver Fibrosis (OELF) test [34] and FIBROSpect [35], were reported with regard to their clinical utility in the assessment of the degree of liver fibrosis.

LIVER FIBROSIS MARKERS IN NAFLD/NASH
Despite the fact that liver biopsy is the most reliable method to diagnose and evaluate the progression of NAFLD/NASH, NAFLD affects 10% to 24% of the general population in various countries [36], and it is unrealistic to perform liver biopsies in all NAFLD patients. Therefore, many biomarkers of liver fibrosis have been applied as liver fibrosis markers for NAFLD/NASH patients.

General fibrosis markers in NAFLD/NASH
Many biomarkers for liver fibrosis, which had been previously evaluated for the patients with viral hepatitis (particularly HCV-infected patients), have also been validated in patients with NAFLD/NASH. In addition to the patients with viral hepatitis, the serum levels of hyaluronic acid and type Ⅳ collagen were reported to increase in association with the progression of liver fibrosis in NAFLD [37,38]. Sakugawa et al [38] [39] investigated a total of 1,048 patients with NAFLD and reported a significant association between the decreased platelet count and the severity of liver fibrosis. Although a platelet count was reported to show an excellent AUROC of 0.92 for the prediction of cirrhosis (Stage 4), it showed only a moderate AUROC of 0.77 for the prediction of advanced fibrosis (Stage 3-4). Irrespective of the clinical relevance of these markers, it is difficult to evaluate the degree of liver fibrosis adequately according to these variables alone. Several liver fibrosis markers, which have been validated with multiple-variable algorithms, such as the APRI [28], FIB-4 [29], FibroTest [26], and the Enhanced Liver Fibrosis (ELF) test [40], have also been validated in the NAFLD population, where they may identify patients with liver fibrosis. These indices have been reported to demonstrate AUROCs between 0.67 and 0.90 for the differentiation of the severity of fibrosis [41][42][43][44][45]. The APRI, which is a simple marker calculated by two variables (AST and platelet count), was reported to have an AUROC of 0.85 for advanced fibrosis (Stage 3-4) in 111 NAFLD patients [41]. Since the APRI is easily measurable without any special equipment, its diagnostic performance was evaluated and compared with that of the other fibrosis markers.
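Because the APRI and FIB-4 formulas recur throughout the discussion below, a minimal sketch of both, using their conventional published definitions and units, follows; the input values are illustrative only.

import math

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    # AST-to-platelet ratio index: (AST / upper limit of normal) divided by
    # the platelet count (10^9/L), multiplied by 100.
    return (ast_u_l / ast_uln_u_l) / platelets_10e9_l * 100.0

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # FIB-4 index: (age x AST) / (platelets x sqrt(ALT)).
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Illustrative values only.
print(apri(80, 40, 150))      # ~1.33
print(fib4(55, 80, 60, 150))  # ~3.79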
The AUROCs of APRI for the prediction of advanced/severe fibrosis (Stage 3-4) were reported to range from 0.67 to 0.87 [42]. The FIB-4 index calculated with four variables (age, AST, ALT, and platelet count) was reported to have an AUROC of 0.80 for advanced fibrosis (Stage 3-4) in 541 NAFLD patients, although the score was difficult to use for the diagnosis of NASH [43]. The FibroTest is an algorithm derived from a regression analysis of haptoglobin, α2-macroglobulin, apolipoprotein A1, bilirubin, GGT, age and gender. Its predictive values have been reported to have an AUROC of 0.81 for advanced fibrosis (Stage 3-4) and an AUROC of 0.88 for cirrhosis (Stage 4) in NAFLD [44]. The ELF test [40] has been proposed to be a modified panel of the OELF test [34]. The OELF test includes four variables [age, HA, N-terminal peptide of procollagen Ⅲ (P3NP), and tissue inhibitor of matrix metalloproteinase 1 (TIMP 1)], whereas the ELF test is calculated by the three variables (excluding age). When the ELF was validated for NAFLD [40], its predictive values were determined as an AUROC of 0.82 for moderate fibrosis (Stage 2-4) and an AUROC of 0.90 for advanced fibrosis (Stage 3-4). Additionally, the ELF test has been suggested to be associated with the clinical outcome [45]. However, most of these markers (other than the ELF panel) were primarily validated for the patients with HCV-related CLD, and their diagnostic performances were not adequate when these markers were applied to the patients with NAFLD/NASH. Table 1 shows the validations of general biomarkers for the histological degree of fibrosis in NAFLD/NASH patients.

Metabolism-based fibrosis markers developed for NAFLD/NASH
Most of the patients with NASH have several metabolic dysfunctions, including obesity, diabetes mellitus, and dyslipidemia, and their clinical features may differ from other chronic liver diseases, such as hepatitis virus-associated CLDs [7]. Therefore, simple markers derived from a logistic regression analysis of large cohorts with NAFLD/NASH have also been developed and validated. In 1999, Angulo et al [46] reported three factors (older age, obesity, and the presence of diabetes mellitus) to be independent predictors of severe hepatic fibrosis in the patients with NASH. In 2001, the HAIR scoring system, which was generated based on three clinical items (the presence of systemic Hypertension, elevated ALT and Insulin Resistance), was reported to have a sensitivity of 80% and specificity of 89% for NASH in the patients undergoing bariatric surgery [47]. Ratziu et al [48] reported the BAAT score (consisting of the BMI, ALT, Age and Triglyceride levels) had an AUROC of 0.84 for the prediction of septal liver fibrosis (Stage 2-4). In 2007, Angulo et al [49] proposed the NAFLD fibrosis score (determined by the presence of diabetes, AST, ALT, the BMI, platelet count and albumin) and reported it to be a specific marker for NAFLD with an AUROC of 0.84 for advanced fibrosis (Stage 3-4). In a recent meta-analysis of 13 studies consisting of 3064 patients, the AUROC for the NAFLD fibrosis score was found to be 0.85 for the prediction of advanced fibrosis [50]. The BARD score, which was determined by three items (BMI > 28 kg/m², AST/ALT Ratio > 0.8, and Diabetes), was evaluated in a cohort of 827 NAFLD patients and showed an AUROC of 0.81 for predicting advanced liver fibrosis (Stage 3-4). Notably, the BARD score was reported to be valuable for excluding the patients without advanced fibrosis due to its high negative predictive value (≥ 95%) [51]. In addition to the aforementioned biomarkers, several diagnostic markers have also been proposed for the assessment of liver fibrosis in NAFLD/NASH. The FibroMeter is an index which is determined by the age, weight, fasting glucose, AST, ALT, ferritin and platelet count, and has also been validated in a NAFLD population [52]. This marker was reported to have a high diagnostic performance with an AUROC of 0.94 for significant fibrosis (Stage 2-4), 0.94 for severe fibrosis (Stage 3-4) and 0.90 for cirrhosis (Stage 4). The NAFIC score is a simple scoring system determined by three variables, including the serum ferritin level [≥ 200 ng/mL (female) or ≥ 300 ng/mL (male)], fasting insulin (≥ 10 mU/mL), and type Ⅳ collagen 7S (≥ 5.0 ng/mL). The index was reported to show an AUROC of 0.834 for significant fibrosis and 0.869 for severe fibrosis (Stage 3-4) in Japanese patients [53]. The NAFLD Diagnostic Panel is an index, which is obtained by the following items: DM, gender, BMI, triglycerides, and CK18 fragments (M30: apoptosis, M65-M30: necrosis). The panel was reported to have an AUROC of 0.80 for predicting any degree of fibrosis (Stage 1-4) and an AUROC of 0.81 for predicting advanced fibrosis (Stage 3-4) [54]. Irrespective of the excellent diagnostic performance of the methods shown in the above-described studies, the patients were heterogeneous in characteristics and were sometimes highly selected; the clinical significance of the markers should therefore be confirmed and validated in different cohorts. The biomarkers developed for NAFLD/NASH for predicting the degree of fibrosis are shown in Table 2.

Liver fibrosis markers based on glycosylated proteins
Glycosylation is one of the major posttranslational enzymatic modifications of proteins. Because many glycosylated proteins in the serum are generated in the liver, a decreased liver function is expected to relate to the changes in protein glycosylation, and recent studies suggest that the serum N-glycome may be a valuable biomarker of CLDs [55][56][57][58]. According to the differences in the N-glycome patterns, two biomarkers, the GlycoCirrhoTest [55] and the GlycoFibroTest [56], have been reported to predict the presence of cirrhosis and fibrosis, respectively. In addition, new glycomics-based approaches were reported to succeed in the noninvasive evaluation of liver fibrosis [57][58][59][60], and a recently established glycosylated protein-associated marker, M2BP (Wisteria floribunda agglutinin-positive Mac-2 binding protein), was reported to be a useful marker of liver fibrosis in various CLDs, including NASH [60].

Liver fibrosis markers based on glycated proteins: The glycated albumin-to-glycated hemoglobin ratio as a biomarker of liver fibrosis
Although the qualitative changes of glycosylated proteins are excellent tools for estimating the degree of liver fibrosis, the methods are not readily applied to daily practice. The term "glycation" is now generally used for a non-enzymatic spontaneous modification of proteins by saccharides [61,62], and glycated proteins, particularly glycated hemoglobin (HbA1c) and glycated albumin (GA), are widely used as indices of the glycemic control in the patients with diabetes mellitus [63,64]. We herein focused on the quantitative changes of these commonly measured glycated proteins during the progression of liver fibrosis.
The lifespan of erythrocytes is approximately 120 d, and the HbA1c level typically reflects the degree of glycemia for the previous months [65]. The GA level correlates with the plasma glucose level over the previous few weeks, because the turnover of albumin is approximately three weeks [66,67]. Although the normal GA to HbA1c ratio (GA/HbA1c ratio) is approximately 3, the value changes based on the patient's condition [68]. Because of hypersplenism, the lifespan of erythrocytes in the CLD patients is shorter than that noted in healthy individuals; thus, the HbA1c levels are lower in the patients with CLD relative to the plasma glucose level. In contrast, the turnover period of serum albumin in the CLD patients is longer than that observed in healthy persons in order to compensate for the decreased production of albumin in the liver. Therefore, the GA levels in the CLD patients are higher, relative to the degree of glycemia [68,69]. Since the HbA1c levels are lower and the GA levels are higher in the CLD patients, the GA/HbA1c ratio is considered to be higher in the patients with CLD in comparison to healthy subjects. We previously investigated the GA/HbA1c ratio in CLD patients and reported that the GA/HbA1c ratio indicated an inverse correlation with the indicators of the hepatic function (e.g., the hepaplastin test, cholinesterase and albumin levels), regardless of the mean plasma glucose level, thus suggesting that the GA/HbA1c ratio increases as the liver fibrosis progresses [70]. However, this report did not discuss the association of the GA/HbA1c ratio with the histological stage of fibrosis in the CLD patients. We further investigated the relationships between the GA/HbA1c ratio and the histological findings in various types of CLD, including HCV-related CLD, hepatitis B virus (HBV)-related CLD and NASH [71][72][73]. We studied the GA/HbA1c ratios in a total of 142 patients with HCV infection and discovered that the ratio increased with the progression of the liver fibrotic stage [71]. The GA/HbA1c ratio was additionally found to be associated with the histological severity of liver fibrosis in the patients with HBV infection and to be positively related to two well-established markers of liver fibrosis, the FIB-4 and APRI indices [72]. We further investigated the NASH patients and found that the GA/HbA1c ratio increased with an increase in the histological severity of liver fibrosis [73]. These findings suggest that the GA/HbA1c ratio is a novel biomarker of liver fibrosis in the patients with NASH as well as those infected with hepatitis viruses. The results of the GA/HbA1c ratios in the patients with various CLDs are summarized in Table 3. Although the AUROCs were not determined in these studies, comparisons of the diagnostic performance of the GA/HbA1c ratio and other biomarkers would provide important and interesting information. Although a number of biomarkers have been developed, none of them are ideal (i.e., a simple, inexpensive, reproducible, easily measurable test without any special equipment and capable of high diagnostic performance) [74].
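The ratio described above is simple enough to compute directly; a minimal sketch follows, with illustrative input values only and the ~3 healthy-subject reference taken from the text.

def ga_hba1c_ratio(ga_percent, hba1c_percent):
    # Glycated albumin (%) divided by HbA1c (%); roughly 3 in subjects
    # without CLD, rising as liver fibrosis progresses (see text).
    return ga_percent / hba1c_percent

# Illustrative values only: GA 18%, HbA1c 5.2% gives a ratio of ~3.5,
# above the ~3 typical of healthy subjects.
print(round(ga_hba1c_ratio(18.0, 5.2), 2))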
It is notable that the rate of change of the GA/HbA1c ratio among the fibrosis stages is relatively small, and this ratio alone cannot be a decisive biomarker for the evaluation of liver fibrosis (similar to the other currently available biomarkers). In addition, some diseases and conditions are associated with high or low GA/HbA1c ratios [68]. For instance, because the GA/HbA1c ratio is affected by changes in glycemic control, it cannot be used in patients with unstable glycemic control. The ratio may also be inaccurate as a sole liver fibrosis marker in patients with conditions that affect the level of HbA1c, such as anemia caused by non-hepatic diseases and variant hemoglobin. The ratio also differs in patients with abnormal albumin metabolism, such as nephrotic syndrome and thyroid disease, and in patients undergoing glucocorticoid therapy. Therefore, the GA/HbA1c ratio may not sufficiently reflect the degree of liver fibrosis in CLD patients with certain clinical conditions. However, the GA/HbA1c ratio is unique and interesting in that the value can be calculated with only the levels of two common glycated proteins and correlates with the degree of liver fibrosis in various CLDs. A new biomarker based on a combination of factors, including the GA/HbA1c ratio, would provide a better noninvasive assessment of liver fibrosis. The current findings should therefore shed some new light on the evaluation of liver fibrosis. CONCLUSION NASH is one of the major causes of chronic liver injury and non-viral cirrhosis. Although a liver biopsy remains the gold standard for the diagnosis of NASH and the evaluation of the severity of liver fibrosis, this technique has several disadvantages in relation to its routine and repeated use. Many serum biomarkers have been proposed in order to estimate the degree of liver fibrosis in NASH patients noninvasively. In addition, new methods based on glycated proteins have been recently developed. These new approaches may provide better insight into the clinical management of NAFLD/NASH.
Table 3. Reported results for the GA/HbA1c ratio in patients with various CLDs.
Bando et al [70], 2009 (CLD patients): The GA/HbA1c ratio was associated with hepatic functions (decreasing hepaplastin test and cholinesterase levels) independent of the mean plasma glucose levels.
Aizawa et al [72], 2012 (HCV-positive CLD, n = 142): The GA/HbA1c ratio increased in association with the histological severity of liver fibrosis. The diagnostic performance of APRI improved when combined with the GA/HbA1c ratio.
Enomoto et al [73], 2014 (HBV-positive CLD, n = 176): The GA/HbA1c ratio increased in line with the severity of fibrosis. The GA/HbA1c ratios were inversely correlated with four variables of liver function (the prothrombin time percentage, platelet count, albumin value and cholinesterase value).
Bando et al [74], 2012 (NASH, n = 36): The GA/HbA1c ratio was negatively correlated with ALT and platelet count. The GA/HbA1c ratio was positively correlated with the degree of liver fibrosis.
CLDs: Chronic liver diseases; HCV: Hepatitis C virus; HBV: Hepatitis B virus; NASH: Nonalcoholic steatohepatitis.
Gauss Gradient and SURF Features for Landmine Detection from GPR Images Recently, ground-penetrating radar (GPR) has become a well-known tool for investigating subsurface objects. However, its output has a low resolution, and it needs further processing for better interpretation. This paper presents two algorithms for landmine detection from GPR images. The first algorithm depends on a multi-scale technique. A Gaussian kernel with a particular scale is convolved with the image, and after that, two gradients are estimated: horizontal and vertical gradients. Then, the histogram and cumulative histogram are estimated for the overall gradient image. The bin values on the cumulative histogram are used for discrimination between images with and without landmines. Moreover, a neural classifier is used to classify images with cumulative histograms as feature vectors. The second algorithm is based on scale-space analysis, with the number of speeded-up robust feature (SURF) points as the key parameter for classification. In addition, this paper presents a framework for size reduction of GPR images based on decimation for efficient storage. Further classification steps can be performed on images after interpolation. The sensitivity of the classification accuracy to the interpolation process is studied in detail. Introduction Enormous numbers of landmines were implanted during wars in different areas of the world. Detection of landmines can be performed with different techniques, such as electromagnetic, acoustic, optical, mechanical, biological, and nuclear techniques [1-4]. The GPR is one of the most efficient landmine detection techniques. Khan et al. [5] developed a landmine detection technique from GPR scans based on the assumption that GPR images can be interpreted in a 1-D format, since GPR images resemble speech signals in nature. The authors developed a landmine detection technique based on extracting cepstral features from the GPR images and using neural classification. Although this technique is simple, it destroys the 2-D nature of images and the 2-D geometry of objects. Hamdi and Frigui [6] presented a GPR landmine detection technique based on an ensemble hidden Markov model (HMM). They proved that the different textures of landmine activities are reflected in their model parameters. Furthermore, they compared their technique with a baseline HMM and demonstrated the superiority of their technique over this model. This technique achieved up to 95% detection accuracy. Klesk et al. [7] presented a landmine detection approach from 3-D scans taken at different levels underground using higher-order statistics. This approach firstly generates integral images, and then higher-order moments are extracted from these integral images in constant time. This approach achieved sensitivity values of up to 98% and false alarm values down to 2% with different types of landmines. There are several problems facing the use of GPR as a real-time detector for underground utilities. The background noise is one of these problems. It appears as large-amplitude, low-frequency, horizontal stripes in GPR images. Ali and Hussein [8] proposed a proficient background removal algorithm that can be incorporated into GPR logging devices. Several resources in the world are not utilized due to landmines. Deep analysis and several test cases can be studied to guarantee a reliable landmine sensing system. 2D finite element models have been proposed for different objects in the sand at different depths in [9]. This inverse approach is based on neural networks.
It is considered a reliable and proficient tool for landmine detection, able to accurately estimate the basic parameters of a landmine (type, depth) in a sandy desert. Data processing includes extracting important information from any dataset. It is basically different from data analysis, because it includes an algorithm that changes the original data, while the analysis encompasses temporary operations preliminary to the processing steps or useful to achieve improved data component visualization and discrimination. Several algorithms that can be applied to GPR data can be used in other fields such as image processing [10]. In addition, the authors of [11] proposed an innovative fully non-linear multi-frequency multi-scaling approach for the processing of wide-band GPR measurements. The rest of this paper is organized as follows. In Section 2, the related work is presented. The two proposed algorithms are introduced in Section 3. In Section 4, the efficient storage of landmine images is discussed. The simulation results are presented in Section 5. Finally, Section 6 gives the conclusions. Related Work 2.1 GPR Technique The state-of-the-art GPR system is used for identifying subsurface irregularities. This tool has proven to be of high proficiency and practicality. The GPR is one of the most widely-used electromagnetic techniques for landmine detection due to its advantages compared to other tools, as it is simple and can be easily used. It consumes low power. It can find mines with any type of casing, and it can detect plastic objects. Hence, the GPR can be used in a wide range of applications, such as engineering, archaeology, and geological applications [12]. The first stage of any GPR detection technique is the sensor. GPR can be developed in software and hardware. This research concentrates on GPR software techniques. Simply put, the core of GPR operation is that it consists of two antennas, as illustrated in Fig. 1. The transmitter antenna sends a signal, in the form of an electromagnetic wave, into the subsurface that needs to be inspected. When this wave or signal reaches the ground, it will be reflected. Refraction and diffraction from materials occur when there is a difference in the dielectric properties. After that, the receiver antenna receives the reflected signal, and then this signal can be processed to determine what it identified, depending on the results of the processing method [7]. The GPR usually uses a frequency band ranging from 100 MHz to 100 GHz. Processing of the results can lead to more information, such as the depth of buried objects. The depth can be found by computing the time consumed for the transmitted signal to be received again after reflection. The subsurface features can also be obtained for the buried objects, which may be landmines, rocks, metals, underground water, or gold. By translating the reflected electromagnetic wave and studying the dielectric properties, we can easily understand what it refers to. This can be done by determining properties such as the permittivity, magnetic susceptibility, and conductivity of the media [13]. The depth that the GPR wave can penetrate depends on the humidity of the earth and the wavelength of the wave. There are some factors that limit that depth, like the electrical conductivity of the ground, the center frequency of the transmitter antenna, and the radiated power. When the electrical conductivity increases, the attenuation of the electromagnetic wave increases.
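As a concrete illustration of the travel-time relation just described, the following sketch estimates target depth from the two-way travel time; the propagation-velocity formula is the standard one for a low-loss dielectric, and the permittivity value is an assumed example (dry sand is typically around 3 to 5), not a figure from this paper:

```python
C0 = 3e8  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_s: float, rel_permittivity: float) -> float:
    """Estimate target depth from the two-way travel time of the echo."""
    v = C0 / rel_permittivity ** 0.5   # propagation velocity in the soil
    return v * two_way_time_s / 2.0    # one-way distance to the reflector

# Example: a 10 ns echo in soil with relative permittivity 4 -> 0.75 m.
print(gpr_depth(10e-9, 4.0))
```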
The GPR cannot investigate greater depths with higher frequencies, but the resolution may be improved. Therefore, the choice of frequency depends on the type of application. The application should strike a balance between resolution and depth. In addition, the GPR is used in several applications like studying bedrocks, soil types, and roads, mapping of archaeological features, and landmine detection [14]. Microwave Radiometry (MWR) This technique is based on transmitting short radio and microwave (10² to 3 × 10³ MHz) radiation pulses from an antenna into the ground and measuring the time for reflections to return to the same antenna. Reflections occur at the boundaries between materials of different dielectric constants that are normal to the incident radiation [15]. Transmission of high frequencies provides a high resolution, but with high attenuation in the soil. Hence, high frequencies are suitable for the detection of small, shallow objects. Conversely, low frequencies achieve a lower resolution, but are less attenuated in the soil. Hence, low frequencies are more suitable for detecting deep objects. The MWR technique can give good results in dry soils, but it does not achieve good results in wet soils [16]. Infrared (IR) Detection The IR radiation is a portion of the electromagnetic radiation spectrum between visible light and microwaves, with wavelengths between 0.75 μm and 1 mm. The concept of using IR thermography for mine detection is based on the fact that mines might have different thermal properties from the surrounding materials. The IR sensor is safe and lightweight, and it can scan large areas. However, IR sensors have difficulty in detecting deep objects [17]. Proposed Techniques for Landmine Detection This section presents two proposed landmine detection techniques. These techniques are based on the Gauss gradient and the SURF descriptor. Landmine Detection Based on Cumulative Histogram of Gradients As shown in Fig. 2, a histogram-based strategy is adopted for landmine detection. The basic idea of this strategy is to apply a Gaussian kernel on the image with a certain scale. After that, a Laplacian gradient is used to generate an edge map for the image at the selected scale. The histogram is obtained for this edge map. The histogram of the edges is not enough to decide whether a landmine exists or not. So, a cumulative histogram is generated for each case. After carefully considering the cumulative histograms for the cases with and without landmines, we can select specific gray levels for discrimination, based on the cumulative histogram values at these levels in the presence and absence of landmines. Consider a gray-scale image as a function f(x, y). The following Gaussian kernel is applied on the image [18]:
G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) (1)
Instead of estimating the Laplacian of the image after applying the Gaussian kernel, an alternative approach is to directly apply the Gaussian kernel derivatives in both the x and y directions on the image. The derivative of the Gauss kernel in Eq. (1) with respect to x is given by [18]:
G_dx(x, y) = ∂G/∂x = −(x/σ²) G(x, y) (2)
Also, the derivative with respect to y is given by [18]:
G_dy(x, y) = ∂G/∂y = −(y/σ²) G(x, y) (3)
Due to the commutative property between the derivative operator and the Gaussian smoothing operator, such scale-space derivatives can equivalently be computed by convolving the original image f(x, y) with the Gaussian derivative operators [G_dx, G_dy]. This means that the edge map components in both the x and y directions are given by [19]:
D_fx(x, y) = f(x, y) ⊗ G_dx(x, y) (4)
D_fy(x, y) = f(x, y) ⊗ G_dy(x, y) (5)
From Eqs.
(4) and (5), we deduce that both D_fx and D_fy reveal the image details due to the derivative effect, where ⊗ refers to the convolution operation [20]. The proposed approach depends on estimating the details in a common image format as follows [19]:
D_fxy(x, y) = sqrt(D_fx²(x, y) + D_fy²(x, y)) (6)
Finally, from Eq. (6), we can work on the histogram of D_fxy. The Gauss gradient algorithm has been applied to more than 70 images with and without landmines. Both the histogram and the cumulative histogram have been estimated for all images. There is always a difference between the cumulative histogram curves of the two types of images. So, a threshold can be set in the middle between the two curves at a particular bin value. Thus, it is possible to discriminate between two images with and without landmines by setting a threshold at a certain bin value in the cumulative histogram. Therefore, we can adopt this strategy for discrimination by taking the bin level of 50, for example, and setting a threshold equal to 0.6. Moreover, we can also use neural classification in this task. SURF-Based Landmine Detection The proposed algorithm for landmine detection based on SURF features includes extracting the SURF points of the images containing landmines and the images without landmines. Then, based on the volume of the feature vector, a decision is taken whether a landmine exists or not. The SURF detector-descriptor approach can be considered a professional alternative to the SIFT descriptor. The SURF algorithm is designed to find blob features. The SURF descriptor is faster and more robust than the SIFT descriptor [20,21]. The computation for the detection step of feature points is based on a two-dimensional box filter. It uses a scale-invariant blob detector based on the determinant of the Hessian matrix for both scale selection and localization. It takes approximations of the second-order Gaussian derivatives using a set of box filters. The Hessian determinant approximation represents the blob response in the image. The 9 × 9 box filters are used to approximate a Gaussian kernel with σ = 1.2 as the minimum scale for computing the blob response representations. These approximations are referred to as D_xx, D_yy, and D_xy, while the Hessian determinant approximation can be written as [22]:
det(H_approx) = D_xx D_yy − (w D_xy)² (7)
where w is a comparative weight for the filter response, and it is used to balance the Hessian determinant expression. These responses are stored in a blob response map, and local maxima are detected and refined using quadratic interpolation and the DoG. Finally, we perform non-maximum suppression in a 3 × 3 × 3 neighborhood to obtain stable interest points and their scales. The SURF descriptor starts by constructing a square region centered around the detected point and oriented along its main orientation. The size of this window is 20s, where s is the scale at which the point is detected. Then, the region of interest is further divided into smaller 4 × 4 sub-regions, and for each sub-region [22], the Haar wavelet responses in the vertical and horizontal directions (denoted as d_x and d_y, respectively) are computed at 5 × 5 sampled points, as shown in Fig. 4. Computing this for all 4 × 4 sub-regions, the resulting feature descriptor of length 4 × 4 × 4 = 64 is obtained. Finally, the feature descriptor is normalized to a unit vector in order to reduce illumination effects [23]. Then, after detecting the feature points of the GPR images, a threshold can be set based on the number of feature points.
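A minimal sketch of this point-counting idea, using OpenCV's SURF implementation, is given below; note that SURF lives in the opencv-contrib package and requires a build with the non-free algorithms enabled, and that both the Hessian threshold and the count threshold of 135 (echoing the average-based threshold reported later from Tab. 4) are illustrative assumptions rather than fixed parts of the method:

```python
import cv2

def count_surf_points(image_path: str, hessian_threshold: float = 400.0) -> int:
    """Detect SURF keypoints in a grayscale GPR image and count them."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints = surf.detect(img, None)
    return len(keypoints)

def has_landmine(image_path: str, count_threshold: int = 135) -> bool:
    """Classify an image by thresholding its number of SURF points."""
    return count_surf_points(image_path) > count_threshold
```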
This threshold can be used to test whether the image contains a landmine or not. Efficient Storage of Landmine Images This section presents an approach for saving the storage area of landmine images, based on decimation, and on interpolation at the detection stage. This approach aims to reduce the size of the landmine database, while maintaining the ability to detect landmines once the images are interpolated. The acquisition and storage of landmine images consume a large storage area, especially if High Resolution (HR) images are stored. This dilemma can be solved if we can find an efficient tool to reduce the storage size of landmine images, while keeping the discriminative features of these images. If we think of image compression, some information loss takes place as a result of compression. An alternative to this solution is to adopt a decimation strategy to reduce the storage size of the images. Decimation by two of an image in both directions leads to one-fourth of the original image size. Now, the question is how to recover the original image size if we decide to perform a landmine detection process after extracting the image from the database. There is a need to think of sophisticated solutions for this problem based on interpolation theory. To understand this idea, there is a need to explore the decimation process first. Decimation Model Assume that there is an HR GPR image represented as f, which is a lexicographically-ordered vector. To get an LR version g of this image through the decimation process, f is multiplied by the decimation operator D as follows [24-31]:
g = Df
where D is the decimation operator that converts the HR image to an LR image. It is represented as [24-31]:
D = D₁ ⊗ D₁
where the symbol ⊗ represents the Kronecker product, and D₁ is a filtering and down-sampling operator. In the case at hand [24-31], D₁ averages each pair of neighboring samples and keeps one sample out of every two. The filtering and down-sampling process is described in Fig. 5: the HR image is filtered by a horizontal low-pass filter, and the output is filtered by a vertical low-pass filter to give the desired LR image. Image Reconstruction Using Interpolation Techniques Image reconstruction from the stored LR images, before landmine detection, is a necessary task. Reconstruction can be performed as the inverse of the decimation process using interpolation techniques [26]. This can be performed with maximum entropy or regularized interpolation techniques, as shown in Fig. 6. The objective is to reconstruct the images with as many discriminative features as those of the original images prior to decimation. Maximum Entropy Interpolation This approach is based on obtaining an image with maximum entropy. The entropy [27-31] of the image is defined as follows:
E = −Σᵢ pᵢ log₂(pᵢ)
We can consider normalized pixel values as probabilities in the image and write the entropy in vector form as follows:
E(f) = −fᵀ log₂(f)
We have to maximize the entropy subject to the constraint that [27-31] g = Df. Thus, we need to minimize this cost function:
C(f) = −E(f) + λ‖g − Df‖²
where λ represents a Lagrangian multiplier. Differentiating the two terms of this cost function with respect to f, equating the result to zero, expanding the resulting exponential with a Taylor series, and neglecting all but the first two terms (g − Df being a small quantity), then solving for f̂, the result will be:
f̂ = (DᵀD + ηI)⁻¹ Dᵀg
with η = −1/(2λ ln(2)).
If direct inversion is applied to (DᵀD + ηI), then, due to the sparse, nearly diagonal nature of this matrix, the inversion process can be simply performed. The matrix DᵀD represents interpolation after decimation. Regularized Image Interpolation There is another solution to the ill-posed interpolation problem, through regularization theory, by setting some constraints on the smoothness of the image to be obtained [32-34]. The cost function to be minimized in this case is given as [31,32]:
Ψ(f) = ‖g − Df‖² + λ‖Qf‖²
where Q is the 2-D regularization operator, and λ is the regularization parameter. Setting the derivative of the cost function with respect to f to zero leads to the following solution [29-31]:
f̂ = (DᵀD + λQᵀQ)⁻¹ Dᵀg
An iterative solution for the estimation is possible in this case as follows:
f̂ₖ₊₁ = f̂ₖ + β (Dᵀg − (DᵀD + λQᵀQ) f̂ₖ)
where β is a convergence parameter. Gauss Gradient Results Three simulation experiments have been conducted to detect landmines from GPR images. Two images have been used in each experiment: one with a landmine and the other without a landmine. The strategy in all experiments is to apply the Gauss gradient first on both images; after that, a histogram is estimated for the obtained gradient of each image. Finally, the cumulative histogram is estimated for each case. The results of these experiments are shown in Figs. 7 to 28. A careful look at the cumulative histogram curves reveals that the images with landmines always have more edges, and the cumulative histograms of their edges rise up to the target value of one with a higher rate of change. Thus, it is possible to select a gray level of 50, for example, and threshold the cumulative histogram of the Gauss gradient at this level to use it for discrimination between images with landmines and images without landmines. Tab. 1 illustrates these discrimination levels at 50 in each experiment. In scale-space theory, the standard deviation of the Gaussian kernel used is named the scale. The obtained results reveal the dependency of the threshold on the scale of the Gaussian filter. Figs. 7a and 8 show that there is a difference in the obtained gradient images with the scale. Tab. 1 reveals that the threshold estimated for landmine detection mainly depends on the scale, or standard deviation, of the Gaussian kernel used. The value of H(50) is selected, as there is a clear difference between the H(50) values with and without landmines. Therefore, we can set a threshold on H(50) to discriminate between images with and without landmines. To follow a general strategy for landmine detection, we can take the average cumulative histogram for the images with landmines and the average cumulative histogram for the images without landmines, as shown in Figs. 9-28. This can help in using all cumulative histogram bins for discrimination, and hence we can estimate the detection accuracy with any bin. Tabs. 2 and 3 summarize the detection results with different bins of the cumulative histograms. These results show that the highest accuracy, obtained by working on H(150), is acceptable, and there is some sort of clustering of images with landmines and images without landmines at this bin value. Results Based on Neural Classification Instead of using a thresholding process for the detection of landmines, we can train a neural classifier with specific bins of the cumulative histograms of gradients for images with and without landmines. The number of input nodes can be varied based on the selected margin of the cumulative histogram. For example, if we select the margin from H(50) to H(150), we can have 101 inputs in the input layer.
Thus, a single hidden layer is enough for the classification. The classification accuracy of the neural classifier reaches 89%, which is better than the values obtained with a single bin from the cumulative histogram. Fig. 29 shows the convergence performance of the neural network. Simulation Results for the SURF Algorithm The basic idea of the discrimination of landmine images is to detect SURF feature points and count them. A similar process is performed for images without landmines. Figs. 30 and 31 illustrate this process on an image with a landmine. Tab. 4 summarizes the average number of SURF feature points for images with and without landmines. It is clear that a threshold of 135 feature points can be used to discriminate between the images with and without landmines, based on the averages obtained in Tab. 4. From these results, we can conclude that the SURF-based classification is more powerful than the technique based on the cumulative histogram of gradients. Thus, we deal with the problem of landmine detection from a different perspective, which is the number of distinguishing points. Sensitivity of Landmine Detection to Decimation and Interpolation As mentioned in the previous sections, the objective of the decimation process is to reduce the storage size of landmine images. Prior to landmine detection, the interpolation process is performed to restore the original image size. It is required that the interpolated images have the same features as the original landmine images. Figs. 32 and 33 confirm this fact. It is clear from Tabs. 5-8 that the detection results are not sensitive to the decimation and interpolation processes. Thus, it is possible to store GPR images after applying a decimation process to them. Furthermore, the PSNR values after the interpolation process are high enough to reconstruct the images with high quality. The simulation results of these tables reveal that, although we have performed decimation and interpolation, it is still possible to discriminate between images based on the number of points extracted with the SURF algorithm. Conclusions This paper has dealt with a vital image processing problem, which is detecting landmines from GPR images. This task is very important for demining efforts without casualties. The paper presented two basic trends for landmine detection from GPR images. The first trend depends on the estimation of the cumulative histograms of gradients of images. Simulation results have revealed that it is possible to discriminate between images with and without landmines if a certain gray level is adopted and a threshold is set for discrimination on the cumulative histogram curves. The simulation results have also proved that the selected threshold is scale-dependent. A proper structure of a neural classifier, comprising one input layer, one hidden layer, and one output node, can be used for the classification task. The bin values of the cumulative histograms are used as inputs to the neural network for training and testing. Simulations considering a neural classifier showed a promising landmine detection performance with a 92% success rate. This result reflects the possibility of detecting landmines with histogram bins. Some missed landmines are attributed to the close values of the bins at the start and end of the cumulative histograms of images with and without landmines. The second approach adopted for landmine detection in this paper is based on scale-space theory and the extraction of SURF features.
The idea of this approach is based on estimating the SURF points and adopting the number of these points as a basis for discrimination between images with and without landmines. Simulation results have revealed that the images with landmines have significantly larger numbers of SURF features compared to the images without landmines. Selecting an appropriate threshold for the number of SURF points can make the detection process easy, with a success rate of 100%. No false alarms have been recorded with this approach. Another issue that has been studied in this paper is the storage of landmine images. To reduce the storage size of landmine images, decimation by two can be adopted for this purpose. This leads to a 75% reduction in the storage space for landmine images. An interpolation scheme can be used to reconstruct the landmine images with their original sizes prior to any landmine detection process. Simulation results have revealed that landmine detection is not affected by the decimation and interpolation processes. In future work, different algorithms based on the utilization of artificial intelligence tools are recommended for accurate landmine detection. Also, we intend to utilize more advanced signal processing techniques for the efficient compression of landmine images.
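To make the storage scheme discussed above concrete, here is a minimal NumPy sketch of decimation by two followed by regularized reconstruction; the block-average decimation filter, the Laplacian used as the regularization operator Q, the periodic boundary handling and the parameter values are all illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

def decimate2(img: np.ndarray) -> np.ndarray:
    """D: average 2x2 blocks (separable low-pass filtering + down-sampling)."""
    h, w = img.shape
    return img[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(lr: np.ndarray) -> np.ndarray:
    """D^T: spread each LR pixel back over its 2x2 HR block with weight 1/4."""
    return np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1) / 4.0

def laplacian(img: np.ndarray) -> np.ndarray:
    """Discrete Laplacian (periodic boundaries) used as the operator Q."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def regularized_interpolate(g: np.ndarray, lam: float = 0.01,
                            beta: float = 1.0, iters: int = 200) -> np.ndarray:
    """Gradient descent on ||g - Df||^2 + lam * ||Qf||^2."""
    f = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)   # initial guess
    for _ in range(iters):
        data_term = upsample2(g - decimate2(f))         # D^T (g - D f)
        smooth_term = laplacian(laplacian(f))           # Q^T Q f
        f = f + beta * (data_term - lam * smooth_term)
    return f
```

The loop is the successive-approximation form of the iterative solution given earlier: each step moves f along the negative gradient of the regularized cost, trading fidelity to the stored LR image against smoothness.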
The impact of geology on the migration of fluorides in mineral waters of the Bukulja and Brajkovac pluton area, Serbia One of the hydrogeochemical parameters that classify groundwater as mineral water is the content of fluoride ions. Their concentration is both important and limiting for bottled mineral waters. Hydrochemical research of mineral waters in the surrounding area of the Bukulja and Brajkovac plutons, in central Serbia, was conducted in order to define the chemical composition and genesis of these waters. They are carbonated waters, with a fluoride content ranging from 0.2 up to 6.6 mg/L. Since the hydrochemical analyses showed variations in the major water chemistry, it was obvious that, apart from the hydrochemical research, some exploration of the structure of the regional terrain would be inevitable. For these purposes, some additional geological research was performed, creating an adequate basis for the interpretation of the genesis of these carbonated mineral waters. The results confirmed the significance of the application of hydrochemical methods in the research of mineral waters. The work tended to emphasize that the "technological treatment" for decreasing the concentration of fluoride in mineral waters occurs in nature, indicating the existence of natural defluoridization. Key words: fluorides, hydrogeochemistry, mineral waters, Bukulja and Brajkovac granitoid pluton, defluoridization. Introduction Research of mineral waters is of great importance due to the wide variety of their utilization and consumption. Some of them are used for balneotherapeutic purposes, others as medicinal waters, or in the form of bottled mineral water. It is significant to know the content of trace elements. Sets of norms and regulations on natural mineral waters define the minimum as well as the maximum allowed values of the content. Fluoride ions have an important place among trace elements; low values cause dental caries, while high values produce dental fluorosis or even skeletal fluorosis. The optimal values are between 0.5 and 1.5 mg/L (FORDYCE).
The impact of fluorides on the physiological functions of the human body is manifold. Fluorides affect normal endocrine function, as well as the function of the central nervous system and the immune system (Committee on Fluoride in Drinking Water, US National Research Council 2006). The overall assumption is that the fluoride content in some mineral waters is important because of the hyperactivity of this ion in the biological balance of elements in the human body. As was already mentioned, the emphasis is put on the content of fluoride ions in waters which can be used as bottled mineral waters. In this case, hydrogeochemical methods play an important role within hydrogeological investigations. Namely, defining hydrogeological conditions favorable for the migration of these ions aids greatly in recognizing the hydrogeological conditions required for the formation of mineral waters with the optimal content of fluoride. Lithology is definitely regarded as one of the key factors for defining the presence of a certain element. This kind of approach allows for the recognition of the main issues of hydrochemistry and hydrogeology, for example mineral water genesis, the conditions and forms of migration of fluoride in groundwater, etc. Based on previous investigations, the basic principles have been defined in reference to the migration of this important trace element in the mineral waters of Serbia (PAPIĆ 1994), and in later hydrochemical investigations, attention was paid to the interdependence of lithology and the presence of fluoride in mineral water. Different fluoride-containing minerals are the main sources of fluorides in soil and groundwater (TIRUMALESH 2006; SHAJI 2007). Methods Samples of mineral waters were collected during the investigation period in 2010-2011. Water samples were taken from eight representative localities in the area of the Bukulja and Brajkovac granitoid pluton, and 16 physico-chemical parameters were determined in these samples, following standard and official methods of analysis. The groundwater samples were filtered through a 0.4 µm membrane on site. Unstable hydrochemical parameters were measured on site, immediately after collection of the sample, by potentiometry (pH-meter, WTW) and conductometry (EC, WTW). The major anions and fluoride were measured by ion chromatography (IC Dionex ICS 3000 DC). The major cations were determined by inductively coupled plasma - optical emission spectroscopy (ICP-OES, Varian). The Schlumberger water quality analysis software AquaChem and the USGS software Phreeqc were used for processing the hydrogeochemical data. The packages were used for the determination of the mineral saturation indexes and for the construction of charts. Results In the following text, eight characteristic localities of mineral waters, with different fluoride contents, are described. They are located in the area of Bukulja Mountain and Brajkovac Village in central Serbia, 60 km south of Belgrade (Fig. 1).
Geology The region of Bukulja is dominated by a horst structure, which is in the form of an elongated block that stretches ESE-WNW and can be clearly discerned. It is composed of Paleozoic psamite-pelite sediments, which, due to regional and contact metamorphism, first transformed into sericite schists and phyllite, then into micaschists, and finally into sericite schists and gneisses which form a contact aureole of the Tertiary pluton bodies. The immediate cover of the Bukulja crystalline rock is composed of Cretaceous basal clastic limestones and flysch sediments, which, in the course of the intrusion of the Bukulja granite monzonite and the Brajkovac granodiorite, underwent some contact metamorphic changes. Hydrogeochemistry From the hydrochemical viewpoint, there are three types of mineral waters, as indicated on the Durov diagram (Fig. 3 I, II and III). The first type is sodium hydrogencarbonate water (Čibutkovica, Rudovci, Darosava, Arandjelovac). They are mineral waters (TDS 1.7-3.8 g/L) with a carbon-dioxide content of 0.6-1.05 g/L. They have rather high contents of strontium, lithium, silicon and fluoride. The fluoride content ranges from 0.7 to 6.6 mg/L. Among other macrocomponents, it is worth mentioning the contents of calcium ions, which range from 60 to 204 mg/L. The values of the genetic coefficient, rNa/(rCa+rMg) (r is the reacting concentration in % eqv.), range from 2.3 to 10. These mineral waters are genetically confined to Paleozoic schists and granite gneisses. The favorable migration of fluorides is promoted by the slightly acid environment (pH around 6.5), carbon dioxide in the gas composition, the sodium hydrogencarbonate content and the relatively low calcium ion values (Table 2). The second hydrochemical type of mineral waters are the sodium hydrogencarbonate-calcium waters (Garaši, Brajkovac and Onjeg), with high contents of strontium, lithium and silicon. The fluoride content ranges from 0.2 to 1 mg/L. Among the macrocomponents in their chemical composition, the high calcium ion content, ranging from 240 to 400 mg/L, is worth mentioning. The genetic coefficient values rNa/(rCa+rMg) range from 0.4 to 1.3. These mineral waters occur at the contacts of Paleozoic schists with Cretaceous sediments. As a result of the extremely high calcium values, the fluoride ion contents are an order of magnitude lower compared to the previous type of mineral water. The third type of mineral water is calcium hydrogencarbonate water (Kruševica). The mineralization is about 1.55 g/L, with a carbon dioxide content of about 0.7 g/L. This type has higher strontium and silica contents, but the contents of the other micro components are not elevated. The value of the genetic coefficient rNa/(rCa+rMg) is about 0.3. The calcium content is extremely high and reaches 460 mg/L; consequently, the fluoride ion contents are as low as 0.36 mg/L. Discussion and conclusions Correlation diagrams (Fig. 4) show a positive correlation between the fluoride content and TDS, as well as between fluoride and the sodium content. It is also obvious from these diagrams that high concentrations of fluoride are present in waters with high values of the genetic coefficient (rNa/(rCa+rMg)). This was generally expected, considering that decomposition processes of silicate and aluminosilicate minerals occur in the majority of these waters (in the presence of CO2), resulting in a carbonated, sodium hydrogencarbonate composition of the water (Fig. 3).
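A minimal sketch of how the genetic coefficient quoted above can be computed from routine analyses follows; the equivalent weights are standard values, and the example concentrations are hypothetical rather than taken from Table 2 (the ratio is the same whether reacting concentrations are expressed in meq/L or in percent equivalents):

```python
# Equivalent weights in mg per milliequivalent: atomic weight / charge.
EQ_WEIGHT = {"Na": 22.99 / 1, "Ca": 40.08 / 2, "Mg": 24.31 / 2}

def genetic_coefficient(na_mg_l: float, ca_mg_l: float, mg_mg_l: float) -> float:
    """rNa/(rCa + rMg) from concentrations given in mg/L."""
    r_na = na_mg_l / EQ_WEIGHT["Na"]
    r_ca = ca_mg_l / EQ_WEIGHT["Ca"]
    r_mg = mg_mg_l / EQ_WEIGHT["Mg"]
    return r_na / (r_ca + r_mg)

# Hypothetical sodium hydrogencarbonate water: Na 700, Ca 100, Mg 30 mg/L.
print(round(genetic_coefficient(700, 100, 30), 1))  # ~4.1
```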
Calcium ions are negatively correlated with fluoride ions, because the content of fluoride in water is limited by the solubility product of calcium fluoride (the more calcium, the less fluoride in water). It is obvious from Fig. 4 that low fluoride concentrations (< 0.5 mg/L) appear in waters where the concentrations of calcium ions are elevated (> 200 mg/L). Saturation indexes (SI) of fluorite and calcite were calculated using chemical thermodynamics, and the obtained values indicated that the mineral waters are mainly unsaturated with respect to fluorite and oversaturated with respect to calcite (Table 3 and Fig. 4). There are two exceptions: the mineral water from Darosava, which is mildly saturated with respect to fluorite, and the mineral water from Arandjelovac, which is in equilibrium with fluorite. The fact that these two mineral waters differ from the rest of the analyzed waters can be observed on every correlation diagram: numbers 3 (Darosava) and 4 (Arandjelovac) are always significantly separated from the rest of the symbols, i.e., mineral waters, on the diagrams. The fact that the majority of the analyzed waters are unsaturated with respect to fluorite is explained by the elevated concentrations of calcium (and consequently low concentrations of fluoride). The conclusion is that precipitation of fluorite is not possible under these hydrochemical conditions. By comparing the geological and tectonic characteristics and the results of the hydrochemical research, it was established that there is an evident connection between the geological structure of the Bukulja substrate and the hydrogencarbonate mineral water genesis. It was concluded that, apart from lithology, joint fabrics and larger dislocation structures are of crucial importance for the water chemistry in the studied region. In addition, it should be stated that smaller ruptures determine the type of porosity that enables the accumulation of groundwater in the rock mass and its chemical transformation, while larger dislocation forms determine the stream flows of the regional water circulation. For better perception of the correlation between certain spring areas, a hydrochemical map was constructed with the major geological structures along with the hydrochemical properties of the spring locations (Fig. 1).
Table 2. Representative localities of carbonated mineral waters in the investigated area: macro and micro components.
In order to present clearly the correlation between geological and hydrochemical parameters, transversal and diagonal cross sections were drawn, displaying the basic structures and lithologic properties of the rocks (Fig. 2). Associated with them are the following spring areas:
- Čibutkovica-Kruševica-Rudovci,
- Brajkovac-Onjeg-Darosava, and
- Garaši-Arandjelovac.
In accordance with the previous conclusions, it was established that the main spring areas of sodium hydrogencarbonate mineral waters (having a dominant sodium content) occur along the complex regional fault which borders the Bukulja block on its north-eastern side, whereas mineral waters with a dominant calcium content appear along the dislocation which borders its southern side. It is obvious that the northeastern dislocation (which connects Arandjelovac, Darosava and Rudovci) and the sets of joints that accompany it cut muscovite granite, gneiss, igneous and clastic flysch rocks, which in turn influence the formation of sodium waters.
In the spring area of Čibutkovica, the hydrogencarbonate mineral waters have distinctly sodium characteristics, which proves that the southern dislocation does not act as a groundwater recharge zone. Recharge is most probably realized in the metamorphic complex that forms the northern hinterland of the spring area. In contrast, along the southern dislocation, Bukulja crystalline rocks are at many places in contact with Upper Cretaceous clastic-carbonate flysch, which increases the amount of calcium in the spring areas of Garaši and Brajkovac. The Onjeg locality belongs to this group, its water having a higher content of calcium due to the dissolution of the thick limestone layers that form a tectonic block between the two reverse faults. The water of the Kruševica spring is characterized by a high content of calcium, but the contents of the micro components are not elevated, except for strontium and silica. This is due to a shallower zone of groundwater formation in the sandy Tertiary sediments. It should be emphasized that the two mineral waters belonging to the first type are bottled as the mineral water "Knjaz Miloš" from Arandjelovac (Bukovička spa) and "Dar voda" from Darosava. The fluoride concentrations in these waters are higher than 1 mg/L; hence, they are called fluoride waters. Due to the biological activity of fluoride, its content is limited to 5 mg/L for bottled mineral waters. If the level is higher than 1.5 mg/L, the statement "contains more than 1.5 mg/L of fluoride: not suitable for regular consumption by infants and children under 7 years of age" should appear on the label in close proximity to the name of the product. The European Directive on the exploitation and marketing of natural mineral waters and spring waters sets standards for excluding harmful elements such as fluoride ions, iron, manganese, sulfur and arsenic. It is obvious from the obtained results that some mineral waters in Serbia should be subjected to water treatment, which seems to be difficult in practice, and sometimes nature itself plays the role of a "technologist". Two possibilities are offered here: the right choice of locations for the abstraction of mineral water with a satisfactory chemical composition, which is a hydrogeologist's task, and the application of artificial defluoridization by means of aluminum oxide, lime, ion exchange resins or similar methods, which is a technologist's task. It is important to emphasize the impact and application of hydrochemical methods throughout hydrogeological research, which includes defining the conditions and factors of the migration of fluoride ions in mineral waters, defining the basic hydrochemical types of waters with high and low levels of these ions and of the gas composition, as well as the thermodynamic conditions in aquifers with accumulated mineral waters.
Table 3. Representative localities of carbonated mineral waters in the investigated area: water type, genetic coefficients and saturation indexes (SI).
Fig. 2. Geological cross sections of the Bukulja and Brajkovac granitoid massifs (legend is the same as for Fig. 1).
Table 1. Description of representative localities of carbonated mineral waters in the investigated area.
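As a closing numerical illustration of the saturation-index reasoning used above, the sketch below evaluates SI = log10(IAP/Ksp) for fluorite; it assumes ideal behavior (activities approximated by molar concentrations, with no ionic-strength correction of the kind Phreeqc performs) and a fluorite solubility product of roughly 10^-10.6 at 25 °C, both simplifying assumptions:

```python
import math

LOG_KSP_FLUORITE = -10.6  # approximate pKsp of CaF2 at 25 C (assumed value)

def si_fluorite(ca_mg_l: float, f_mg_l: float) -> float:
    """Saturation index for CaF2 <-> Ca2+ + 2 F-, from mg/L concentrations."""
    ca = ca_mg_l / 40.08 / 1000.0   # mol/L of Ca2+
    f = f_mg_l / 19.00 / 1000.0     # mol/L of F-
    return math.log10(ca * f ** 2) - LOG_KSP_FLUORITE

# Hypothetical calcium-rich water: Ca 400 mg/L, F 0.4 mg/L -> SI of about
# -0.75, i.e., unsaturated, consistent with the low fluoride contents.
print(round(si_fluorite(400, 0.4), 2))
```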
A new instrumented method for the evaluation of gait initiation and step climbing based on inertial sensors: a pilot application in Parkinson's disease Background Step climbing is a demanding task required for personal autonomy in daily living. Anticipatory Postural Adjustments (APAs) preceding gait initiation have been widely investigated, revealing them to be hypometric in Parkinson's disease (PD), with consequences for movement initiation. However, only few studies have focused on APAs prior to step climbing. In this work, a novel method based on wearable inertial sensors for the analysis of APAs preceding gait initiation and step climbing was developed to further understand dynamic balance control. The validity and sensitivity of the method have been evaluated. Methods Eleven PD and 20 healthy subjects were asked to perform two transitional tasks, from quiet standing to level walking and to step climbing, respectively. All the participants wore two inertial sensors, placed on the trunk (L2-L4) and laterally on the shank. In addition, a validation group composed of healthy subjects and 5 PD patients performed the tasks on two force platforms. The correlation between parameters from wearable sensors and force platforms was evaluated. Temporal parameters and trunk acceleration from PD and healthy subjects were analyzed. Results Significant correlation was found for the validation group between temporal parameters extracted from wearable sensors and force platforms, and between the medio-lateral component of trunk acceleration and the corresponding COP displacement. These results support the validity of the method for evaluating APAs prior to both gait initiation and step climbing. Comparison between PD subjects and a subgroup of healthy controls confirms a reduction in PD of the medio-lateral acceleration of the trunk during the imbalance phase in the gait initiation task and shows similar trends during the imbalance and unloading phases of the step climbing task. Interestingly, PD subjects presented difficulties in adapting the medio-lateral amplitude of the imbalance phase to the specific task needs. Conclusions The validity of the method was confirmed by the significant correlation between parameters extracted from wearable sensors and force platforms. Sensitivity was proved by the capability to discriminate PD subjects from healthy controls. Our findings support the applicability of the method to subjects of different ages. This method could be a valid instrument for a better understanding of feed-forward anticipatory strategies. Background The ability to move safely during level walking and stair negotiation is a relevant aspect to guarantee success in performing many activities of daily living (ADLs), such as maneuvering over a curb or accessing public environments and public transport [1]. Stair negotiation (i.e. ascending and descending stairs) is a demanding and hazardous task for frail people, in particular for older adults and subjects affected by neuromotor disorders, such as Parkinson's disease (PD). Compared to level walking, stair climbing necessitates a greater range of motion [2-5] and greater moments at the ankle, knee and hip joints [2,3,6,7], and these requirements can force older adults to use almost their maximal motor capabilities [8], with a consequent increase in the risk of falling. It is reported that falling on stairs is the second most common type of fall in the elderly, and that approximately 75% of all injurious falls on stairs occur in people aged 65 years or older [9].
Moreover, it was demonstrated that subjects affected by PD have an increased risk of falling compared to healthy controls [10], and that Fear Of Falling (FOF) in the PD population is strongly dependent on walking difficulties, turning hesitation and a limited ability to climb stairs [11]. Previous studies showed that these functional limitations are highly associated with alterations in dynamic balance control and with poorly coordinated anticipatory postural adjustments (APAs) prior to voluntary limb movements [12]. APAs represent the transient phase between quiet standing and a dynamic condition chosen voluntarily, such as walking, stepping up or down a stair, and stepping over an obstacle [13]. They involve complex interactions between neural and biomechanical factors that serve to maintain postural stability by compensating for destabilizing forces associated with moving a limb [12]. In the case of gait initiation, APAs act to accelerate the center of body mass (COM) forward and laterally over the stance foot by moving the center of pressure (COP) posteriorly and toward the stepping leg. Considering COP displacements, APAs can be divided into two different phases [14]: firstly, the Imbalance Phase, characterized by an initial displacement of the COP backward and toward the stepping (leading) foot, and then the Unloading Phase, in which the COP shifts laterally toward the stance (trailing) foot. It was demonstrated that APAs are essential to create appropriate initial dynamic conditions [15], that they are affected by modifications of motor behavior due to aging [15,16] and neurological disorders such as Huntington's chorea [17] and Parkinson's disease [14,16,18-21], and that they are dependent on the specific task, i.e. stepping forward or upward [13,22-24]. Given the great importance of APAs in the control of dynamic balance, previous studies have suggested including their analysis to evaluate disease progression in patients with neurological disorders [17], as well as to detect their early clinical signs [18,19]. APAs related to gait initiation are usually recorded using force plates, electromyography, and motion-analysis systems [14,18]. Although all these systems have been proven effective, their cost and complexity limit their application in clinical practice. Instrumented methods based on low-cost and easy-to-manage inertial sensors were developed in recent years to investigate human balance and postural sway during quiet stance [25,26] and to perform instrumented tests for the evaluation of balance deficits and risk of falling [27,28]. Concerning APAs, inertial solutions were previously developed only for level walking [19,29,30], but not for stair negotiation. Furthermore, in the majority of these studies the analysis was focused only on the imbalance phase, not investigating the subsequent unloading phase, which is indeed essential for a correct transition from bi- to mono-pedal stance. On the basis of the above considerations, in the present study an easy-to-administer instrumented method based on wearable inertial sensors was developed and applied to healthy subjects and persons affected by PD to analyze the initiation of level walking and step climbing in a typical physical rehabilitation setting: in particular, considering the importance of the unloading process in balance control during the transition from quasi-static to dynamic conditions, a novel algorithm was developed to recognize the initial and final frames of the unloading phase, allowing its subsequent analysis.
Aims of this work were to test the validity and sensitivity of the proposed method by: i) validating it against force plate recordings, and ii) evaluating its ability to differentiate APAs of PD subjects from APAs of healthy controls. Methods Participants Twenty healthy volunteers and eleven subjects with PD took part in the study. Healthy subjects were excluded if they presented any neurological disorders, if they used orthotic devices or had artificial joints, or if they were under medication that could affect balance or locomotor functions. PD subjects were recruited within a group of patients involved in a neuromotor rehabilitation program administered at our rehabilitation institute. They were included in the study if they fulfilled the following inclusion criteria: diagnosis of idiopathic Parkinson's disease, Hoehn and Yahr (H&Y) stage [31] between 2 and 4, Mini Mental State Examination (MMSE) score [32] higher than 24, ability to stand unsupported for more than 10 s, ability to walk for at least 3 m without any walking aid, and ability to step up onto an 18-cm high step. Patients were clinically rated by a trained examiner on the H&Y scale and on the Motor Section III of the Unified Parkinson's Disease Rating Scale (UPDRS) [33] immediately before the beginning of the experimental sessions. Demographic and clinical characteristics of the PD subjects are reported in Table 1. Patients were tested while they were on their routine therapy. All the 20 healthy subjects and a subgroup of 5 PD patients (age 73.4 ± 6.1 years, range 65-82 years, 2 females) were included in a validation group (VG) for investigating the validity of the proposed method. The eleven oldest subjects of the twenty healthy volunteers (age 66.6 ± 6.1 years, range 60-77 years, 5 females) were selected as healthy controls (HC) for the comparative analyses. The ages of the HC were comparable to those of the PD subjects (p-value = 0.09). All the participants signed informed consent forms approved by the local Ethical Committee. Experimental equipment All PD subjects and healthy controls wore 2 inertial sensors (TMA, Tecnobody, Dalmine, Italy) embedding a 3D accelerometer (range ± 5 g) and a 3D gyroscope (range ± 2000°/s). Linear acceleration and angular velocity data were sampled at 50 Hz and transmitted to a remote PC through a Bluetooth wireless connection for subsequent offline analysis. As shown in Figure 1, one sensor was placed on the posterior trunk, in correspondence with the L2-L4 vertebrae, with the sensing axes (x, y and z) oriented along the body vertical, medio-lateral (ML) and antero-posterior (AP) directions, respectively. The second sensor was placed proximally on the lateral aspect of the shank of the first stepping leg, with the z-axis oriented along the limb medio-lateral direction. Sensors were fixed over clothing through anti-slip elastic bands. Ground reaction forces and COP displacement of the VG subjects tested in the motion analysis laboratory were measured by means of two force plates (Kistler Gmbh, Winterthur, Switzerland) with a sampling frequency of 800 Hz, considered the gold standard for APA analysis (Figure 2a). Experimental protocol Subjects were asked to perform two different transitional tasks: 1) quiet standing to level walking (gait initiation); and 2) quiet standing to single step climbing (step climbing). Three consecutive repetitions of each task were recorded in the above-mentioned order.
At the beginning of each trial, subjects stood upright for 10 s in a comfortable position with the arms laying on their sides, wearing flat shoes with no heels: no given distances between the feet were imposed, in accordance with protocols developed in previous studies about anticipatory postural strategies preceding stepping upward [22][23][24] and over an obstacle [13]. As soon as they received a vocal command from the experimenter, participants started the task execution. In the first task, subjects had to walk along a straight trajectory for about 3 m, while in the second one, they were asked to step up onto the first level of a two-step staircase. Each step measured 18 cm in height, 38 cm in width, and 34 cm in depth. The step dimensions were chosen to be among the most frequently encountered in public places and new residential buildings. Both gait initiation and step climbing were executed by all the participants, both healthy and PD subjects, starting with their right leg, as reported to be the dominant one, at self-selected speed. Six of the eleven PD patients were tested in a typical rehabilitation setting before the beginning of their conventional physiotherapy session while all the members of the validating group, composed of the 20 healthy subjects and the five PD patients who accepted to be tested outside the rehabilitation gym, executed the tasks in a motion analysis laboratory equipped with two force plates embedded in the floor (see previous section). In the laboratory, VG subjects were required to start the gait initiation task with both feet on the first force plate and then to step forward on the second platform, while in the step climbing task, they were asked to stand upright with both feet on the first force plate and then stepping up onto the lower step of the staircase placed in front of them on the second force plate. Data processing After data recording, signals from force plates and inertial sensors were processed to analyze the anticipatory postural adjustments preceding gait initiation and step climbing. COP displacements recorded from the force plate were filtered with a fourth order, zero-lag, low-pass Butterworth filter with a cut-off frequency of 10 Hz [19]. COP trajectory and vertical ground reaction force were then used to subdivide each task into the initial quasi-static APA phase, made up of the imbalance and unloading phases, and the subsequent dynamic phase corresponding to the swing of the first leading foot. For this purpose, 4 instants were automatically identified by a Figure 2 Laboratory setup for the analysis of COP displacement during APAs. a) Placement of the two force plates b) COP displacement in the medio-lateral (x) and antero-posterior (y) directions during the gait initiation process in a healthy subject. APA onset, Heel-Off, Toe-Off, and Foot Contact instants of the leading foot and the Toe-Off instant of the trailing one are reported. Imbalance (from APA onset to heel-off of the leading foot), unloading (from heel-off to toe-off of the leading foot), and swing (from toe-off to foot contact of the leading foot) phases are indicated. L1 line passing through the points representing the COP position at APA onset and at the toe-off of the trailing foot instants, and L2 line passing through the points representing the COP position at APA onset and at the toe-off of the leading foot instants are drawn. 
The 4 instants were: 1) APA onset, 2) heel-off, 3) toe-off, and 4) foot contact of the leading foot (see Figure 2b). In particular, the APA onset was detected with a threshold-based algorithm applied to the COP medio-lateral displacement, with the threshold set as twice the standard deviation (SD) of the signal during the quiet standing period preceding task initiation, as proposed in [19]. Heel-off and toe-off of the leading foot were detected as proposed in [14]: referring to Figure 2b, the toe-off of the trailing limb was detected as the last frame of the first force platform signal; the toe-off of the leading foot was then recognized as the instant at which the COP position attained the maximal distance (d1MAX) from the line passing through the two points representing the APA onset and the toe-off of the trailing limb (L1). Finally, the heel-off of the leading foot was computed as the frame at which the COP position attained the maximal distance (d2MAX) from the line passing through the two points representing the APA onset and the toe-off of the same foot (L2). The foot contact of the leading limb was recognized as the instant when the vertical ground reaction force of the second platform exceeded a threshold of 6.5% of body weight, as suggested in [34]. The same detection method was adopted for both the gait initiation and the step climbing tasks. Temporal instants were then extracted from the wearable inertial system data. The acceleration signals recorded at trunk level were transformed to a horizontal-vertical coordinate system [35] and filtered using a fourth-order, zero-lag, low-pass Butterworth filter with a cut-off frequency of 3.5 Hz, as proposed by Mancini et al. [19]. The same filter was also applied to the angular velocity data recorded by the sensor placed on the shank. The APA onset was detected with a threshold-based algorithm applied to the ML acceleration of the trunk sensor [19], with the threshold set as the SD of the signal during the quiet standing period preceding task initiation, multiplied by a factor A. The shank angular velocity around the ML axis was used to identify the heel-off and toe-off instants, as shown in Figure 3. In particular, the first peak of the signal (Ωpk) was detected; the heel-off was then estimated as the first instant, following the APA onset, at which the angular velocity exceeded Ωpk multiplied by a factor H. Toe-off was identified as the first instant, following the peak, at which the signal fell below Ωpk multiplied by a factor T. The initial calibration of the thresholds was performed on the data collected from the VG subjects, tested in the motion lab with both force plates and inertial sensors. During the calibration procedure, different sets of temporal instants were computed by varying the multiplicative parameters A, H, and T. In particular, factor A was varied between 1 and 5 with unitary steps, while H and T were varied between 0 and 1 with steps of 0.01 and 0.05, respectively.
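A compact sketch of the two threshold rules just described follows; the factors A, H, and T are those defined in the text, while the function names, array conventions, and use of NumPy are our assumptions.

```python
# Illustrative implementation of the threshold-based event detection.
import numpy as np

def detect_apa_onset(ml_signal, quiet_end_idx, factor_a):
    """APA onset: first sample after quiet standing at which the ML signal
    deviates from its quiet-standing mean by more than factor_a * SD."""
    quiet = ml_signal[:quiet_end_idx]
    over = np.abs(ml_signal[quiet_end_idx:] - quiet.mean()) > factor_a * quiet.std()
    # assumes an onset actually occurs in the trial (argmax finds first True)
    return quiet_end_idx + int(np.argmax(over))

def detect_heel_toe_off(shank_gyro_ml, onset_idx, factor_h, factor_t):
    """Heel-off: first instant after APA onset at which the shank ML angular
    velocity exceeds H * Omega_pk; toe-off: first instant after the peak at
    which it falls below T * Omega_pk."""
    peak_idx = onset_idx + int(np.argmax(shank_gyro_ml[onset_idx:]))
    omega_pk = shank_gyro_ml[peak_idx]
    heel_off = onset_idx + int(np.argmax(shank_gyro_ml[onset_idx:] > factor_h * omega_pk))
    toe_off = peak_idx + int(np.argmax(shank_gyro_ml[peak_idx:] < factor_t * omega_pk))
    return heel_off, toe_off
```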
For each set of instants and for each subject, the mean absolute errors (MAEs) between the instants calculated from force plate data and the frames extracted from the inertial sensor signals were computed and averaged over all subjects. The final values of A, H, and T were then chosen as those which minimized the averaged errors. Finally, the foot contact instant was estimated as the median point between the second peak of the angular velocity and the preceding zero-crossing event; this point was chosen as the one that minimized the MAEs. The set of extracted thresholds was then applied to all subjects, including the PD patients tested in the rehabilitation gym, without any further use of the force plates. After the event detection algorithm was applied, the following spatio-temporal parameters were computed from both the COP displacement and the trunk accelerations.

Temporal parameters
Imbalance phase duration: from APA onset to the heel-off of the leading foot.
Unloading phase duration: from the heel-off to the toe-off of the leading foot.
APA duration: from APA onset to the toe-off of the leading foot, i.e., the sum of the imbalance and unloading phase durations.
Swing phase duration: from the toe-off to the foot contact of the leading foot.
Step duration: from APA onset to the foot contact of the leading foot.

Spatial parameters
Imbalance phase amplitude in the ML (AP) direction: i) calculated from force plate data as the difference between the COP ML (AP) position at heel-off and the COP ML (AP) position at APA onset; ii) estimated from the inertial sensor signals as the difference between the trunk ML (AP) acceleration measured at heel-off and that measured at APA onset.
Unloading phase amplitude in the ML (AP) direction: i) calculated from force plate data as the difference between the COP ML (AP) position at toe-off and the COP ML (AP) position at heel-off; ii) estimated from the inertial sensor signals as the difference between the trunk ML (AP) acceleration measured at toe-off and that measured at heel-off.

Spatial parameters were computed only for the APA phase, owing to the quasi-stationary condition required by Moe-Nilssen [35]. Recognizing that, during APAs, COM and COP typically act as if they were reciprocally linked (i.e., in the imbalance phase the COM moves forward and laterally over the stance foot, while the COP moves posteriorly toward the stepping foot), and considering results already reported in the literature [19], we hypothesized that i) lower trunk accelerations are significantly correlated with COP displacements during APAs and that ii) lower trunk acceleration data can therefore be used to estimate force platform variables.

Statistical analysis

For each subject, variables were averaged over the three trials of each test. Parametric statistical tests were used for the analysis, as data normality and homoscedasticity were confirmed by Shapiro-Wilk's W test and Bartlett's test, respectively. Mean absolute errors (MAEs) between the temporal instants extracted from force plate data and from the inertial sensors were compared among young adults, older healthy subjects, and PD patients using an ANOVA. The concurrent validity of the proposed method for evaluating APAs was investigated through a linear regression analysis between the parameters extracted from the force plate and the corresponding ones computed from the inertial sensors, as proposed in previous studies [19,25].
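The calibration described above is a plain grid search; the sketch below assumes a hypothetical detect_events wrapper around the previous functions and a list of validation-group recordings carrying force-plate reference events.

```python
# Grid-search calibration of A, H, T against force-plate events (sketch).
import itertools
import numpy as np

a_grid = np.arange(1, 6)               # A: 1..5, unit steps
h_grid = np.arange(0.0, 1.001, 0.01)   # H: 0..1, step 0.01
t_grid = np.arange(0.0, 1.001, 0.05)   # T: 0..1, step 0.05

def mae(est, ref):
    return float(np.mean(np.abs(np.asarray(est) - np.asarray(ref))))

best_score, best_params = float("inf"), None
for a, h, t in itertools.product(a_grid, h_grid, t_grid):
    # detect_events(...) and recording.fp_events are hypothetical names:
    # the former runs the threshold algorithm on one recording, the latter
    # holds the gold-standard instants extracted from the force plates.
    scores = [mae(detect_events(rec, a, h, t), rec.fp_events)
              for rec in validation_recordings]
    score = float(np.mean(scores))     # MAE averaged over all VG subjects
    if score < best_score:
        best_score, best_params = score, (a, h, t)
```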
Pearson's correlation coefficient r and the related p-value were therefore calculated on the data recorded from the VG subjects tested in the motion lab. For each parameter, a Student's t-test was adopted to detect differences between PD patients and the subset of age-comparable control subjects (HC). Finally, the above-mentioned temporal and spatial parameters were compared between the two tasks (gait initiation and step climbing) using a paired t-test. The level of significance was set at 0.05 for all analyses, which were performed with R (R Foundation for Statistical Computing, Vienna, Austria).

Validity of the method

The validity of the proposed method was assessed on the data of the VG subjects tested with both inertial sensors and force plates. Table 2 shows the values of the multiplicative factors (A, H, and T) used by the threshold-based event detection algorithm, the corresponding mean absolute errors (MAEs) between the instants computed from inertial sensor signals and the frames identified from force plate data, and the percentage errors relative to the step duration. The highest error (6.3%) was associated with the detection of the APA onset in the step climbing task. No statistically significant differences in MAEs were noticed between the two tasks (p = 0.79) or between younger adults (<60 years), elderly subjects (>60 years), and PD patients (p = 0.73). As reported in Table 3, a significant linear correlation was found between COP medio-lateral displacements and the corresponding trunk accelerations in both tasks, while no correlation was found for the antero-posterior features. A significant linear correlation between the two methods was also observed for the duration of the whole test and of its phases.

Differences between PD subjects and age-comparable controls (sensitivity)

To assess the sensitivity of the proposed method, PD patients were compared with healthy controls (HC) considering only the temporal and medio-lateral spatial parameters extracted from the inertial sensors, as their validity was proven in the former analysis. The results are shown in Table 4, while Figure 4 shows examples of the trunk acceleration signal recorded from a representative control and a PD subject in the level walking and step climbing tasks. Regarding the imbalance phase, trunk ML acceleration was significantly smaller in PD subjects than in HC in both level walking and step climbing. Furthermore, as shown in Figure 4a-b, control subjects showed a significant increase of the medio-lateral acceleration during step climbing with respect to level walking (level walking: 0.19 ± 0.08 m/s²; step climbing: 0.26 ± 0.13 m/s²; p = 0.01). No such difference was found in the PD group (level walking: 0.08 ± 0.12 m/s²; step climbing: 0.09 ± 0.15 m/s²; p = 0.78) (see Figure 4c-d). Regarding the unloading phase, a significant reduction of the ML acceleration in PD subjects was found in the step climbing task but not in level walking. As for the temporal parameters, a statistically significant difference between the two groups was found only in the swing phase duration, which was lower in PD subjects than in HC. In addition, the correlations between the investigated parameters and the UPDRS III scores were not significant, in accordance with [19].
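For completeness, the statistical pipeline (Pearson correlation for concurrent validity, an unpaired t-test for group differences, a paired t-test for task differences) can be sketched as follows; all arrays are random placeholders standing in for the per-subject averages.

```python
# Sketch of the statistical analyses reported above (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fp_param = rng.normal(size=25)                         # force-plate values (VG)
imu_param = fp_param + rng.normal(scale=0.2, size=25)  # inertial-sensor values

r, p_r = stats.pearsonr(fp_param, imu_param)           # concurrent validity

pd_vals = rng.normal(0.1, 0.1, size=11)                # PD patients
hc_vals = rng.normal(0.2, 0.1, size=11)                # age-comparable controls
t_grp, p_grp = stats.ttest_ind(pd_vals, hc_vals)       # sensitivity

gait = rng.normal(size=11)                             # gait initiation
step = rng.normal(size=11)                             # step climbing
t_task, p_task = stats.ttest_rel(gait, step)           # task comparison
```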
All these findings are reinforced by the data extracted through the force plates from the HC subjects and the 5 PD patients enrolled in the validation group (Table 5). The only exception was the swing phase duration, which was slightly smaller in PD subjects but not statistically different from HC, in contrast to what the acceleration data showed; this was probably due to the small sample size. When comparing step climbing with level walking, similarly to the results obtained from the wearable inertial sensors, an increase of the medio-lateral COP displacement was observed in healthy subjects (level walking: 2.17 ± 0.70 cm; step climbing: 2.48 ± 0.97 cm; p = 0.04), but not in the five patients (level walking: 1.35 ± 0.74 cm; step climbing: 1.60 ± 0.59 cm; p = 0.27).

Discussion

In the present study, an instrumented method based on wearable inertial sensors was developed and applied to healthy subjects and to persons affected by PD to analyze the initiation of gait and step climbing. To our knowledge, this is the first study aimed at comparing the APAs prior to level walking and step climbing through wearable inertial sensors, and it represents the first attempt to investigate differences between the two tasks in a group of PD subjects under their usual medication state. The specific aims of this work were: i) validating the method against force plate recordings, and ii) evaluating its ability to differentiate APAs of PD subjects from APAs of healthy controls. These two aspects are discussed separately below.

Methodological aspects and validity of the procedure

The first objective of this work was to develop a method that allows APAs prior to level walking and stair climbing to be studied directly in a typical physical rehabilitation setting, without the need for expensive equipment such as force platforms. For this reason we chose low-cost, easy-to-use wearable inertial sensors, as previously proposed by other authors for the investigation of the gait initiation process [19,30]; in those studies, a single inertial measurement unit was used and the analysis was limited to the lower trunk acceleration during the imbalance phase, in which the COP shifts backward and toward the stepping foot. (Table 2 caption: Mean absolute errors of event detection between inertial sensors and force plates, for gait initiation and step climbing.) To our knowledge, no studies exist on the use of inertial sensors to analyze the subsequent unloading phase (from the heel-off to the toe-off instant of the leading leg), which involves a COP shift toward the trailing foot. Considering that correct unloading is essential for maintaining dynamic balance during the transition from bipedal to monopedal stance, in the present work we decided to include this specific aspect in the analysis. For this reason, a second sensor was applied on the lower limb to allow easier detection of the heel-off and toe-off frames from the shank angular velocity. The lack of easily detectable changes in both the acceleration and angular velocity signals at the heel-off and toe-off events compelled us to implement a threshold-based automated algorithm for the recognition of the heel-off and toe-off temporal instants. The developed solution required an initial calibration of the thresholds on the basis of force plate data. After this preliminary set-up, the algorithm was applied to all subjects without any further use of force plates.
The proposed procedure was validated on healthy subjects of different ages (from 23 to 77 years) and a subgroup of 5 PD patients by means of a comparison with force plate data, considered the gold standard. Analysis of the temporal frames extracted with the two systems (i.e., APA onset, heel-off, toe-off, and the subsequent foot contact of the leading foot) revealed mean absolute errors (MAEs) ranging from 0.05 s to 0.09 s. To our knowledge, no previous studies have evaluated the errors made by wearable inertial sensors in estimating specific movements of the leading limb; in the absence of terms of comparison, we considered the reported MAEs acceptable for the aim of the present study. No statistically significant differences in MAEs emerged from the comparisons between level walking and stair climbing, or between young adults (age < 60 years), healthy elderly subjects (age ≥ 60 years), and PD patients. This result suggests that the method is applicable with comparable accuracy to adults of different ages and to subjects affected by PD, in both tasks. Importantly, linear regression analysis for both level walking and stair climbing revealed a significant positive correlation between the temporal parameters (i.e., duration of the step and of each phase of the test) extracted from the inertial sensors and the same variables computed from force plate data. Regarding the spatial parameters, the amplitudes of APAs measured from COP displacement and estimated from the acceleration signals in the medio-lateral direction were significantly correlated, in accordance with [19,30]. No such correlation was found in the antero-posterior direction. This difference between the AP and ML directions could be ascribed to the following consideration: while the medio-lateral movements characterizing APAs can be considered mono-segmental (i.e., the entire body moves laterally around the feet to prepare the subsequent step, using mainly the ankle joint), antero-posterior movements can be considered multi-segmental, involving not only the ankle but also the hip joints, especially in elderly subjects [36]. For this reason, the link between COP AP displacement and trunk AP acceleration may be more complex than that observed in the ML direction, which would explain the lack of correlation found in the present results. In summary, the present results support the validity of the proposed method for evaluating the temporal aspects and the medio-lateral features of the APAs preceding both gait initiation and step climbing.

Application of the method to PD subjects

The method was applied to a group of PD subjects, and the results were compared with those of healthy controls (HC) of comparable age. Only the temporal parameters and the spatial variables related to the ML direction were considered, because their validity had been proven on the validation group. As a consequence of the good correlation with the force platform and of the applied transformation to a horizontal-vertical coordinate system [35], the trunk acceleration pattern registered by the waist-worn sensor can be considered reciprocally linked to the COP displacement pattern during APAs, as previously proposed by other authors [19,29,30]. In the case of level walking, a significant reduction of trunk medio-lateral acceleration was observed in PD subjects during the imbalance phase, confirming that APAs related to gait initiation are hypometric in PD [16,19,20].
On the contrary, the ML amplitude of the unloading phase was similar in both groups, confirming the results obtained by Mazzone et al. [21] on force plate data. The feed-forward postural preparation during the imbalance phase has the primary consequence of producing the COM disequilibrium needed to lower the load on the stepping leg and allow its forward and upward progression; a reduction of that perturbation could therefore be seen as an attempt to minimize postural instability [18,19,37,38]. Analysis of the temporal aspects of gait initiation did not reveal any difference between the two groups in either the imbalance or the unloading phase. This result contrasts with those of Crenna et al. [14] and Halliday et al. [16], who demonstrated a significant prolongation of both phases in PD patients. This discrepancy may be explained by differences in the medication state of the participants: the subjects included in the cited studies were in the OFF-medication state, whereas in the present work PD subjects were tested while under their routine therapy. In addition to the reduction of the ML trunk acceleration, our results revealed a significantly shorter duration of the first-step swing phase in the PD group. Even though step length was not considered in the present study, previous works demonstrated a significant reduction of this parameter during gait initiation [16,23,29]. On this basis, it can be speculated that the reduction in step duration found in the present study is related to the shortening of stride length and the increase in cadence that are typical of PD patients [39]. Furthermore, a significant reduction in the medio-lateral amplitude of the unloading phase was also present, suggesting that APAs prior to step climbing are more compromised than those preceding gait initiation. Previous electromyographic studies demonstrated that the preparation for stepping up is characterized by greater activity of the hip abductor muscles and an earlier onset of gluteus medius [24]; hence, the greater demand placed on the hip muscles, which are indeed weaker in PD subjects [40], could partly explain the significant reduction in medio-lateral acceleration noticed in both the unloading and the imbalance phases prior to step climbing. Interestingly, a further difference between the PD and HC groups emerged from the comparison between level walking and stair ascending APAs: in healthy subjects, the medio-lateral amplitude of the unloading phase prior to stepping upward was significantly larger than that preceding stepping forward, as found in previous studies [13,24]. This finding could be ascribed to the fact that stepping up is more challenging for ML balance control than level walking, as it presents the additional constraint of not stumbling with the leading foot on the step; this can be the reason for the larger medio-lateral unloading, which ensures that the center of mass remains safely within the contact area of the supporting foot [13]. No such difference was found in PD subjects, who showed similar medio-lateral amplitudes of the unloading phase in both tasks. This result is particularly interesting in light of the already published findings on healthy subjects; in those studies, the ability to scale the anticipatory postural strategies on the basis of task requirements [23,41,42] and the differences in APAs preceding stepping over an obstacle or up a stair [13,22-24] have been well documented.
This scaling can be considered a mechanism adopted by the central nervous system to safeguard balance during different transitional tasks. On the contrary, the absence of scaling found in the PD group could imply a difficulty in adapting feed-forward anticipatory strategies to different stepping tasks, which seems consistent with deficits in neural control, proprioception [43,44], and muscle weakness, mainly of the hip joint [40]. Such reduced adaptability may play a role in the step climbing limitations that are typical of PD patients, with a consequent increase of anxiety and risk of falling [11]. Importantly, the above results were confirmed by the force plate data recorded from the elderly controls (HC) and the subgroup of 5 PD patients tested in the motion lab. Taken together, these results confirm the validity of the proposed method for evaluating APAs preceding gait initiation and step climbing.

Limitations of the study

Some limitations of the present study need to be addressed. A first limitation is the small number of subjects included; the proposed method should be applied to a greater number of patients in order to confirm these preliminary results. Second, the validation of the proposed procedure was performed on healthy subjects and only 5 PD patients; in fact, only 5 of all the tested PD subjects gave their consent to perform the test outside the rehabilitation gym, in the motion laboratory equipped with force plates. Moreover, considering that the aim of the present work was to verify the applicability of the method directly in a physical rehabilitation setting, we considered the described validation procedure suitable for a first pilot study. Nevertheless, future studies are warranted to validate the method on a greater sample of PD patients and, possibly, on subjects affected by other neurological disorders such as multiple sclerosis, and to test the reliability of the proposed variables. A third limitation is that no given distance between the feet was imposed in either task. Spontaneous, unconstrained foot placement on the floor was allowed in accordance with previous studies on the adaptation of anticipatory postural strategies for stepping upward [22-24] and over an obstacle [13], and was intended to guarantee the maximal level of comfort, self-confidence, and safety before attempting the requested complex transitional tasks without a walking aid. Finally, a further investigation to define the minimum detectable changes is desirable for a future application of the method to evaluate the course of the disease and possible rehabilitation effects.

Conclusion

In summary, the results of the present study showed that the proposed method based on inertial sensors i) is applicable in clinical settings to evaluate APAs preceding both gait initiation and step climbing, and ii) is able to discriminate the APAs of PD subjects under their usual medication state from those of healthy controls of comparable age. In particular, PD subjects showed altered APAs in both gait initiation and step climbing, with the latter task showing more pronounced alterations. Moreover, a difficulty in modifying feed-forward anticipatory strategies on the basis of the specific transitional task was demonstrated in the PD group. The validity of the method was verified through comparison with force plate data.
Even though caution must be taken due to the small sample size, these preliminary findings suggest that the proposed procedure could be a fast, easy-to-manage, and cost-effective solution for the quantitative characterization of APAs in PD patients in those clinical settings where force platforms are usually not available.
2016-05-12T22:15:10.714Z
2015-05-05T00:00:00.000
{ "year": 2015, "sha1": "31fcfb0b7126861eb216770909aaf7ee0bf119b3", "oa_license": "CCBY", "oa_url": "https://jneuroengrehab.biomedcentral.com/track/pdf/10.1186/s12984-015-0038-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e04d882ca8ddda6653867846cd91a7425279ef0", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
246294497
pes2o/s2orc
v3-fos-license
On the construction of Bessel house-moving and its properties

The purpose of this paper is to introduce the construction of a new stochastic process called "$\delta$-dimensional Bessel house-moving" and its properties. $\delta$-dimensional Bessel house-moving is a $\delta$-dimensional Bessel process hitting a fixed point at $t=1$ for the first time. We have two methods for the construction of this process: characterizing it using the first hitting time of a Bessel process, and obtaining it as the weak limit of conditioned Bessel bridges. We also study sample path properties of this process and give the decomposition formula for its distribution.

For $0 \le a < b$, let $R_a = \{R_a(t)\}_{t \ge 0}$ be a $\delta$-dimensional Bessel process starting from $a$, and let $\tau_{a,b}$ denote the first hitting time of the point $b$ by $R_a$:
$$\tau_{a,b} := \inf\{\, r \ge 0 \mid R_a(r) = b \,\}.$$
In addition, $X_n \xrightarrow{D} X$ means that $\{X_n\}_{n=1}^{\infty}$ converges to $X$ in distribution. In [2], it was shown that 3-dimensional Bessel house-moving (i.e., Brownian house-moving) can be obtained as the weak limit of the conditioned 3-dimensional Bessel bridge. Motivated by that work, we construct the $\delta$-dimensional Bessel house-moving $H_{a \to b}$ as the weak limit of the conditioned $\delta$-dimensional Bessel bridge.

Theorem 2. Let $0 \le a < b$. There exists an $\mathbb{R}$-valued continuous Markov process $H_{a \to b} = \{H_{a \to b}(t)\}_{t \in [0,1]}$ that satisfies …

Also, we study the sample path properties of the $\delta$-dimensional Bessel house-moving $H_{a \to b}$ and establish the regularity of its sample path. We show that the $\delta$-dimensional Bessel house-moving does not hit $b$ on the time interval $[0, 1)$.

Proposition 1.1. For every $\gamma \in (0, \frac{1}{2})$, the path of $H_{a \to b}$ ($0 \le a < b$) on $[0, 1]$ is locally Hölder-continuous with exponent $\gamma$: …

The sequence of positive zeros of the Bessel function $J_\alpha$ is denoted by $\{j_{\alpha,n}\}_{n=1}^{\infty}$. It is well known that, for $n = 1, 2, \dots$,
$$j_{\alpha,n} < j_{\alpha+1,n} < j_{\alpha,n+1}.$$

Bessel process and Bessel bridge

The $\delta$-dimensional Bessel process is a one-dimensional diffusion generated by
$$L_\delta := \frac{1}{2}\frac{d^2}{dx^2} + \frac{\delta - 1}{2x}\frac{d}{dx}.$$
In addition, for $0 \le a < b$ the $\delta$-dimensional Bessel bridge from $a$ to $b$ on $[0, 1]$ is defined by conditioning the $\delta$-dimensional Bessel process from $a$. For $t > 0$ and $x, y \in (0, \infty)$, we set … Let $a, b \ge 0$. For $0 < s < t$ and $x, y > 0$, we have the transition densities of $R_a$: $P(R_a(t) \in dy) = p_t(a, y)\,dy$. For $0 < s < t < 1$ and $x, y > 0$, we have the transition densities of the $\delta$-dimensional Bessel bridge from $a$ to $b$, $r_{a \to b}$, on $[0, 1]$: …

In the next lemma, we express the joint densities of the Bessel bridge and the maximal value of the Bessel process in terms of the maximal values of the Bessel bridges.

Proof. First, we prove (4). By the Markov property of $R_a$, we have … Therefore, because $\ldots \le c\, P(r_{a \to b}(t) \in dy)$, which completes the proof. In a similar manner to the proof of (4), we can obtain (5).

Proof of Theorem 1

In this section, we prove Theorem 1, which gives the construction of the Bessel house-moving by using the first hitting time of the Bessel process.

Proof. First, we prove (6). It holds that … For each $n$, we set … Then, by Theorem 6, we have … Let $T > 0$ be fixed. By Lemma A.1 and (3), there exist some $C_\nu > 0$ and $N_\nu \in \mathbb{N}$ such that …, by Lebesgue's dominated convergence theorem. Thus, we obtain … By (1), because there exists $C_\nu > 0$ such that … holds, the functions $x^{\nu+1} J_\nu(x j_{\nu,n}/b)$, $n = 1, \dots, N_\nu$, in the first term on the right-hand side of (8) are integrable with respect to $x$ on $[0, b]$. In addition, since … holds, the function $x^{\nu}\bigl(1 + \frac{x\pi}{b}\bigr)$ in the second term on the right-hand side of (8) is integrable with respect to $x$ on $[0, b]$.
Therefore, by Lebesgue's dominated convergence theorem, … Let $L = \frac{1}{2}\frac{\partial^2}{\partial x^2} + \frac{2\nu+1}{2x}\frac{\partial}{\partial x}$ be the infinitesimal generator of $R_a$ and let $m(x)\,dx = 2x^{2\nu+1}\,dx$ be the speed measure of the Bessel process. Then, we obtain … So, we get … for $0 < a < b$ and $t > 0$. By differentiating these identities, we obtain … By using (10) and (11), we can also prove Lemma 3.1.

Proof. According to [6, Theorem 3.3], for all $x \in [0, 1)$ and $t > 0$, there exists a constant $C_\nu > 0$ such that … Hence, by Lemma 3.1, we can prove the assertion as follows: …

Proof. Using the Markov property of $R_a$, for $0 < t < u$, it holds that … Since the density of $\tau_{a,b}$ is the derivative of the distribution function, we obtain … We can calculate the first term of the right-hand side of (14) as … Therefore, by (14), (15), (6), (7), and L'Hôpital's rule, we can prove (12). Next, we prove (13). Using the Markov property of $R_a$, for $0 < s < t < u$, it holds that … Thus, it follows that … On the other hand, by (14), we obtain … Combining this equality, (15), (6), and L'Hôpital's rule, we can prove (13).

We prepare the following inequalities.

Proof. First, we prove inequality (1). By (2), there exists some $C_\nu > 0$ such that … Thus, by this inequality, it follows that … Next, we prove inequality (2). According to [4], there exists some $C_\nu > 0$ such that … Using this inequality and Theorem 6, we obtain the following estimate …, from which we can obtain our assertions. … (16)

Proof. By (16), it suffices to show the following identity: … Here, using Lemma 4.1, it holds that … According to L'Hôpital's rule, we obtain … On the other hand, by Lemma 3.3, for $\eta \in (0, 1)$ and $y \in (0, b+\eta)$, we have the following estimate: … Again, using L'Hôpital's rule, it holds that … for $y \in (0, b)$. Therefore, by (19), (20), and Lebesgue's dominated convergence theorem, we obtain … By this equality and (18), taking the limit $\eta \downarrow 0$ in (17) proves the assertion.

The following proposition implies that $h_b(s, x, t, y)$ satisfies the Chapman-Kolmogorov identity. Thus, it suffices to show that … According to Lemma 2.1, we can prove the assertion as follows: … By Proposition 3.1 and Proposition 3.2, $h_b(0, a, t, y)\,dy$ and $h_b(s, x, t, y)\,dy$ determine the continuous Markov process $H_{a \to b} = \{H_{a \to b}(t)\}_{t \in [0,1]}$. Therefore, the proof of Theorem 1 is completed.

Proof of Theorem 2

In this section, we prove Theorem 2, which gives the construction of the Bessel house-moving as the weak limit of the conditioned $\delta$-dimensional Bessel bridge.

Proof. By Lemma 4.1 and L'Hôpital's rule, we obtain our assertion.

In Section 3, we proved that the right-hand sides of (25) and (26) are the transition densities of the continuous Markov process $H_{a \to b} = \{H_{a \to b}(t)\}_{t \in [0,1]}$. Then, by Proposition 4.1 and Lemma A.2, we obtain the convergence $r_{a \to b} \,|\, K^-(b+\eta) \to H_{a \to b}$ as $\eta \downarrow 0$ in the finite-dimensional distributional sense. Therefore, all that remains in proving Theorem 2 is the tightness of the family $\{r_{a \to b} \,|\, K^-(b+\eta)\}_{0 < \eta < \eta_0}$ for some $\eta_0 > 0$. By $\lim$ … we can take $\eta_1 > 0$ so that …

Proof. According to Taylor's theorem, we can find $\theta \in (0, 1)$ so that … Using Lemmas 3.3 and 4.2, we obtain the following moment inequalities: … For each $\alpha > 0$, we can find a constant $C_{\alpha,\nu,a,b} > 0$ so that …, $s, t \in (0, 1)$.

Proof. By Lemmas 3.3 and 4.2, we have … Hence, because we have …, we obtain inequalities (1) and (2) as follows: … Next, we prove (3). We note that … $(s, x, t, y)\,dx\,dy$. By Corollary 1 and Proposition 4.2, we can apply Theorem 8 to $\{r_{a \to b} \,|\, K^-(b+\eta)\}_{0 < \eta < \eta_0}$ and obtain the tightness of this family.
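The Chapman-Kolmogorov identity referred to above is not displayed in the source; under the usual convention, and taking the state space of the house-moving to be $(0, b)$ (the process does not hit $b$ on $[0, 1)$), it would read:
$$h_b(s, x, u, z) \;=\; \int_0^{b} h_b(s, x, t, y)\, h_b(t, y, u, z)\, dy, \qquad 0 < s < t < u < 1.$$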
Therefore, we can differentiate the second identity of Theorem 6 term by term in some neighborhood of $\eta$. According to Proposition A.1 and Lebesgue's dominated convergence theorem, we can obtain the next corollary.

A.2 General results on continuous processes

In this subsection, we introduce some general results used in this paper. Their proofs can be found in [2].

Theorem 7 ([3, Chapter 2, Theorem 4.15]). Let $\{X_n\}_{n=1}^{\infty}$ be a family of $C([0,1], \mathbb{R}^d)$-valued random variables. If the family $\{X_n\}_{n=1}^{\infty}$ is tight and the finite-dimensional distributions of $X_n$ converge to those of some $X$, then $X_n \xrightarrow{D} X$ holds.

Lemma A.2. Let $a, b \in \mathbb{R}^d$, and let $X_n$ and $X$ be $\mathbb{R}^d$-valued Markovian bridges from $a$ to $b$ on $[0, 1]$ for $n \in \mathbb{N}$. Let $X_n$ and $X$ have the respective transition densities
$$P(X_n(t) \in dy) = q_n(t, y)\,dy, \qquad P(X_n(t) \in dy \mid X_n(s) = x) = q_n(s, x, t, y)\,dy,$$
$$P(X(t) \in dy) = q(t, y)\,dy, \qquad P(X(t) \in dy \mid X(s) = x) = q(s, x, t, y)\,dy,$$
for $0 < s < t < 1$, $x, y \in \mathbb{R}^d$, and $n \in \mathbb{N}$. If
$$\lim_{n \to \infty} q_n(t, y) = q(t, y) \quad \text{a.e. } y \in \mathbb{R}^d, \qquad \lim_{n \to \infty} q_n(s, x, t, y) = q(s, x, t, y) \quad \text{a.e. } (x, y) \in \mathbb{R}^d \times \mathbb{R}^d,$$
for $0 < s < t < 1$, then the finite-dimensional distributions of $X_n$ converge to those of $X$ as $n \to \infty$.

Lemma A.6. Let $S_1$ and $S_2$ be Polish spaces and let $X_n$ and $Y_n$ be random variables defined on $(\Omega_n, \mathcal{F}_n, P_n)$ that take values in $S_1$ and $S_2$, respectively. If $X_n$ and $Y_n$ are independent and $P_n \circ X_n^{-1}$ and $P_n \circ Y_n^{-1}$ converge to probability measures $Q$ on $S_1$ and $R$ on $S_2$, respectively, then $P_n \circ (X_n, Y_n)^{-1}$ converges to the product measure $Q \times R$.
2022-01-28T02:15:46.002Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "1ffe599f901a22a3823cb590175a20ab9240c78f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1ffe599f901a22a3823cb590175a20ab9240c78f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
256441802
pes2o/s2orc
v3-fos-license
Intestinal intussusception of Meckel's diverticulum, a case report and literature review of the last five years

ABSTRACT Meckel's diverticulum is the most common gastrointestinal tract anomaly. It arises from the incomplete closure of the omphalomesenteric conduit, resulting in a true diverticulum at the antimesenteric border of the ileum. Although the majority of patients are asymptomatic, they can present with inflammation, hemorrhage, intussusception, intestinal obstruction, and perforation, among others; it therefore constitutes an important differential diagnosis for acute abdomen. A 19-year-old female sought medical attention because of two months of intermittent diffuse abdominal pain, nausea, and diarrhea. The requested imaging tests, tomography and enterotomography, suggested a diagnosis of Meckel's diverticulum with some degree of intussusception. The patient underwent elective surgical treatment without complications and was discharged on the second postoperative day with clinical improvement. We also review publications on similar cases from the last five years.

❚ INTRODUCTION

Meckel's diverticulum (MD) is the most common gastrointestinal tract anomaly, with an estimated prevalence of 2% in the population. (1,2) It results from incomplete closure of the omphalomesenteric (vitelline) conduit, forming a true diverticulum at the antimesenteric border of the small intestine. Most cases are asymptomatic, and the diverticulum is usually discovered during examinations or surgery performed for other causes. Sometimes, however, there may be clinical symptoms; for example, inflammation produces a picture of diverticulitis, suggesting the differential diagnosis of an acute inflammatory abdomen. The diagnosis of MD is usually made by combining clinical suspicion with imaging tests, which may not have satisfactory sensitivity and specificity, sometimes requiring a surgical and anatomopathological approach to establish the diagnosis with certainty. Ectopic mucosa can be found within the MD, most frequently gastric and pancreatic mucosa, although these findings are not mandatory. Thus, the clinical picture of MD may include ulceration and bleeding in the gastrointestinal tract due to ectopic gastric mucosa. Meckel's diverticulum can cause other complications, such as hemorrhage, intussusception, intestinal obstruction, perforation and, very rarely, bladder-diverticular fistula and tumors. Treatment usually consists of surgical resection of the diverticulum. (3)

❚ CASE REPORT

A 19-year-old female patient sought emergency care with a complaint of diffuse abdominal pain, greater in the lower abdomen, associated with mild nausea and diarrhea. She had a history of intermittent chronic pain and had been under medical care at this service for the same reason for the previous two months, with abdominal ultrasound and laboratory tests within normal limits. New ultrasound and laboratory tests were requested, and analgesic medications were prescribed. The ultrasound again showed no abnormalities, and emergency computed tomography (CT) was performed, as shown in Figure 1; in the venous phase, it revealed a blind-ended loop in the right iliac fossa that could correspond to MD, notable for parietal thickening of the loop with contrast medium enhancement. No other significant changes were observed.
Complementing the study with oral contrast was suggested to better characterize the findings. The investigation continued with enterotomography and CT with oral and venous contrast, performed two days later, to better assess the hypothesis of MD in the arterial phase and to study the vasculature. The findings in Figure 2 confirm the presence of MD, mobile between the tomographic studies, located in the median/left paramedian region of the pelvis, superior to the bladder dome. There was significant mucosal thickening and hyperenhancement of its tip, with an image suggesting partial invagination of the tip. A feeding artery was highlighted running along the diverticulum and inserting at the tip. The peridiverticular adipose planes showed preserved attenuation. The remaining small-bowel loops had a normal distribution, without significant parietal thickening. No fistulous pathways, organized collections, or lymph node enlargement were observed. The patient remained clinically stable, with no changes in laboratory test results and no clinical signs of obstruction or other acute intestinal complications. Elective surgery was scheduled 19 days after the CT scan. The surgical procedure was videolaparoscopy, performed under general anesthesia. A Meckel's diverticulum with a wide base and partial invagination was identified (Figure 3). Segmental enterectomy with resection of the diverticulum and intracorporeal mechanical side-to-side anastomosis was the treatment chosen. The procedure continued with closure of the mesenteric gap, review of hemostasis, removal of the surgical specimen, and closure of the ports and dressings. The procedure was uneventful. Macroscopically (Figure 4), the resected material was described as a segment of small intestine measuring 7.0 cm in length and 3.0 cm in perimeter, presenting a smooth and shiny serosa, with a saccular area measuring 2.5 cm x 2.3 cm, located 2.5 cm from the end. The anatomopathological report was compatible with MD, described as a saccular projection of the enteric wall lined by gastric fundic epithelium with foveolar hyperplasia. The remaining enteric wall had a well-preserved architecture. Viable surgical margins and no morphological evidence of malignancy were noted. The patient evolved well clinically and was discharged on the second postoperative day. This study was approved by the Research Ethics Committee of Hospital Israelita Albert Einstein under CAAE: 59733422.0.0000.0071; #5.548.250.

❚ DISCUSSION

We performed a search in the PubMed database on February 8, 2022, looking for publications that contained the descriptors (intussusception) and (Meckel) or (invagination) and (Meckel) in their titles, to obtain articles similar to the present report. The search yielded 178 items, with a very broad timeline of publication. The first publication dates from 1902, a case of an adult American patient with an acute obstructive abdomen due to invagination of MD; interestingly, the author cites a similar presentation described by a colleague in 1884. (4) The most recent dates from 2022. We decided to revisit the reports of the last 5 years in this discussion, therefore excluding any article published outside this period. These 15 publications, all case reports, are summarized in Figure 5. Seven males and eight females were included.
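The title search described above can also be reproduced programmatically; a minimal sketch using Biopython's Entrez module is shown below, where the e-mail address is a placeholder and the exact field tags are our assumption about how the authors' descriptors map to PubMed query syntax.

```python
# Sketch of the PubMed title search described above (Biopython Entrez).
from Bio import Entrez

Entrez.email = "someone@example.org"  # required by NCBI; placeholder

query = ("(intussusception[Title] AND Meckel[Title]) OR "
         "(invagination[Title] AND Meckel[Title])")

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # the authors report 178 items for this search
print(record["IdList"][:5])   # first few PubMed IDs
```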
However, symptomatic presentations of MD tend to be more prevalent in males, with a ratio ranging from 1.5:1 to 4:1. (3)

Symptoms and presentation by age

Among the patients younger than 6 years old, the most prevalent symptom was vomiting (60%), followed by abdominal pain (40%), hematochezia (40%), lowered level of consciousness (40%), constipation (20%), and fever (20%); one patient was initially investigated for shaken baby syndrome. On physical examination, 40% had no changes, 20% had abdominal distension, 20% had abdominal rigidity, 20% had abdominal pain, and 20% (that is, one case) had lower-limb edema and was diagnosed with protein-losing enteropathy in addition to intestinal intussusception with necrosis due to MD. In the database, among children with symptoms due to MD, 46.7% had obstruction, 25.3% had gastrointestinal tract bleeding, and 19.5% had inflammation. (3) In the category of patients over 15 years of age, the most commonly described symptoms were abdominal pain in 90% of cases, nausea and/or vomiting in 60%, arrest of bowel movements and flatus in 10%, and hematochezia in 10%; 10% were asymptomatic (that is, one patient with no complaints who was diagnosed through an incidental finding during the investigation of another etiology). Regarding physical examination, 20% of the reports did not comment on this evaluation, while among the others there was a 50% prevalence of pain on abdominal palpation, 25% abdominal distension, 25% pain on decompression, and 38% no changes. From the available data on symptomatic MD in adults, 35.6% had obstruction, 27.3% had gastrointestinal tract hemorrhage, and 29.4% had inflammation. (3)

Complementary exams

In all of the studies cited in this article, at least one imaging test was requested. Laboratory findings of leukocytosis were described in 2 of the 15 reports. The most prevalent tomographic findings were intestinal loop distension and the diagnosis of intussusception, followed by free fluid in the cavity, a lesion with a target appearance, a tumor progressing in size, edematous thickening, and cystic lesions. The most cited sonographic finding was the "target sign," followed by other descriptions of intussusception, a report of double intussusception, a suspected volvulus, and an image with free fluid in the cavity.

Epidemiology

From a geographical perspective, the 15 publications came from five continents: one from America (United States), (5) four from Europe (Italy, (6) France, (7) Spain (8) and the United Kingdom), (9) one from Africa (Tunisia), (10) seven from Asia (Japan, (11-14) South Korea, (15) China (16) and Syria), (17) and two from Oceania (Australia). (18,19) The ages of the patients included in this sample ranged from newborns to 49 years. Although in the literature children up to 10 years of age represent more than 50% of symptomatic MD cases, (3) the incidence in the reports reviewed here proved to be slightly different. Two patients were less than 6 months of age, (6,10) three were between 1 and 6 years, (14,16,17) five between 15 and 30 years, (5,7,13,18,19) and five between 40 and 50 years. (8,9,11,12,15) As we have seen above, the symptoms and the pathological processes that cause them are not specific to MD, which poses a diagnostic challenge given the possibility of other etiologies of acute abdomen.
Thus, complementing the diagnosis with imaging examinations or, depending on the case, with surgical exploration is of paramount importance. Some commonly cited diagnostic tools are radiography, ultrasound, tomography, magnetic resonance imaging, angiography, arteriography, and nuclear scans with Tc-99m pertechnetate. (3)

Surgical findings

Regarding the intraoperative findings, in 46% of the 15 cases there was a description of necrosis/ischemia, including one report of perforation. In 33% of the cases there was diagnostic doubt, with only an intraluminal lesion/tumor detected, while the remaining cases (66%) had classic intussusception resulting from MD. One patient even had a double intussusception. Primary anastomosis was performed in all surgeries. Once the diagnostic hypothesis of MD has been raised, or another etiology requiring surgical treatment has been proposed as a differential diagnosis, direct observation of the MD provides the correct diagnosis. This can be achieved surgically, by laparoscopy or laparotomy, or with small-bowel endoscopy or capsule endoscopy; importantly, each method has its indications and reservations regarding specificity and sensitivity. As mentioned above, however, it is not uncommon for MD to be found incidentally in asymptomatic patients or during the investigation or management of other clinical situations, and its resection upon diagnosis has been advocated by some authors. (3)

Anatomopathological findings

All reports confirmed the diagnostic hypothesis of MD. In addition, 26.6% of patients had ectopic pancreatic tissue, 20% had ectopic gastric tissue, and 6.6% had both ectopic gastric and pancreatic tissue. One report did not describe the anatomopathological findings. In the literature, gastric ectopia is described as the most common ectopic tissue, present in 4.6% to 71% of symptomatic MD, followed by pancreatic tissue in 0% to 12%. These two tissues are responsible for 97% of the ectopias present in MD; however, duodenal or colonic tissue may also be present. (3)

Outcomes

In all cases described, good surgical recovery was observed, and the patients were discharged with clinical improvement. A surgical re-approach for lysis of adhesions 12 months after discharge was described in one patient.

❚ CONCLUSION

We describe a case report of a patient with Meckel's diverticulum and intestinal intussusception secondary to invagination of the diverticulum, with a clinical presentation and surgical and anatomopathological findings consistent with most cases published in the last five years. Considering that Meckel's diverticulum is a prevalent intestinal malformation, this etiology should be considered an important differential diagnosis when faced with abdominal complaints.

❚ AUTHORS' CONTRIBUTION

Dora Sandoval Schaedlich: conceptualization, data curation, and project administration. Pedro Custodio de Mello Borges: data collection and project administration. Arnaldo Lacombe: methodology. Renato Alonso Moron: resources and contributed data.
2023-02-01T16:13:21.891Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "e84b72e0e76e59345221c58c257b34ca724452e4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.31744/einstein_journal/2023rc0173", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f745d2922d4dc4a286714d2c4dfbc495b8897783", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
201135579
pes2o/s2orc
v3-fos-license
The Effect of Driver Engagement in Autonomous Driving based on Flow Experience

As the vehicle controls the driving itself, the driver's role changes to that of a supervisor only. This change in role directly affects the driver's behavior inside the vehicle. In particular, the driver and the system in semi-autonomous driving require not only more suitable interaction but also proper intervention and exact control. This study investigates driver behavior patterns in autonomous driving based on the 4-channel model of flow theory. To this end, we investigated driver engagement and behavioral change according to the 4 conditions produced by combining the NDRT's difficulty and the driver's skill. Driver engagement and mental workload were assessed with the Flow Short Scale (FSS) and NASA-TLX. The optimal experience and driver action guidelines are then determined, enabling appropriate interaction between the driver and the vehicle. The results of this study on driver engagement can contribute to solving issues of collaboration in intelligent systems as well as the level of user control.

Introduction

With the development of autonomous driving technology, most drivers no longer need to focus closely on typical driving. In other words, while the vehicle controls the driving itself, the driver's role changes to that of a supervisor only [1]. This change in role directly affects the driver's behavior inside the vehicle. In addition, driver behavior is determined by the level of automation and the attention given to driving tasks [2]. Driving time can be used for non-driving-related tasks (NDRTs) [3]. In particular, the role of the driver in semi-autonomous driving is more ambiguous than in manual or fully autonomous driving [4]. Therefore, the driver and the system in semi-autonomous driving require not only more suitable interaction but also proper intervention and exact control. In the case of semi-autonomous driving, the length of the take-over request (TOR) time in take-over scenarios has mainly been studied to ensure adaptation to the new system [2,3]. The mental workload of drivers performing NDRTs in highly autonomous driving has also been studied [1]. In particular, a driver performing NDRTs is required to reallocate visual attention during takeover [5], and drivers sometimes need to be prepared to grasp the steering wheel in case of an accident or traffic jam [5]. However, as drivers' adaptability to autonomous driving technology increases, they have become able to devote attention to NDRTs; this type of driving thus enables the driver to engage in NDRTs [3]. There are therefore limitations to studying driver behavior from the existing evaluation perspectives. The flow condition represents the optimal experience in task performance [9]: the state of an individual who performs an optimally challenging task with high concentration. It has been studied as a way to enhance user satisfaction and derive optimal experience [6]. According to flow theory, flow is experienced when the operator's skill level balances the challenge of the activity (Csikszentmihalyi [9]). Based on these theories, recent studies suggest that the flow condition can also occur in the autonomous vehicle environment [7,8]. For instance, driver behavior has been defined in terms of three conditions: Boredom, Flow, and Anxiety.
The Boredom condition, one of the possible conditions, arises when the activity's challenge is relatively low, while the Anxiety condition is induced when the task challenge is high. However, the flow model has been modified into a quadrant model (the 4-channel model) because of the inconsistent results of the typical 3-channel model [9]. The modified model suggests that flow is experienced only when both the operator's skill and the challenge of the activity are high. Therefore, it is necessary to apply the quadrant model, which can capture behavior change by classifying the skill level of the operator performing NDRTs in the autonomous driving environment. In addition, previous studies have suggested that the flow condition requires attentional resources at a level similar to mental workload [10], and others have suggested that the condition of the driver also influences take-over performance [5]. Thus, the purpose of this study is to examine the relationship between the flow condition of drivers performing NDRTs and mental workload, and to investigate how the 4 conditions affect driving performance.

Participants

In this study, 12 participants aged between 28 and 31 took part in the experiment (M = 29.25, SD = 1.29); 6 (50%) were male and 6 (50%) female. They had held their drivers' licenses for a mean of 6.67 years (SD = 1.72).

Apparatus

The driving simulator, MacBook, and iPad screen used in this study are shown in Figure 1(a). The driving simulator (Logitech) consisted of three 27-inch LCD monitors, a steering wheel, pedals, a driver's seat, and simulation software (OpenDS). The simulation software, OpenDS, was set to a conditionally automated (L3) system on the road, and the vehicle was set to drive repeatedly along an M-shaped road.

Experimental task and design

In a semi-autonomous driving environment, drivers can focus on NDRTs such as reading books or using smartphones [5]. These various NDRTs contribute to determining the demand level of the driver [3]; they include tasks such as Reading, Quiz, Writing, Tracking, n-Back, and Addition. According to the 4-channel model of flow theory [9], an operator can experience the flow condition only when both skill and the challenge of the activity are high. In particular, motivation can increase the skill level of the operator performing the NDRTs. Both extrinsic and intrinsic motivation can play this role [11]; verbal reinforcement (praise) and extrinsic rewards, especially, have positive effects on motivation [6]. We thus used motivation (extrinsic/intrinsic) to improve the participants' skill; this motivation was used to induce the Relaxation and Flow conditions of the 4-channel model. In this study, we gave participants Drawing and Addition tasks as the NDRTs, shown in Figure 1(b). These types of NDRT can produce differences in the driver's condition. The 2 tasks were adapted to suit our experiment, as shown in Figure 2; the Addition task and the Drawing task each had 2 difficulty levels, used to control the challenge of the activity in flow theory. The SIMPLE Addition task was the addition of two 2-digit numbers (each less than 50), such as 14+37, shown on the MacBook screen at the right-hand side of the driver; participants were asked to choose UP or DOWN as the correct answer. The COMPLEX Addition task was the addition of three 2-digit numbers (each less than 50), such as 18+28+13, and participants selected 50, UP, or DOWN.
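Taken together, task difficulty sets the challenge axis and motivation the skill axis of the quadrant model. A minimal sketch of the resulting condition mapping is given below; the normalized scales and the midpoint threshold are illustrative assumptions, not values from the study.

```python
# Illustrative mapping of skill/challenge onto the 4-channel flow model.
def flow_quadrant(skill, challenge, threshold=0.5):
    """Map normalized skill and challenge in [0, 1] to a quadrant label."""
    if skill >= threshold and challenge >= threshold:
        return "Flow"        # high skill, high challenge
    if skill >= threshold:
        return "Relaxation"  # high skill, low challenge
    if challenge >= threshold:
        return "Anxiety"     # low skill, high challenge
    return "Apathy"          # low skill, low challenge

# The 2x2 design: motivation raises skill; NDRT difficulty sets challenge.
for skill, challenge in [(0.2, 0.2), (0.8, 0.2), (0.2, 0.8), (0.8, 0.8)]:
    print((skill, challenge), flow_quadrant(skill, challenge))
```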
Likewise, the Drawing task had 2 difficulty levels, SIMPLE and COMPLEX. In the SIMPLE level, a few points were shown on the iPad screen at the right-hand side of the driver, and participants were asked to use the Apple Pencil to connect them.

Experimental procedure

The instructor informed the participants about the experiment and its purpose, obtained informed consent, and surveyed their demographic information. Since the participants had no prior experience of autonomous driving, they watched videos related to autonomous driving. Afterwards, they practiced driving for 15 minutes on the driving simulator, and preliminary practice of the NDRTs was performed in advance to help participants understand them. The experiment was then conducted using take-over scenarios. During autonomous driving, the participants naturally performed the NDRTs. Following [1], the semi-autonomous driving speed was set to 100 km/h. When the take-over request sounded, with a time budget of 8 seconds, the participants put their hands on the steering wheel and continued driving manually; this time budget has been claimed to be empirically safe for the quality of take-over scenarios [1]. In this experiment, the participants experienced a total of 8 NDRT combinations in random order, designed to lead them into the 4 conditions (Apathy, Relaxation, Anxiety, Flow); that is, they performed the 2 NDRTs under each of the 4 conditions. We used the perceived demand level item (9-point Likert scale) of the FSS (Flow Short Scale) to investigate whether the driver perceived a difference in the difficulty level of the NDRTs, and the 10-question FSS (7-point Likert scale) to assess the drivers' condition; the FSS has been used in previous studies to assess the level of engagement [6]. In addition, we used NASA-TLX to measure participants' mental workload. Finally, the reaction time was measured by the driving simulator software; the definition of reaction time used in this study is the "time between TOR and start of maneuver" [2].

Results

We investigated differences in the perceived demand level of the NDRTs between genders. Gender did not have a significant effect on any dependent variable, including the perceived demand level (F(1,10)).

Discussion and Conclusion

This study used flow theory and NDRT type to design 4 different skill/challenge conditions in a semi-autonomous driving environment. The results indicate that the 2 types of NDRT were perceived differently in demand level by the participants: the Addition task (M = 5.750, SD = 1.695) was perceived as a higher demand level than the Drawing task (M = 3.188, SD = 1.497). Thus, participants perceive NDRTs that require cognitive load as highly difficult, indicating that the NDRT type affects the driver's perceived demand level. In addition, the results indicate that the design of the 4 conditions was implemented as predicted, as shown by the perceived demand level results: participants perceived the Anxiety condition as having the highest demand level, followed by the Flow, Relaxation, and Apathy conditions, in that order.
These findings are consistent with the results of [6][7], which report that participants perceived the demand level of such NDRTs differently, with Anxiety high, Flow medium, and Boredom low. Although the perceived demand level in this experiment was limited to three such levels, mental workload and flow level showed different results.

The flow experience was perceived differently by the participants depending on the 4 conditions. These results indicate that the 4 conditions induce different flow experiences in the driver. Participants showed the most immersive experience in the Flow condition, followed in order by the Relaxation, Anxiety, and Apathy conditions on the FSS. This is not consistent with [8][12], which report that the Flow condition was rated as a higher flow experience than the others (the Boredom and Anxiety conditions), and which claim that the Boredom and Anxiety conditions are not assessed differently. This result shows that, when we set up skill/challenge experiments, flow can be better experienced in the Relaxation and Flow conditions influenced by motivation. In addition, our findings are consistent with [11], which claims that motivation influences the flow experience.

This experiment found that the NDRT type and the 4 conditions had an effect on mental workload. Participants experienced mental workload differently when performing the 2 types of NDRT, reporting more mental workload when performing the Addition task (M = 40.247, SD = 19.986) than the Drawing task (M = 32.784, SD = 20.505). These results are similar to the participants' differing assessments of the perceived demand level of the NDRTs. In addition, mental workload differed across the 4 conditions: participants showed the highest mental workload in the Anxiety condition, followed by the Flow condition, then the Relaxation condition, with the lowest mental workload in the Apathy condition. These results are consistent with [8], which reports that all conditions of the 3-channel model were experienced differently by participants, specifically with the Anxiety condition high, the Flow condition medium, and the Boredom condition low. However, these results are not consistent with [12], which reports that the Flow and Anxiety conditions were not experienced differently by participants. Additionally, in this study the participants experienced mental workload differently in the Apathy and Relaxation conditions.

We found that the NDRT type and the 4 conditions affect the driver's reaction time in the take-over scenarios. Participants performing the Drawing task (M = 2.975, SD = 0.594) showed significantly lower reaction times than those performing the Addition task (M = 3.173, SD = 0.678). These results indicate that reaction time differs according to the cognitive demand of the parallel task. In addition, participants showed different reaction times according to the 4 conditions: the slowest reaction time appeared in the Flow condition, whereas the fastest appeared in the Apathy condition, with the reaction times of the Relaxation and Anxiety conditions faster than Flow but slower than Apathy. Our results also show an average range of reaction times from 2.585 s to 3.665 s, similar to the take-over times ranging from 2.69 s to 3.61 s reported in [13].

In this study, the 4-channel model of flow theory was applied to derive various driver conditions in a semi-autonomous driving environment.
We then investigated whether the driver perceives the demand level differently depending on the NDRT type and condition. In addition, we investigated the effect on reaction time through a within-subject design that assessed the drivers' mental workload and flow experience. We found that the flow experience had an effect on reaction time in this experiment, and that when drivers reached the Flow condition they showed an adequate level of mental workload. These results show that the optimum experience follows when the mental workload is neither too low nor too high.
2022-05-31T18:08:22.051Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "5c76f83890e2f260cf884b6b3ef5a161999c9105", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/jje/55/Supplement/55_2H1-6/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "5c76f83890e2f260cf884b6b3ef5a161999c9105", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
53868996
pes2o/s2orc
v3-fos-license
Model-based Data Fusion in Industrial Process Instrumentation

Process sensors form an essential component of modern industrial production processes. They usually have to be operated under stringent environmental, safety, and cost constraints. Some of the key requirements on process instrumentation are: operation under harsh and varying environmental conditions, high reliability, fault tolerance, and low cost. The failure of process sensors may cause high losses due to plant breakdowns or out-of-specification products. It is therefore important to know as much as possible about the momentary states of a production process; in addition, the estimates need to have low uncertainty and high reliability. Sensor and data fusion can be key techniques to economically reach these goals and achieve higher performance than with isolated single point sensors alone. These methods enable the quantification of otherwise inaccessible quantities that cannot be deduced from a single sensor or measurement principle. Examples are the concentration measurement of ternary solutions or the tomographic estimation of spatially distributed material parameters from arrays of single point sensors.

Industrial processes are usually operated within a defined environment, although there may be very harsh conditions like temperature variations, aggressive fluids, and high humidity. There are certain limits of operation. The nominal parameters, like desired product specifications as well as normal or acceptable fluctuations, are known in advance, while unknown encounters, as in classical sensor fusion for target tracking, autonomous guidance, and battlefield surveillance, are not within the scope of operation. This makes it possible to establish a precise and specific model of the industrial process at hand. The knowledge contained in the process model can then be fruitfully exploited in model-based data fusion. Generally, model-based approaches reach beyond straightforward methods like physical redundancy with majority voting or heuristic filtering operations. The achievable measurement precision, as well as the decision-making reliability, is usually higher in model-based approaches, due to the additional regularization of the state or hypothesis space that is achieved with an appropriate model. At the same time, however, special care needs to be taken to choose a model that allows for sufficient and representative variation. This is the only way the inherent variability of a process can be adequately represented.

The well-known JDL (Joint Directors of Laboratories) data fusion process model is a popular and useful conceptual framework for the classification and comparison of data fusion approaches (Hall & Llinas, 1997), (Varshney, 1997), (Macii et al., 2008). It was originally developed with military applications like surveillance and target tracking in mind. It has been pointed out that the JDL model does not fully address data fusion problems from non-military areas, like image fusion. However, a classification of such fusion algorithms may be useful for a common understanding. After a preprocessing stage the JDL model distinguishes four levels of processing:
• Object refinement (level 1)
• Situation refinement (level 2)
• Threat refinement (level 3)
• Process refinement (level 4)
Due to the frequent availability of a well-defined process description, the model-based data fusion approaches in industrial process instrumentation are classified as level 1 processes in most cases.
In this object refinement stage, parametric, locational, and identity information are combined. Major functions are the transformation and alignment of data to a suitable reference frame, and the estimation and prediction of states. In terms of process instrumentation, the desired outputs of the object refinement process are quantitative and unambiguous figures. Due to the precise knowledge of the desired target quantities, these figures can be straightforwardly used for an objective assessment of the industrial process. The higher levels of the JDL model therefore do not play as important a role in industrial as in military applications.

According to another popular characterization of data fusion approaches, the fusion can take place at different stages of the signal processing chain. A common categorization uses three levels (Varshney, 1997), (Hall et al., 1999):
• Data-level fusion
• Feature-level fusion
• Decision-level fusion

In data-level fusion the raw data from each of the sensors are combined. In this context raw data are single measurements like temperature or pressure readings. All further processing is based on the totality of data. This approach is able to yield the most accurate results, but it requires the sensors to be commensurate, i.e. that the different data can be processed in a common framework. If the data are in different regimes they have to be registered first, e.g. through coordinate transformations. Data-level fusion requires centralized data processing, since the totality of raw data has to be simultaneously available. A high communication bandwidth is necessary, since all raw data have to be transferred.

In feature-level fusion the raw data of each sensor are processed locally. A feature vector is generated from the corresponding observations. Features can, e.g., be volume fractions of materials, flow rates, or flow profiles, which are derived from multiple single point measurements. The different vectors are then fused to give a single feature vector. The necessary communication bandwidth is reduced compared to data-level fusion. However, the generation of the single feature vectors results in some data loss in general.

Finally, in decision-level fusion, each sensor derives higher-level decisions from its own data and features. A decision could be whether there is a process malfunction or not. The individual decisions are finally fused by some sort of voting to give the final inference. The high bandwidth required in data-level fusion may be a severe disadvantage in large scale distributed target tracking applications, but is not regarded as a major issue in process instrumentation. Sensors that share a common or similar state space or reference frame are usually installed in close vicinity. In distributed production environments the derived feature and decision information is passed on to the control room.

This chapter is intended to give an overview of applications of model-based data fusion in the context of industrial instrumentation and process monitoring. The diverse examples addressed are grouped according to a classification into uniquely determined, overdetermined, underdetermined, or sequential data fusion (Tanner, 2003). Data fusion may already take place in instruments that are perceived as single-sensor installations from an outside perspective (Ruhm, 2007). Examples are sensors that rely on additional measured quantities to compensate for unwanted cross-sensitivities. Internal temperature compensation of the primary measurand, e.g., is indispensable in many instruments.
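A minimal sketch of such an internal compensation, where an auxiliary temperature reading corrects the primary measurand; the reference temperature and sensitivity coefficient below are illustrative assumptions, not values from a real instrument.

```python
def compensate_temperature(raw: float, temp_c: float,
                           t_ref: float = 25.0, k: float = -0.002) -> float:
    """Correct a primary reading for its temperature cross-sensitivity.
    t_ref is the calibration temperature; k is the (assumed) sensitivity
    of the reading to temperature, in units of the reading per kelvin."""
    return raw - k * (temp_c - t_ref)

# Example: a pressure reading assumed to drift by -0.002 bar/K from 25 degC
print(compensate_temperature(raw=1.013, temp_c=40.0))
```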
A next step is data fusion of multiple independent and non-redundant sensors to compensate for cross-sensitivities among the primary measurands that can be explicitly modelled. A typical application is the concentration measurement in ternary or multinary solutions. A characteristic feature of this class of sensor fusion problems is that the number of unknown parameters can be uniquely determined from the number of input quantities of the fusion process.

A concept more easily perceived as data fusion is the combination of identical parallel sensors to provide redundant information. This leads to a higher degree of security and reduces measurement uncertainty. The use of such redundant sensor arrays has a long tradition in industrial process instrumentation, as sensor breakdowns and false decisions can have dramatic implications. Even single sensor failures may cause malfunctions of large-scale facilities like power plants if security issues are not properly addressed in the system architecture.

A slightly different approach than redundancy maximization is the use of identical sensors in a specific spatial configuration. By using appropriate models it is then possible to deduce quantities that cannot be measured with a single point sensor of the same measurement principle. Further benefits are increased accuracy of the estimates and powerful error compensation without additional sensors. This approach is convincing if very simple and cost-efficient elements can be used, and will be illustrated by means of capacitive and magnetic sensors for angular position measurement. In these cases the number of measurement values exceeds the number of unknown parameters to be determined.

The third class of data fusion applications covers the converse case, where the number of independent measurements from a homogeneous sensor array is actually smaller than the number of unknown parameters. This occurs when spatially varying material parameters are to be determined using industrial process tomography. The number of measurements obtained at the boundary of a problem domain is limited compared to the appropriate discretization of the domain. This leads to ill-posed inverse problems. The incorporation of additional prior knowledge through the process model is essential in order to obtain a meaningful solution of the problem. Here the concept of model-based measurement reaches its peak relevance as a framework for multi-sensor data fusion. Solution strategies for tomographic problems are introduced in the corresponding section, and the importance of the process model is discussed. Besides single-modality tomographic data fusion, multi-modality fusion is also addressed. For this case a sequential fusion process is proposed.

In the subsequent sections, grouped according to the uniqueness of the data fusion solution, several model formulations will be introduced. These model formulations range from static to dynamic, from ideal to those including nuisance effects and noise, and from explicit to implicit. Different fusion algorithms, including response surfaces, stochastic filters, and sequential fusion, will also be addressed within the respective applications.

Error compensation

Measurement processes and the superordinate fusion algorithms can be modelled in various ways. Depending on the actual situation and the aim of the measurement, they may be formulated as static or dynamic systems.
For certain applications a static model may be sufficient, although virtually every sensor shows some kind of dynamic behaviour. For the following considerations we assume a vector of two or more measurements as input to a nonlinear static fusion relation f. The output of the fusion process is the scalar variable x. This kind of data fusion can already be found in most modern process instruments. In the simplest case a single input variable is directly related to the output of the fusion relation and defines the primary measurement equation.

An illustrative example that can be found throughout the process industries is density measurement. A classical instrument for that purpose is the vibrating tube densimeter (Ihmels et al., 2000), (Laznickova & Huemer, 1998). The operating principle, based on a spring-mass system, is sketched in figure 1 using a U-shaped tube. The process fluid under test flows through a metal or glass tube that is decoupled from the surroundings through a base mass. In order to measure the fluid density, the tube is excited by a force acting on the bend. The tube then vibrates at its resonant frequency, which is sensed by the pick-up mechanism. For a tube of mass m_t, inner volume V, and spring constant k, the period of oscillation is

τ = 2π √((m_t + ρV)/k).    (2)

As the volume of the tube is constant and known, the measurement is proportional to the fluid density. Equation (2) can be reformulated to give the density ρ as a function of the period of oscillation,

ρ = A τ² − B,    (3)

where A and B are constants. In practice, equation (3) is not sufficient to obtain accurate results over a wide operating range, as there are several other factors influencing the period of oscillation. Temperature changes lead to changes in the tube volume and the spring constant; the same holds for changes of the fluid pressure p. To compensate for these effects, a polynomial expansion can be applied to the constants in the measurement equation, e.g.

A(T, p) = A₀ + A₁T + A₂p,    (4)
B(T, p) = B₀ + B₁T + B₂p.    (5)

For increased accuracy, mixed terms can also be included. Combining equations (3)–(5), we obtain the fusion equation ρ = A(T, p) τ² − B(T, p), which compensates for the nuisance effects of temperature and pressure on the density reading. As the accuracy requirements increase, even further influences need to be compensated for. Fluid viscosity, as an example, is known to have a small effect on the period of oscillation (Krasser & Senn, 2007).

Multidimensional parameter estimation

In the error compensation case, several auxiliary measurands act on individual scalar output quantities in a unidirectional way. A more complex fusion process can be introduced if a vector of output quantities is introduced in equation (1). Then the input parameters can interact with all of the outputs simultaneously. This allows for the use of more flexible and powerful fusion methods. Figure 2 shows flow charts of the scalar and multidimensional data fusion approaches. The data fusion mapping f can range from a matrix in the linear case to arbitrary response surface methods, e.g. based on polynomial expansions.

The concentration measurement of ternary and multinary solutions is a typical problem in this class of fusion procedures. The density measurement introduced in section 2.1 is often used for concentration measurement of binary solutions. However, if there are more than two components, only the sum of the contributions of the individual components can be measured. The measurement problem may be resolved by fusing suitable methods for binary concentration measurement. As a representative application, the measurement of extract and alcohol concentration in beer production is presented (Vasarhelyi, 1977).
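Before turning to the ternary example, here is a minimal sketch of the compensated density fusion of equations (3)–(5); all coefficient values are illustrative assumptions, not calibration data for a real instrument.

```python
def density(tau_s: float, temp_c: float, p_bar: float) -> float:
    """Vibrating-tube density fusion: rho = A(T, p) * tau^2 - B(T, p),
    with first-order polynomial expansions of A and B in temperature
    and pressure. All coefficients below are illustrative."""
    A = 2.0e5 * (1.0 + 1.5e-5 * (temp_c - 20.0) - 2.0e-6 * (p_bar - 1.0))
    B = 1.2e3 * (1.0 + 3.0e-5 * (temp_c - 20.0))
    return A * tau_s**2 - B   # density in kg/m^3

# Example: period of 105 ms at 25 degC and 1 bar
print(density(tau_s=0.105, temp_c=25.0, p_bar=1.0))
```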
Beer is basically a ternary mixture of water, extract, and alcohol, so density measurement alone is not sufficient. The same is true for sound velocity and refractive index measurements, which are also classical methods for density and concentration determination (Hauptmann et al., 2002). If the response curves of two measured quantities to extract and alcohol variations are linearly independent in a certain range, the problem can be solved with data fusion: both quantities can then be uniquely determined from the primary measurements. The procedure is sketched in figure 3. The example qualitatively shows the extract and alcohol determination from density and sound velocity measurements. It can be seen that an extract change increases both sound velocity and density. On the contrary, a change of the alcohol concentration increases sound velocity and decreases density. This orthogonality allows for the unique determination of the target quantities. The inversion procedure corresponds to the transformation of a point in the Euclidean density/sound velocity space to a different, in general curvilinear, coordinate system.

Sensor fault detection and isolation

One of the most prominent applications of data fusion in industrial process instrumentation is sensor fault detection and isolation (FDI). It is of fundamental importance in all safety-critical applications. The typical approach is to compute residuals, i.e. differences between the measured sensor signals and estimates of their fault-free values (Simani, 2000). Faults can then be detected by comparing the residuals with thresholds that are suitably defined with respect to normal sensor operation. Finally, the faulty sensors are isolated by analyzing the different residuals.

Physical and analytical redundancy are the two possible types of redundancy necessary for the calculation of residuals. Physical redundancy means the use of several sensors measuring the same physical quantity in parallel. Analytical redundancy is concerned with the application of analytical models in order to produce estimates of sensor signals from other sources of data. It requires precise knowledge of the underlying process. Typically applied in this category are stochastic filters and observers like the Kalman filter. The application study in section 3.2 treats a sensor system using an extended Kalman filter with integrated FDI functionality. Physical redundancy is more general, as it does not rely on such information. However, it is more expensive to implement, as multiple sensors for every quantity to be measured need to be employed.

Residuals for a redundant set of sensors are computed by comparing each sensor signal with an estimate of the true value of the measured physical quantity ς. A straightforward linear measurement and fault model can be used to relate the true value to the measurement z_j of sensor j. In the fault-free case the expectation of the residuals is zero, and the standard deviation depends on the individual sensor standard deviations and the number of sensors. In the case of a single instrument error, the residual of this sensor has a mean equal to the fault amplitude, while the other residuals show a mean that is lower by a factor of 1/(n − 1). With this information, suitable thresholds for the detection of occurring faults can be easily established. Other possibilities for the detection of faults include the calculation of statistical, spectral, and temporal characteristics of the residuals, as well as artificial intelligence methods.
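A minimal sketch of the residual-and-threshold logic just described for a physically redundant sensor set; here the true value is estimated by the median (a design choice that keeps a single fault from biasing the estimate), and the 3σ threshold is an illustrative assumption.

```python
import numpy as np

def detect_faults(z, sigma, k=3.0):
    """Residual-based FDI for a redundant sensor set measuring the same
    physical quantity. Residuals are r_j = z_j - estimate, where the
    estimate is the median of all readings; sensor j is isolated when
    |r_j| exceeds k * sigma (k = 3 is an illustrative threshold)."""
    z = np.asarray(z, dtype=float)
    residuals = z - np.median(z)
    return np.flatnonzero(np.abs(residuals) > k * sigma)

# Example: four redundant temperature sensors, sensor 2 has drifted
print(detect_faults([20.1, 19.9, 23.5, 20.0], sigma=0.1))  # -> [2]
```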
The use of fuzzy logic allows for a flexible integration of different aspects of failure modes with empirical knowledge for which analytical models are difficult to define (Park & Lee, 1993). Input quantities of the system, like differences of sensor values and other characteristics, are fuzzified using linguistic variables. An example is shown in figure 4. The differences of the sensor readings are normalized according to the standard uncertainties of the sensors. In the shown example, three membership functions 'small', 'medium', and 'large', based on Gaussian and spline functions, are employed for the fuzzification (Steiner & Schweighofer, 2006). Further input variables can be elaborated from other observations related to error and fault occurrence. The actual fault model is contained in the rule base that is used for fuzzy inference. Rules are formulated as if-then relations, e.g.

IF residual 1 = small AND residual 2 = small THEN operation = normal.    (10)

The de-fuzzification stage of the Mamdani-type fuzzy system finally yields quantitative measures of the output fault parameters. Fuzzy systems have the advantage that they can easily be extended with more detailed application-specific process knowledge.

Integrated sensor arrays

With suitably designed sensor arrays and data fusion algorithms, the tasks of error compensation and fault detection and isolation can be readily combined within a single instrument. Another potential feature is the highly accurate estimation of quantities of interest through the combination of multiple low accuracy sensing elements. The achievable performance is illustrated in the following by means of an integrated smart capacitive sensor array for the determination of angular position and speed (Watzenig et al., 2003), (Watzenig & Steiner, 2004). The approach is based on the use of the extended Kalman filter (EKF), which is frequently used for data fusion in applications like target tracking. It offers a powerful framework also for industrial applications of data fusion. In contrast to the methods introduced so far in this chapter, it is based on dynamical state space models and makes it possible to monitor and exploit dynamical effects of the involved sensors and processes.

The capacitive angular position sensor consists of a rotor mounted coaxially between two stator plates. One stator plate corresponds to the transmitter; it is divided into 16 segments, which are electrically isolated from each other. The other stator contains the receiving electrode. The two stator plates are both bounded by an inner and an outer guarding ring connected to ground potential. The electrically conductive rotor is also grounded. It affects the coupling capacitances between the transmitter segments and the receiving electrode, dependent on its angular position. The sinusoidal rotor shape yields a sinusoidal capacitance distribution. Segment driver and receiver electronics ensure that the received voltage signal amplitude is linearly dependent on the coupling capacitances. Figure 5 illustrates the axial view of a capacitive angular position sensor with approximately sinusoidal capacitance variation. In particular, a four-blade rotor in front of the transmitting electrode with 16 segments is shown. The receiving electrode is similar to the transmitting electrode, but without segmentation. The sensor array thus consists of 16 simple low resolution measurement channels which acquire data at different spatial orientations.
The sensible combination of the channel data allows for the accurate estimation of the angular position. The same principle can be applied to the measurement of other quantities like inclination angle, torque, liquid level, and flow. It can also be used with other physical sensing effects.

Fig. 5. Axial view of a capacitive angular position sensor. A four-blade rotor with a sinusoidal rotor shape is placed in front of the transmitting electrode, which is divided into 16 segments. The receiving electrode above the rotor is not shown.

The data fusion algorithm is based on the EKF. Therefore a discrete-time state space model of the sensor array is introduced. It is derived from a continuous second order system using a sampling interval T. The state vector x_k at time step k is composed of the angular position φ_k and the angular speed ω_k. The measurement equation (12) relates the state vector to the vector z_k containing the 16 segment readings through the nonlinear measurement function h. Due to the special shape of the rotor, the segment voltages are basically phase-shifted sinusoids as a function of the angular position. Both state and measurement equations are corrupted by process and measurement noise sequences w and v, respectively. They are assumed to be uncorrelated white Gaussian sequences with covariance matrices Q and R.

The EKF recursively estimates the process states based on the current values of the states and the state error covariance matrix P_k. The algorithm can be grouped into a prediction step and a correction step. The gain matrix is then used to update the state estimate using the current measurement. The term in brackets, which is the difference between the actual measurement and the estimated measurement, is called the innovation sequence. It is a sensitive indicator of differences between a fault-free model and the current system. If the measurements and the model are in agreement, it has a mean of zero. Thereby Kalman filtering can also conveniently be used for fault detection and isolation using analytical redundancy.

For the present application the innovation sequence s_k can be used to compensate for occurring segment offsets. The measurement equation (12) can be extended by offset voltages ξ for the individual segments. The offset values can be estimated without increasing the size of the Kalman filter equation systems by directly integrating the innovation sequence. The choice of the integration time constant determines the trade-off between the smoothness of the estimates and the bandwidth of the offset compensation. The relation of the discrete-time EKF to the capacitive sensor system is illustrated in figure 6. The 16 measured segment voltages are used as inputs to the EKF. Based on the fusion of all signals, the EKF is able to calculate accurate estimates of angular position and speed.

A further approach to perform error compensation for the capacitive sensor array, capable of handling additional errors like line faults, short circuits, driver failures, and electromagnetic disturbances, is to use several parallel Kalman filters and integrate the estimates in an additional data fusion step. Decentralized Kalman filtering also drastically reduces the computational requirements for the signal processing of the sensor array. In the present case the 16 measurement signals can be used pairwise as inputs to eight parallel EKFs. The inputs are wired in such a way that a single EKF processes signals that have a phase shift of 90°.
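A minimal sketch of the EKF described above, assuming a constant-angular-velocity state model and phase-shifted sinusoidal segment voltages; blade count, voltage levels, noise levels, and sample time are illustrative assumptions.

```python
import numpy as np

P_BLADES, N_SEG, T = 4, 16, 1e-3                  # rotor blades, segments, sample time
SEG_ANGLES = 2 * np.pi * np.arange(N_SEG) / N_SEG

def h(x):
    """Measurement model: phase-shifted sinusoids of the rotor position."""
    return 1.0 + 0.5 * np.cos(P_BLADES * (x[0] - SEG_ANGLES))

def H_jac(x):
    """Jacobian of h with respect to the state [phi, omega]."""
    dphi = -0.5 * P_BLADES * np.sin(P_BLADES * (x[0] - SEG_ANGLES))
    return np.column_stack([dphi, np.zeros(N_SEG)])

F = np.array([[1.0, T], [0.0, 1.0]])   # phi_{k+1} = phi_k + T * omega_k
Q = np.diag([1e-8, 1e-4])              # process noise covariance
R = np.eye(N_SEG) * 1e-4               # measurement noise covariance

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # prediction
    H = H_jac(x)
    s = z - h(x)                                   # innovation sequence
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return x + K @ s, (np.eye(2) - K @ H) @ P      # correction

# Example: track a rotor spinning at 10 rad/s from noisy segment voltages.
# With four blades the position is only observable modulo one blade period.
rng = np.random.default_rng(0)
x_est, P_est = np.array([0.0, 0.0]), np.eye(2)
for k in range(2000):
    z = h([10.0 * k * T, 0.0]) + 0.01 * rng.standard_normal(N_SEG)
    x_est, P_est = ekf_step(x_est, P_est, z)
print(x_est)  # estimated [angular position, angular speed]
```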
The block diagram of the whole sensor system, including the final fusion stage, is sketched in figure 8. All filters operate on the same process model, but use different measurement equations. Assuming equivalent error covariances for all filters, the decentralized fusion can be done by averaging the eight Kalman filter state estimates. This assumption holds as long as no segment fault occurs. The simple averaging does not consider the correlation of the single Kalman filter estimates due to the common process noise states. However, a comparison with the optimal fusion filter (Hashemipour et al., 1998) applied to the same problem shows only minor differences. The computational cost of the proposed averaging is very low compared to the optimal filter, and numerical problems inherent in optimal fusion are avoided.

The availability of eight state estimates in parallel can be exploited for fault detection. The correlation between the different outputs and the final averaged estimate provides a confidence figure (an estimate for the variance) for each filter output. The last N measurements are used for the calculation of the mean square difference. In order to exclude a faulty filter, some threshold level must be defined. This threshold level should be adaptive, in the sense that only a segment that is significantly worse than the others is eliminated. A possible choice is the mean of the confidence figures scaled with a tuning parameter α.

The performance of the decentralized filtering approach to data fusion with the capacitive sensor array is demonstrated with two failure modes: segment drift and segment disturbance. In capacitive sensing a line break does not necessarily lead to a full breakdown of the segment signal, because the capacitive coupling over a break is still considerable. Since this coupling may depend on external influences such as temperature or vibrations, a noisy segment signal as shown in figure 9 may occur. The noise contamination of one signal instantaneously affects the position estimate of the corresponding filter. The confidence figure of this filter quickly rises above the threshold, so that the corrupted EKF is excluded from the calculation of the final estimate. It can be included again when its variance estimate falls below the threshold again. Another failure mode is demonstrated in figure 10: the offset of one segment continuously increases. This occurs in practical implementations when, e.g., water drops or dirt accumulate on a segment, because the high conductivity or permittivity of such a contamination leads to an amplification of the signal. Again, this error is quickly detected, long before even a very restrictive range checking algorithm would have a chance to detect the problem. Consequently, the angular position estimate remains unaffected by the disturbance.

Ill-posed measurement processes

Industrial process tomography

In some measurement problems the available data are not sufficient to fully characterize the process at hand. This is often the case in distributed scenarios, where spatially varying quantities that cannot be measured directly need to be resolved. For example, tomographic measurement techniques are able to provide two-dimensional or three-dimensional information about the internal states of industrial processes. The knowledge of the internal behavior of such processes can be used for process design, prediction, and process control in order to increase product quality and process efficiency.
Depending on the application, different tomographic sensing modalities have been developed for industrial purposes, e.g. electrical capacitance tomography (ECT), electrical resistance tomography (ERT), ultrasonic reflection tomography (URT), positron emission tomography (PET), and X-ray tomography (Scott & McCann, 2005), (Plaskowski et al., 1995). These techniques have in common that the spatially distributed parameters are reconstructed from a limited number of measurements. The sensors are usually distributed around the boundary of the problem domain. The fusion of this boundary data in order to obtain estimates of the spatial distributions of the quantities of interest is an ill-posed inverse problem. This implies that there is no unique solution to the problem. Ill-posed problems are very sensitive to noise, and special measures need to be taken to obtain a stable, meaningful solution (Kak & Slaney, 2001). This includes the choice of a suitable model of the measurement process and the incorporation of available prior knowledge through regularization. A multitude of reconstruction algorithms has been proposed, ranging from simple linear backprojection methods to model-based approaches built on nonlinear optimization methods and on stochastic filtering methods like extended Kalman filters and particle filters.

Electrical capacitance tomography serves as a representative example. The objective in industrial ECT is to estimate the dielectric properties of heterogeneous mixtures, or distinct transitions between occurring phases, based on capacitance measurements between certain electrodes at the boundary of a closed container like a pipeline. Figure 11 illustrates the schematic of an ECT sensor based on the measurement of displacement currents (Wegleiter et al., 2005). The cross-section of a non-conducting pipe is used as the measurement plane. 16 electrodes are evenly spaced around the circumference of the pipe. The setup is protected from electromagnetic interference by a grounded outer shield. Every single electrode can be used alternately as a transmitter and a receiver. The front-end electronics of an electrode consist of a transmitting amplifier and an input stage comprising a current-to-voltage converter, a bandpass filter, and a high frequency peak rectifier. A single measurement frame consists of 16 projections, according to the 16 available transmitting electrodes. For one projection a specific electrode acts as transmitter while all the others sense the displacement current. A measurement frame consequently consists of 16 × 15 = 240 entries.

The reconstruction of the permittivity distribution within the pipe from the boundary measurements requires a mathematical model of the measurement process. This forward model establishes the functional mapping between the cross-sectional material distribution ε_r(x, y) and the measured displacement currents q. Under the assumption of non-conducting materials and negligible magnetic field and wave propagation effects, it can be modelled as an electrostatic field problem in the interior of the screen. This leads to the generalized Laplace equation

∇ · (ε_r ∇v) = 0    (20)

for the electric potential v (Watzenig et al., 2007b).

Fig. 11. Measurement configuration and schematic of a typical ECT sensor. The measurement electrodes are placed around the pipe containing the imaging domain. Every electrode features dedicated transmitting and receiving hardware. The acquired data is transferred to a PC where the signal processing is performed.

Equation (20), together with Dirichlet boundary conditions on the electrodes,
is numerically solved using the finite element method (FEM). The problem domain is thereby discretized into n triangular finite elements, where the permittivity is assumed constant within a single element. The discretization of equation (20) leads to a linear equation system for the vector v of the potentials of the finite element nodes. The stiffness matrix K reflects the geometry of the problem and the permittivities of the finite elements. Due to the sparsity of the stiffness matrix, the equation system can be solved efficiently using specialized algorithms. However, in the context of industrial process tomography a compromise between the spatial resolution and accuracy of the finite element mesh and the computation time still has to be found. Figure 12 shows one quadrant of a typical finite element discretization. The domain is bounded by the outer screen. The interior contains the electrodes, the pipe, and the imaging plane. The whole interior of the pipe is segmented into 316 elements, which is the number of unknown parameters to be reconstructed.

The unknown permittivity distribution can be estimated from the measured displacement currents by inverting the known measurement relation. A key issue associated with this inverse problem is its ill-posedness. This basically means that there is no unique solution and that the solution does not depend continuously on the data. The available reconstruction methods can generally be classified into non-iterative and iterative algorithms. Non-iterative methods assume a linear relationship between permittivities and displacement currents through the sensitivity matrix S. In this case the inverse problem can be solved by minimizing a least squares cost functional of the form

Φ(ε_r) = ||q_m − q(ε_r)||² + α R(ε_r).    (25)

The minimization can be performed using the Gauss-Newton algorithm. A flow chart of the procedure, starting from an initial guess for the permittivity distribution, is sketched in figure 13. The minimization is terminated when the residual is below a predefined threshold. The first term on the right hand side of equation (25) is the sum of squared errors between the measured displacement currents q_m and the simulated values. The minimization of this term alone would not yield sufficient results, due to the ill-posedness of the inverse problem. Therefore a second term, the regularization term R, has to be added in order to stabilize the solution. The relative weight of the two terms is controlled by the regularization parameter α. The assumptions that are placed in this term introduce prior knowledge about the assumed material distributions and can take various forms. A popular choice in the absence of specialized knowledge is generalized Tikhonov regularization, R(ε_r) = ||L ε_r||². The regularization matrix L is a discrete approximation of the Laplace operator, leading to high values of R for jumps between neighboring finite elements. Consequently, this choice leads to a smoothing of the reconstructed permittivity distribution.

Figure 14 illustrates an ECT sensor with a predefined two-phase material composition of gravel and air (left). The reconstructed cross-sectional material distribution based on the described least squares reconstruction method with Tikhonov regularization is shown on the right. The relative permittivity values are coded in gray scale. Depending on the industrial process at hand, models of the material distribution other than those described may replicate the situation more accurately.
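Before discussing such alternative material-distribution models, here is a minimal sketch of one Gauss-Newton update for the Tikhonov-regularized functional (25), under the usual linearization q(ε₀ + Δ) ≈ q(ε₀) + SΔ. The toy forward model and all dimensions below are illustrative; a real ECT solver would recompute S from the FEM model at each iterate.

```python
import numpy as np

def gauss_newton_step(eps, q_m, forward, S, L, alpha):
    """One update minimizing ||q_m - q(eps)||^2 + alpha * ||L eps||^2.
    forward(eps) simulates the boundary data, S is the sensitivity
    (Jacobian) matrix, and L a discrete Laplacian over the elements."""
    r = q_m - forward(eps)
    A = S.T @ S + alpha * (L.T @ L)          # regularized normal equations
    b = S.T @ r - alpha * (L.T @ (L @ eps))
    return eps + np.linalg.solve(A, b)

# Toy example: 240 measurements, 316 elements, identity in place of L
rng = np.random.default_rng(1)
S = rng.standard_normal((240, 316))
L = np.eye(316)
eps_true = rng.random(316)
q_m = S @ eps_true
eps = np.zeros(316)
for _ in range(5):
    eps = gauss_newton_step(eps, q_m, lambda e: S @ e, S, L, alpha=1e-2)
```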
Such process-specific models allow more detailed prior knowledge to be incorporated and help to obtain accurate reconstruction results. Usually the materials involved in a process are known in advance. This sets constraints on the admissible parameter range (Steiner & Watzenig, 2008). When the distribution is piece-wise constant, as in discrete multi-phase flow, closed contour models are appropriate. Boundaries between different materials are explicitly modelled, which introduces process-specific additional prior knowledge. Contour models can, e.g., be based on polynomial splines, Fourier series expansions, and level set functions (Kortschak et al., 2007), (Watzenig et al., 2007a). Closed Fourier contours in two dimensions can be obtained through parameterizations of the x and y coordinates with period T = 1, e.g.

x(t) = a₀ + Σ_k [a_k cos(2πkt) + b_k sin(2πkt)];

the parameterization is similar for the other coordinate. The complexity of the shapes that can be modelled can be increased by using more terms. The reconstruction can be performed by fitting the model parameters to the measurements, similar to the bulk model based on the finite element discretization.

Another sensing modality for industrial process tomography is ultrasound reflection tomography (URT). A common approach records reflections of transmitted ultrasonic waves at material boundaries. The measured travel times of reflected waves from many different directions can be used to reconstruct the locations and contours of material inhomogeneities. Also for URT there are different reconstruction approaches, from simple backprojection to model-based approaches utilizing specific contour models. Results obtained with simple backprojection, and with a B-spline-based contour model and least squares minimization, are illustrated in figure 15. The backprojection algorithm can be applied more generally to a wide range of problems; however, the results suffer from noise and blurring. If the process can be clearly characterized, a proper model can lead to much better results, as demonstrated in the right subfigure.

Tomographic sensor fusion

Multimodality tomography systems combine two or more different sensing modalities. The rationale is to increase the reconstruction accuracy by data fusion of complementary data. If sensibly combined, the multimodal data may contain more information about the state of the imaging domain than could be achieved with a single sensing modality alone. Multimodal sensors therefore offer the possibility to monitor complex processes that cannot be dealt with by single modalities. They can be fruitfully applied to three-phase flow, like the oil-gas-water flow occurring in oil production, where a single modality only gives good contrast for two of the involved materials. ECT and URT, e.g., are well suited for data fusion. The main motivation is that the electrical modalities are sensitive to the bulk properties of materials while ultrasound is sensitive to phase boundaries, yielding the desired complementarity. So URT can give accurate boundary information not available with electrical tomography, and electrical tomography can give information about connected volumes not achievable with URT alone (Steiner, 2006).

The most widely used principle of dual modality data fusion in terms of the data flow is sequential coupling of the modalities, which is illustrated in figure 16. After individual acquisition of the raw data with the two sensor arrays, the inversion of the first modality is performed independently. The result is then used as additional input for the reconstruction of the second modality.
This can be seen as providing a priori knowledge about the process state for the second stage. Another option for the combination of two modalities is parallel processing of the totality of raw data. However, this raises serious issues of data association (Steiner, 2007).

For the particular combination of sequential URT-ECT fusion, the URT reconstruction can be used to deduce an outer approximation of the inclusion region containing the disperse phase of the material distribution, i.e. to uniquely assign parts of the imaging domain to the background region. If used in the subsequent ECT reconstruction, this information reduces the degrees of freedom of the inverse problem and thus the ill-posedness of the problem. Another approach is to use the incomplete information about object edges in the URT image to relax the smoothness assumption of ECT incorporated by the regularization term. This provides physically sound regularization, as locations with a high probability of material interfaces are allowed to show steeper permittivity gradients. A combination of these two sequential fusion approaches is compared to a single ECT reconstruction of two objects in figure 17. The left picture shows the ECT result, where the two objects cannot be resolved by the least squares reconstruction algorithm with smoothing Tikhonov regularization. ECT is least sensitive in the center of the imaging region, leading to a low spatial resolution. In contrast, URT offers the highest sensitivity in the central region. The fusion reconstruction, due to the additional prior information from URT, distinguishes clearly between the two material inclusions.

A URT image as prior information is able to supply information about material boundaries. This can straightforwardly be included in contour-based reconstruction algorithms for ECT. An example result is shown in figure 18. The capacitance data was reconstructed using a level set approach, where the contour is generated from equipotential curves of a two-dimensional function. A common regularization approach in this case is to penalize the arc length of the contour. The URT reconstruction can be added to the regularization term, forcing the level set contour towards the URT reconstruction while still allowing for deviations. The ECT reconstruction of two bubbles in figure 18 shows some blurring compared to the true object contours. The URT reconstruction, containing a sparse collection of points located just at the object boundaries, gives the extra information needed to allow for a close match of the true contours.

Conclusion

With increasing demands on the quality and efficiency of industrial processes, as well as environmental and safety regulations, industrial process instrumentation is required to acquire more accurate and comprehensive information. As the complexity of industrial processes increases, the same holds for the instrumentation. Data fusion techniques maximize the amount of useful information that can be extracted from raw sensor data. This chapter gives an overview of model-based data fusion methods used in industrial process instrumentation, with several typical application examples. They are intended to demonstrate the wide range of data fusion applications and are grouped in ascending complexity: from error compensation to multidimensional parameter estimation, sensor fault detection and isolation, integrated sensor arrays, industrial process tomography, and tomographic data fusion.
It is expected that the consistent use of process models and data fusion methods will allow for an even more comprehensive and accurate characterization of industrial processes in the future.

Acknowledgement

This work was partially funded by the Austrian Science Fund (FWF) through the Translational Research Project L261-N04.
2018-11-18T14:03:01.850Z
2009-02-01T00:00:00.000
{ "year": 2009, "sha1": "d34443880e7dff3591b74bbfbe1fbecc6634b0b4", "oa_license": "CCBYNC", "oa_url": "https://www.intechopen.com/citation-pdf-url/6088", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "68b0e07d8256b90bd0dfbf4a8bb93ea742f0e370", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
3813569
pes2o/s2orc
v3-fos-license
Chiral Huygens metasurfaces for nonlinear structuring of linearly polarized light

We report on a chiral nanostructure, which we term a "butterfly nanoantenna," that, when used in a metasurface, allows the direct conversion of a linearly polarized beam into a nonlinear optical far-field of arbitrary complexity. The butterfly nanoantenna exhibits field enhancement in its gap for every incident linear polarization, which can be exploited to drive nonlinear optical emitters within the gap, for the structuring of light within a frequency range not accessible by linear plasmonics. As the polarization, phase, and amplitude of the field in the gap are highly controlled, nonlinear emitters within the gap behave as an idealized Huygens source. A general framework is thereby proposed wherein the butterfly nanoantennas can be arranged on a surface to produce a highly structured far-field nonlinear optical beam with high purity. A third harmonic Laguerre-Gauss beam carrying an optical orbital angular momentum of 41 is demonstrated as an example, through large-scale simulations, on a high-performance computing platform, of the full plasmonic metasurface with an area large enough to contain up to 3600 nanoantennas.

Introduction

Classical optical lenses gradually change the properties of light, such as phase and polarization, during its propagation. This results in voluminous devices not suitable for photonic integrated circuits. Metasurfaces and flat optics aim to overcome this limit [1,2]. Abrupt changes in the properties of light can be introduced at the sub-wavelength scale by so-called meta-atoms, which are tiny scatterers or nanoantennas. By engineering each single emitter of a metasurface, complex structured beams can be created, such as vortex beams carrying orbital angular momentum (OAM) [3,4]. The interest in the OAM of light is growing for potential applications in classical communications [5], quantum information processing and quantum cryptography [6], microscopy [7], laser machining [8], and optical manipulation and particle trapping [9]. The creation of OAM states of light from a collection of meta-atoms requires an azimuthal phase tuning, which can be obtained by changing the geometry of the meta-atoms, e.g., V-antennas with different apertures [10] and subwavelength patterning [11], or by using Archimedean spiral configurations [12,13]. As an alternative, the Pancharatnam-Berry (geometric) phase can be exploited using a q-plate [14] or a fixed meta-atom progressively rotated in a metasurface [15][16][17][18]. In this case the incoming radiation has to be circularly polarized, and the spin angular momentum (SAM) is converted to OAM by spin-orbit coupling [19,20], including inside a laser cavity for the production of high purity OAM lasing modes [21].
Flat optics concepts have recently been applied to the nonlinear regime, with the emerging interest in nonlinear metasurfaces and metamaterials [22]. Nonlinear metasurfaces based on second harmonic generation (SHG) in split-ring resonators (SRRs) have recently been demonstrated for beam shaping [23], and for focusing and beam steering [24,25]. Tuning the properties of the nonlinear emitters is fundamental for shaping the nonlinear beam phase front [26,27]. In bulk materials, nonlinear susceptibilities can be predicted from the linear susceptibility according to Miller's rule. In metamaterials and nanostructures the application of Miller's rule is not straightforward. For example, it holds for third harmonic generation (THG), but fails for SHG, which can be explained by the nonlinear scattering model [28]. Examples of nonlinear near-field control in nanostructures have recently been reported [23,[28][29][30][31]. Nonlinear emission control is even more challenging in hybrid dielectric/plasmonic nanostructures, since the nonlinear optical generation can take place in the gap of the nanoantenna due to the presence of a nonlinear material, in the nonlinear dielectric surrounding the nanostructure, or in the nanostructure material itself [32]. The understanding of the nonlinear emission from hybrid nanostructures is still under debate [33][34][35][36]. Furthermore, the nonlinear emission in bare metal nanostructures strongly depends on the nanoantenna shape, favoring threadlike over bulky shapes [37,38].

In this paper we focus on the nonlinear emission from the gap of a plasmonic nanoantenna which we call a butterfly nanoantenna. The butterfly nanoantenna exhibits uniform field enhancement in the gap for any incident linear polarization. This is due to the chirality of the nanoantenna, which exhibits field enhancement in the gap for only one circular polarization handedness. The linear field in the gap is highly controllable, with an amplitude nearly independent of the angle θ of the incident linear polarization, and a phase varying linearly with θ. This linear field can in turn be used to drive nonlinear optical processes, such as third harmonic generation (THG). This results in an almost ideal Huygens source whose emission can also be highly controlled in amplitude, polarization, and phase. The unique properties of this butterfly nanoantenna "meta-atom" allow us to engineer the far-field of a generated nonlinear optical field by designing the arrangement of thousands of such meta-atoms to create a chiral metasurface. Moreover, the nonlinear optical emission can be at frequencies that are not within the plasmonic bandwidth of the nanoantennas, allowing us to structure light at frequencies that would not otherwise be accessible via only the linear response of plasmonic devices. In this way the nanoantenna is metallic in the linear regime, to exploit plasmonic field confinement [39,40], and dielectric in the nonlinear regime, to allow almost free propagation of the nonlinear fields into the far-field [41,42].
We present a framework for the design of Huygens metasurfaces, based on the concept of the Pancharatnam-Berry phase, for the production of complex nonlinear beams carrying optical OAM at frequencies outside the plasmonic region. Using a nonlinear optical process to generate the structured light results in a far-field beam of very high purity. The inherent chirality of the butterfly meta-atoms allows us to excite the metasurface with a linearly polarized incident wave at any angle; unlike previous schemes, circularly polarized incident beams are not required. The tight control of the near-field (linear and nonlinear) allows us to scale up to high order OAM states. Our demonstration is focused on Laguerre-Gauss beams but is also valid for Hermite-Gauss beams. A full numerical simulation is necessary because the structure cannot be simplified due to the lack of symmetries. Through large scale simulations, on a high-performance platform, of thousands of gold butterfly antennas, we demonstrate the creation of a third harmonic beam with an OAM of 41.

Butterfly nanoantenna

The butterfly nanoantenna is sketched in front-view in Fig. 1. It is composed of bent metal strips of width w, thickness t, and rounded edges to minimize divergence effects in the electric field. The structure is asymmetric, i.e., L_x ≠ L_z, where L_x and L_z are the lengths of the structure along the x- and z-axis, respectively. The gap has size g, and the gap normal is oriented at θ with respect to the x-axis. We consider a gold butterfly nanoantenna uniformly embedded in a generic bulk medium for demonstration purposes. This dielectric material has the dispersion of SiO₂ and a relatively high third-order susceptibility χ(3)_diel, such that the nonlinear contribution from the gap predominates over the nonlinear contribution from gold. An example of a real material with these properties is ITO, which has a similar refractive index and can exhibit a large third order nonlinear response [43].

We perform a broadband linear analysis to find the working wavelength of the nanoantenna. The two lengths L_x and L_z are responsible for two different resonances, which create a crossing point hybridization mode (off-resonance). The crossing point can be seen in Fig. 2(a), which plots the |E_x| enhancement in the gap with respect to the incident field |E_inc| as a function of the free-space wavelength λ_0 for different linear polarization angles θ_inc of the incident wave. We sampled the field at one point in the middle of the gap; the field in the gap is nearly uniform because we work with the lowest order gap mode. A butterfly nanoantenna design that optimizes the crossing point has L_x = 300 nm, L_z = 200 nm, w = 60 nm, t = 85 nm, and g = 10 nm. We found that a length ratio of L_x/L_z = 3/2 guarantees the existence of the crossing point. For the parameters above, the crossing point occurs at the working wavelength of λ_c = 985.5 nm (ω_c = 304.2 THz), which will be taken as the wavelength of the pump signal for the nonlinear generation process. At λ_c, the enhancement in the gap |E_x|/|E_inc| is constant with varying θ_inc. In Fig. 2(b) we report the phase of E_x for different λ_0; we observe that at λ_c the phase variation is linear with a slope of −1. This indicates the chirality of the nanoantenna, left-handed (LH) in this case; for right-handed (RH) chirality we would have a slope of +1. In Fig. 2(c) we plot the phase difference between the components E_x and E_z, and observe an excursion that remains within ∼6°.
In Fig. 2(d) we plot the ratio |E_x|/|E_z| and observe that it remains close to 1. The E_x and E_z components thus have nearly the same amplitude and phase, which means the field in the gap is linearly polarized, directed at θ = 45°. The direction of the field enhancement in the gap is constrained by the small gap to be parallel to the gap normal. The single nanoantenna was simulated with periodic boundary conditions (PBCs). The use of PBCs gives a good approximation to the case where the orientation of the meta-atom in the metasurface is slowly varying [44]. The inter-distance a = 420 nm used in these calculations is sub-wavelength; a lattice spacing of the order of the wavelength would create lattice resonant modes and interferences modifying the desired phase in the gap.

The LH butterfly (L_x > L_z) exhibits optimal coupling to left circular polarization (LCP) in terms of producing field enhancement in the gap; the LH butterfly will be used throughout the paper. Exciting an LH butterfly with right circular polarization (RCP) produces negligible field enhancement in the gap. In Figs. 3(a,c) we show the field enhancement at ω_c for LCP and RCP, respectively. The field enhancement in the gap under LCP illumination is one order of magnitude higher than under RCP excitation. In Figs. 3(b,d) we give the surface charge density, showing a strong dipole oscillation in the gap for LCP excitation. In [45] we considered a similar unit cell to build a metasurface for difference frequency generation, which for simplicity was assumed square (L_x = L_z). The mirror symmetry with respect to the θ = 135° axis in that case prevented the production of a field enhancement in the gap for 135° incident linear polarization. Breaking the symmetry of the structure by using L_x ≠ L_z is fundamental to producing a field enhancement in the gap for every incident linear polarization.

The nonlinear analysis is performed by LCP continuous wave (CW) excitation at ω_c, generating a THG signal at 3ω_c by the instantaneous isotropic Kerr effect. The CW excitation allows us to isolate the THG component specifically at 3ω_c, avoiding frequency mixing and dispersive effects in the third order susceptibility. Simulations of hybrid dielectric/plasmonic nanoantennas show that χ(3)_diel ≫ χ(3)_Au is a sufficient condition for neglecting the nonlinearity in gold. The nonlinear field inherits the polarization of the linear field, and its phase is ∠E_x(3ω) = 3∠E_x(ω). The third harmonic falls in the ultraviolet range, where plasmonic effects are not present in gold, which behaves rather as an almost transparent dielectric. This makes the THG hot-spot in the gap of the butterfly nanoantenna well approximated as a Huygens source. The nonlinear dipole created in the gap of the butterfly nanoantenna can thus be used as the building element of a metasurface to produce complex nonlinear beams, e.g. carrying orbital angular momentum, as we now demonstrate in the next section.
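The phase relation ∠E(3ω) = 3∠E(ω) is what turns each gap into a programmable Huygens source. A minimal numerical sketch of this bookkeeping, assuming an idealized unit-amplitude gap field whose phase equals the gap orientation:

```python
import numpy as np

# Idealized Huygens-source bookkeeping for one butterfly: we assume the
# linear gap field has unit amplitude and a phase equal to the gap
# orientation theta; the third harmonic then carries three times that
# phase, since angle(E(3w)) = 3 * angle(E(w)) for a Kerr-type process.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # gap orientations
E_lin = 1.0 * np.exp(1j * theta)                      # linear gap field
E_thg = E_lin**3                                      # THG field, phase tripled
print(np.angle(E_thg) - 3 * theta)  # all entries are multiples of 2*pi
```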
Nonlinear metasurfaces

A beam can be envisioned as the far-field radiation produced by a distribution of Huygens sources, i.e., radiating dipoles. Tuning the amplitude, polarization and phase of the dipoles in the near field can produce a desired structured beam in the far field. Before considering metasurfaces with butterfly antennas, we first introduce a formalism to describe the distribution of idealized radiating dipoles, e.g., point-source apertures with an assigned electric field (or, alternatively, current density sources). We use Cartesian (x, z) and cylindrical coordinates (r, φ) in the xz plane, with r = x x̂ + z ẑ and φ = tan⁻¹(z/x). We consider a circular array of dipoles, with the origin located at the centre of the array. The dipoles are arranged in the xz plane following a square lattice of lattice constant a = 420 nm, as determined above. The dipoles are distributed starting from the positive x-axis following the phasor equations (Eq. 1) for r = a(n_x x̂ + n_z ẑ), where n_x and n_z are integers in {−N_d, ..., N_d} with n_x² + n_z² ≤ N_d², N_d is the number of dipoles along the radius, φ = tan⁻¹(n_z/n_x), |E| is the electric field amplitude associated with the dipole, α is the orientation of the dipoles on the positive x-axis, γ is the number of full rotations of the dipole polarization per 2π, n is the order of the nonlinearity, and i = √−1. In Figs. 4(a,b), dipole sources are depicted as arrows; the orientation of the arrow represents its polarization and the underlying color its phase. We consider α = 0 and a constant amplitude |E|. The polarization of a single dipole is directed at angle α + γφ and its phase is nγφ. We can distinguish two topological charges: γ for the polarization of the dipole and nγ for the phase. Linear and nonlinear dipoles have the same polarization, but they differ in phase and amplitude. For illustrative purposes, we show distributions of dipoles for the linear and nonlinear cases in Figs. 4(a,b), respectively, for γ = 2 and N_d = 8. The Huygens sources in the arrays of Figs. 4(a,b) can be produced with butterfly nanoantennas when they are positioned such that the gap normal is oriented along the dipole polarization, i.e., θ = α + γφ. Through this substitution, we obtain the butterfly metasurface of Fig. 4(c). In a real metasurface |E| depends on the location of the butterfly antenna relative to the illuminating beam and on the size g, which is taken as constant throughout the paper. A plane-wave excitation produces |E| ∼ constant in the gap, as predicted from Fig. 2(a). This is a good approximation to the case of illumination by a loosely focused Gaussian beam. We consider the butterflies to be embedded within a homogeneous medium for simplicity. Embedding the metasurface into the surface of the nonlinear material, with air on the other side, blue-shifts the crossing point to λ_0 = 915 nm due to the lower effective refractive index. We excite the structure by an LCP CW signal at ω_c propagating along y, producing a nonlinear beam at 3ω_c in the forward direction. The butterfly metasurface can generate structured nonlinear beams for both LCP and RCP excitations. Due to the chirality of the metasurface, the nonlinear far field generated with RCP illumination is one order of magnitude less intense. The structured beam generated under LCP excitation is due to nonlinear dipoles localized in the gaps. For RCP excitation the contributions mainly come from field enhancement outside the gap.
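To make the dipole arrangement above concrete, the following short Python sketch generates the positions, polarization angles and nonlinear phases of the circular array. It is our own illustration of the phasor description of Eq. 1; the function name and data layout are assumptions rather than part of the original formulation.

```python
import numpy as np

def dipole_array(N_d, gamma, n, alpha=0.0, a=420e-9):
    """Circular array of Huygens dipoles on a square lattice of constant a.
    Each dipole at azimuth phi is oriented at alpha + gamma*phi and carries
    the order-n nonlinear phase n*gamma*phi, as described in the text."""
    dipoles = []
    for n_x in range(-N_d, N_d + 1):
        for n_z in range(-N_d, N_d + 1):
            if n_x == 0 and n_z == 0:
                continue  # azimuth is undefined at the origin
            if n_x**2 + n_z**2 > N_d**2:
                continue  # keep only lattice sites inside the disk
            phi = np.arctan2(n_z, n_x)
            dipoles.append({"x": a * n_x,
                            "z": a * n_z,
                            "orientation": alpha + gamma * phi,  # polarization angle
                            "phase": n * gamma * phi})           # nonlinear phase
    return dipoles

# The gamma = 2, N_d = 8 arrangement of Figs. 4(a,b), with n = 3 for THG
array = dipole_array(N_d=8, gamma=2, n=3)
print(len(array), "dipoles")
```

Positioning a butterfly nanoantenna with its gap normal at each dipole's orientation angle then reproduces the metasurface of Fig. 4(c).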
In general, with the arrangement described by Eq. 1, Laguerre–Gauss (LG) modes are obtained. Working in cylindrical coordinates, we found that the OAM state l of the innermost intensity ring, appearing as a phase term e^{ilφ} in the far-field beam, is determined by n, γ and σ, where σ = ±1 for incident LCP or RCP, respectively. The OAM order can be increased in two ways: increasing γ or increasing n. Since we are considering n = 3, we investigate a specific example where we reach high-order OAM by increasing γ. We consider γ = 20 and N_d = 30 (Fig. S1, supplementary section), resulting in l = 41. When γ is large, the butterfly nanoantennas in the centre of the metasurface cannot resolve the topological charge. The far-field beam can be polished by removing the innermost butterflies that do not properly resolve the desired number of rotations, that is, by removing the nanoantennas inside the smallest circle of radius r = a√(n_x² + n_z²) satisfying 2πr/ā > 2γ (Nyquist condition), where ā ∼ a is the average distance between nanoantennas along the circle.

In the top row of Fig. 5 we show the far field from an idealized set of nonlinear dipoles, where the far field was calculated by a near-to-far transformation of the surface field distribution described by Eq. 1. The bottom row shows the nonlinear far field generated via FDTD simulations of the corresponding arrangement of butterfly nanoantennas embedded in a third-order nonlinear material. The near-field distribution at 3ω_c on an xz plane cut was used to numerically calculate the far field by near-to-far transformation. Movie 1 (Fig. S2, supplementary section) shows the time-domain evolution of an LCP plane wave interacting with the metasurface for γ = 20, in an xz plane cut through the middle of the gaps. We observe that both methods show an OAM state of l = 41. There is very good agreement between the two rows, indicating that the butterflies, even when arranged on the surface, do act as effective idealized Huygens sources. The agreement is expected to improve by locally optimizing each butterfly on the metasurface. In Fig. 5 we observe that the external far-field intensity ring carries l_2 = 79. In general, the OAM state of the external ring is l_2 = l + 2σ(γ − 1). This higher-order OAM state is due to the diffraction of a uniform plane wave by the metasurface.

Conclusion

We proposed a plasmonic nanoantenna to control nonlinear optical emission through the linear field enhancement in its gap. We used the nanoantenna as a meta-atom in Huygens metasurfaces, and we demonstrated its applicability to arbitrarily structure the far-field radiation. Due to the chirality of the nanoantenna, only one circular polarization handedness enables the field enhancement in the gap and the consequent nonlinear emission. This results in the direct conversion of a linearly polarized wave into a nonlinear beam of arbitrary complexity. A highly pure Laguerre–Gauss beam carrying an orbital angular momentum of 41 at the third harmonic of the linear exciting field was demonstrated. The beam-synthesis framework is general and opens the door to applications requiring very high purity and wavelengths not accessible to linear plasmonics. In addition to the low-order nonlinear processes investigated here, higher-order processes may be considered, such as high-harmonic generation from solids, which is very topical and could lead to the structuring of extreme-ultraviolet light.

Figure 1: Top view of the LH butterfly nanoantenna with the gap axis oriented at θ = 45°.
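The Nyquist pruning rule used above can be phrased as a simple filter. This is a minimal sketch of that criterion, with ā taken equal to the lattice constant a as an approximation; the function name is our own.

```python
import numpy as np

def keep_antenna(n_x, n_z, gamma, a=420e-9, a_bar=None):
    """True if the antenna at lattice site (n_x, n_z) resolves the
    topological charge: the circle of radius r = a*sqrt(n_x^2 + n_z^2)
    must host more than 2*gamma antennas (Nyquist condition)."""
    if a_bar is None:
        a_bar = a  # average spacing along the circle, approximated by a
    r = a * np.hypot(n_x, n_z)
    return 2 * np.pi * r / a_bar > 2 * gamma

# For gamma = 20, sites closer than ~ 20*a/pi to the centre are removed
print(keep_antenna(3, 3, gamma=20), keep_antenna(10, 10, gamma=20))  # False True
```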
Optimized Broadband Extinction Method for Retrieving 500 nm AOD with Long-Term Direct Solar Radiation: Model Test and Application

A traditional broadband extinction method is improved by introducing a cloud-screening module and an aerosol modal adjustment module; the new approach is known as the optimized broadband extinction method (OBEM). Based on OBEM, a 500 nm aerosol optical depth (AOD) database can be retrieved using long-term direct solar radiation. Comparison of the monthly average AOD from OBEM and the Chinese Sun Hazemeter Network (CSHNET) for 2006–2010 over the Beijing site shows that the retrievals and observations display a high degree of consistency. The correlation equation is Y = 1.00X − 0.01, with a correlation coefficient R of 0.83 and a root-mean-square error (RMSE) of 0.08. The largest relative error for the yearly average value is only 3.85%. It is therefore clear that OBEM can serve as an effective tool to retrieve highly accurate historical 500 nm AOD data and to harmonize them with the current observational 500 nm AOD monitored by satellites and sun-photometers. The yearly average AOD of Beijing in 1993–2010 is 0.58 ± 0.03, and the seasonal AODs in MAM (March–April–May) and JJA (June–July–August) are obviously higher than those in SON (September–October–November) and DJF (December–January–February), with values of 0.70 ± 0.14 and 0.64 ± 0.16, respectively. The long-term 500 nm AOD reveals significant periodic inter-annual characteristics with a slight downward trend, and the irradiance also decreased due to the extinction of the increasing total-cloud amount.

INTRODUCTION

Atmospheric aerosols play an important role in the terrestrial climate system and in air quality monitoring. Aerosols scatter and absorb sunlight and directly affect the balance of the earth's solar radiation and energy budget; in addition, they can act as cloud condensation nuclei (CCN) and ice nuclei (IN), indirectly affecting the formation process and lifetime of clouds and precipitation events (Charlson et al., 1992; Penner et al., 1994; Che et al., 2013; Tao et al., 2014). On the other hand, aerosol particles are severe environmental pollutants that reduce air quality. Aerosol particles with diameters less than 10 µm and 2.5 µm are defined as PM_10 and PM_2.5, respectively. They can penetrate the human bronchial area and lungs, causing serious health problems. Furthermore, their extinction effects significantly reduce atmospheric visibility, threatening transportation safety and daily life (Zhang, 2009). Thus the noteworthy climate and air-pollution effects of aerosols have gradually become a high-interest field of study in atmospheric science worldwide.
AOD is a key physical parameter in aerosol climate-effect research and environmental pollution evaluation, and it is also of great significance for the correction of satellite remote sensing. Currently, ground-based observation and satellite remote sensing are the two main methods used to monitor AOD. Satellite remote sensing is a highly effective tool for obtaining high-spatial-resolution AOD data, but due to the influence of surface albedo and uncertainty in aerosol distribution, the accuracy of the satellite products still needs to be improved (Li et al., 2007; Wang et al., 2007a, 2010). Relatively speaking, ground-based observation is a more exact approach to capturing aerosol optical properties. At the beginning of this century, many organizations established their own aerosol-monitoring networks on the Chinese mainland. In 2002, the Aerosol Robotic Network (AERONET) established more than 30 sites in succession across China (Beijing, Yulin, etc.), equipping them with CE-318 sun-photometers. To obtain in-depth knowledge of dust aerosol optical properties in northern China and to test the accuracy of satellites, the China Meteorological Administration (CMA) first built more than 20 sites to organize the China Aerosol Remote Sensing Network (CARSNET), also using CE-318 sun-photometers to measure solar radiation. In 2004, this network began professional and normal operation. CSHNET was established by the Institute of Atmospheric Physics (IAP) in August 2004 using portable LED hazemeters at the initial 23 sites (Xin, 2007). The relative error of this type of AOD product is less than 5.00% compared with the CIMEL instrument, demonstrating its reliability and veracity (Xin, 2006). In 2011, the Campaign on Atmospheric Aerosol Research network of China (CARE-China) was built based on CSHNET, with 36 sites equipped with Microtops-II sun-photometers to collect unified observations (Xin et al., 2015). These 36 sites represent the typical ecosystems of China, and the observation results can describe the spatial and temporal distribution characteristics of aerosol optical properties for regional backgrounds in China. In addition, this network offers the observations necessary to validate and assess the applicability of international satellites over China (Li et al., 2007; Wang et al., 2007a, b; Liu et al., 2010; Wang et al., 2010), is applied to evaluate the aerosol direct radiation effect in China (Kwon et al., 2007; Li et al., 2010), and is used to test the accuracy of regional climate and environment models such as Reg-CM3 (Xin et al., 2010), RAMS-CMAQ (Han et al., 2009, 2010, 2011) and MATCH (Yin et al., 2009; Zhang et al., 2012). However, across all of the networks mentioned above, only about ten years of AOD observations are available to date, and thus reconstructing the historical AOD database is necessary for climate research and air-pollution evaluation in China.
Since the International Geophysical Year (IGY) of 1957–1958, measurements of solar radiation have been collected routinely worldwide. During that campaign, a large number of radiation instruments and an entire set of radiation observation methods offered by the former Soviet Union were introduced to China, and 122 radiation observation sites were set up in the initial stage to monitor the daily solar radiation. In 1989, CMA began to upgrade the old solar radiation network, using new comprehensive radiometers developed by China to replace the outdated instruments and to perform the latest meteorological radiation observations. By 1993, the improvement project was essentially complete, and the monitoring temporal resolution was increased to hourly values. After years of effort, a large body of radiation observations was accumulated. These databases document the decadal variation of solar radiation and also contain latent information on aerosols and clouds. Consequently, how to make effective use of these databases to reveal aerosol variations has become an important issue for many experts and scholars (Xu, 2008). Dating back to 1972, Unsworth and Monteith (1972) first proposed a parameterized model applying broadband solar radiation to retrieve the broadband aerosol optical depth (BAOD). Subsequently, many researchers worked to establish the broadband extinction method (BEM), which uses broadband direct solar radiation to calculate the AOD (Blanchet, 1982; Qiu, 1995, 1997, 1998; Gueymard, 1998; Kudo et al., 2010a, b, 2011). Until now, scientists in China have obtained many long-term historical AOD databases at 700 nm or 750 nm (i.e., with an equivalent wavelength of 700 nm or 750 nm) with BEM (Luo et al., 2000; Qiu et al., 2000; Luo et al., 2001, 2002; Zong et al., 2005). In general, however, 500 nm is the most widely monitored (both by satellites and by ground-based instruments) and analyzed wavelength, so those retrieval results cannot be compared directly with the observations. Moreover, the traditional retrieval model is based on an ideal Junge spectral distribution assuming v* = 3, whereas the dominant aerosol modes under various backgrounds show significant differences according to the CSHNET monitoring results. In addition, effective removal of cloud-contaminated radiation data is another key point for the use of the retrieval model. All of these factors might result in a relatively large error in the retrieval process.

We propose the OBEM model for retrieval of the AOD at 500 nm with the introduction of a cloud-screening module and an aerosol modal adjustment module. This work reconstructs the historical 500 nm AOD and derives normalized AOD variation characteristics for the most recent 18 years over Beijing.
Introduction to the Cloud-Screening Module

When retrieving AOD from direct solar radiation data, it cannot be ignored that cloudy skies and random cloudlets exert extinction effects on the direct solar radiation, ultimately inflating the retrieved AOD values. Hence, screening for clear-sky data is a prerequisite for retrieving AOD from direct solar radiation. In the optimized model, a cloud-screening module with high accuracy is added to filter the observational radiation data. Because cloudy weather is often accompanied by higher moisture and shorter sunshine duration, we set thresholds on the total bright sunshine duration and on the relative humidity (RH) to screen for clear days. If the meteorological conditions satisfy ① and ② simultaneously, the day is classified as clear. ① The daily total bright sunshine duration observed by the sunshine recorder is larger than 4 hours. ② The daily mean relative humidity (RH) is less than 70%.

To assess the veracity of the cloud-screening method, we filtered the clear days of 2009 and 2010 for the Beijing site using this method, and the screening results were compared with the days that contain real hazemeter observations (shown in Fig. 1). The cloud-screening results (red circles) for 2009 and 2010 are 253 and 248 days, with corresponding observational clear-day counts of 257 and 248; the misjudgment percentages (B*/total*) for the two years are 9.49% and 9.68%, and the erroneous rejection percentages (C*/total*) are 10.89% and 9.68%, respectively. Another critical point to consider in this cloud-screening process is that the sunshine duration and RH datasets are obtained at the Beijing Observatory, whereas the hazemeter observations are collected in the yard of the Iron Tower, IAP. A 20-kilometer distance exists between the two locations, and this factor might contribute substantially to the misjudgment. Based on these factors, we conclude that the method is reliable for cloud screening.

Introduction to the Aerosol Modal Adjustment Module

Different aerosol modes display different extinction effects at different wavelengths. Under the assumption of a Junge spectral distribution, τ(λ) can be calculated as

τ(λ) = τ_BAOD (λ_E/λ)^α,

where λ_E is the equivalent wavelength, τ_BAOD is the broadband aerosol optical depth, α is the Ångström exponent, and v* is the Junge spectral-distribution parameter, which lies in the range 2–4 and is related to α by α = v* − 2 (Sheng et al., 2003). However, in previous calculations it was common to assume v* ≈ 3 to represent the particles with diameters larger than 0.1 µm suspended over land, with the corresponding Ångström exponent α ≈ 1 (Qiu et al., 2004).

The perennial observational results of CSHNET, based on the Chinese Ecosystem Research Network (CERN), showed that due to the seasonal differences of the aerosol modes under various ecosystems, the AOD and Ångström exponent exhibit a regular seasonal cycle. The observed Ångström exponents range over 0.0–3.0, which shows that the preceding v* ≈ 3 (α ≈ 1) assumption might not be accurate. Based on the observational Ångström exponent datasets offered by CSHNET, Xin (2007) gave statistical Ångström exponents according to geographical area, ecosystem type and season (shown in Table 1). Thus, it is more precise to adjust the aerosol spectral-distribution parameter accordingly and so obtain an optimization scheme for different ecosystem types.
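The two thresholds above translate directly into a filter. The following minimal Python sketch assumes daily data in a pandas DataFrame with columns named 'sunshine_h' and 'rh_mean'; both names are our own assumptions, not part of the original description.

```python
import pandas as pd

def screen_clear_days(daily: pd.DataFrame,
                      min_sunshine_h: float = 4.0,
                      max_mean_rh: float = 70.0) -> pd.DataFrame:
    """Keep days satisfying the two OBEM clearness criteria:
    (1) total bright sunshine duration > 4 h;
    (2) daily mean relative humidity < 70 %."""
    is_clear = (daily["sunshine_h"] > min_sunshine_h) & \
               (daily["rh_mean"] < max_mean_rh)
    return daily.loc[is_clear]
```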
Model Construction

The parameterized scheme for OBEM is as follows.

① Parameterized model for the broadband aerosol optical depth τ_BAOD (Qiu, 2001). The parameterized model uses the broadband direct solar radiation S to retrieve the AOD, in terms of the following quantities: t_{a+m} is the broadband transmittance of the atmosphere in the aerosol-concentrated region; t_m is the broadband transmittance of molecules in the aerosol-concentrated region; S is the broadband direct solar radiation; S_0(λ) is the solar irradiance at λ nm at the top of the atmosphere; θ_0 is the zenith angle; m_a(θ_0) is the aerosol optical air mass; λ_1 and λ_2 are the lower and upper wavelength limits of the sun radiometer, 0.3 µm and 4 µm, respectively; and T_m is the molecular transmittance function, which can be calculated with the MODTRAN radiative transfer model.

② Parameterized model for the equivalent wavelength λ_E (Qiu, 2001), in which m_a is the aerosol optical air mass (Gueymard, 1998), u is the water vapor content (cm), µ_0 = cos(θ_0), and α_1 and α_2 are the Ångström exponents for λ ≤ λ_M and λ ≥ λ_M (λ_M = 0.732 µm), respectively. All wavelengths are in µm. Under the assumption of a non-Junge aerosol distribution, if α_1 < α_2 and f_size > 0, λ_E shifts to longer wavelengths (Qiu et al., 2002).

③ Retrieval model. The relationship between τ_BAOD, retrieved from the broadband direct solar radiation, and the equivalent wavelength λ_E can be expressed as

τ_BAOD = β λ_E^{−α}.   (11)

Assuming that the aerosol spectral distribution n(r) obeys the Junge distribution, the relationship between the AOD τ_a(λ) and the wavelength λ can be expressed as

τ_a(λ) = β λ^{−α},   (12)

where β is the turbidity coefficient. Combining Eqs. (11) and (12) and setting λ = 0.50 µm (i.e., 500 nm), the AOD at 500 nm can be calculated as

τ_a(0.50) = τ_BAOD (λ_E/0.50)^α.   (13)

DATA INTRODUCTION

The solar radiation datasets used in this paper are the hourly direct radiation collected by the Beijing Observatory (54511, 39°48′N, 116°28′E). The AOD observation datasets are supported by the Beijing site of CSHNET, which is located in the yard of the Iron Tower, IAP. CSHNET was established in August 2004, with 23 stations across China equipped with unified LED hazemeters in the initial construction. Langley calibration and transfer calibration were used to calibrate the hazemeters, and three calibration experiments were conducted in July 2004, December 2005 and August 2006 during the 3-year CSHNET project, which ensured the precision of the data quality. The LED hazemeters contain 4 channels (440 nm, 500 nm, 650 nm and 880 nm) and have a field angle of 2.5°. This type of instrument was widely used in the GLOBE (Global Learning and Observations to Benefit the Environment) project.
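Returning briefly to the retrieval model above, the final step of the chain, Eq. (13), reduces to a one-line conversion. The numerical example below is our own illustration.

```python
def aod_500nm(tau_baod: float, lambda_e_um: float, alpha: float) -> float:
    """Eq. (13): scale the broadband AOD to 500 nm with the Angstrom law,
    tau_a(0.50) = tau_BAOD * (lambda_E / 0.50)**alpha (wavelengths in um)."""
    return tau_baod * (lambda_e_um / 0.50) ** alpha

# Example: tau_BAOD = 0.40 at an equivalent wavelength of 0.75 um, alpha = 1.0
print(aod_500nm(0.40, 0.75, 1.0))  # -> 0.60
```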
In addition, the U.S. Forest Service also conducted selected regional aerosol observation experiments using this tool. Based on its stability and reliability, this type of instrument is widely accepted in global scientific research (Brooks and Mims, 2001; Hao, 2005). According to the Lambert–Beer law, the AOD processing predefined in the LED hazemeter expresses the measured signal in terms of the total optical depth τ(λ), the atmospheric optical properties at λ µm, i.e., τ(λ) = τ_aero(λ) + τ_R(λ) + τ_abs(λ), where τ_R(λ) is the Rayleigh-scattering optical depth and τ_abs(λ) is the optical depth of absorbing gases; m(θ) is the relative atmospheric air mass and θ is the zenith angle; v(λ) is the measurement value of the sun-photometer, v_dark(λ) is the sun-photometer dark value and v_0(λ) is the sun-photometer calibration constant; and d is the sun–earth distance parameter, calculated from the day series dn (1 January is 0 and 31 December is 364), the average sun–earth distance d_m and the actual sun–earth distance d_T at the monitoring moment (Xin, 2007).

MODEL TEST AND APPLICATION

In this section, the AOD at 500 nm is first retrieved from the hourly direct radiation. To test the accuracy of the retrieval results, the valid AOD is compared with the CSHNET products. Finally, we analyze the long-term variation of AOD over the Beijing region.

Model Test

Fig. 2 compares the daily average AOD between the CSHNET observations and the OBEM retrievals at the Beijing site. Fig. 5 presents a seasonal comparison of AOD values for the observation data of CSHNET and the retrieval results of OBEM during 2006–2010. The range of the correlation coefficient R² in the 4 seasons is 0.79–0.84, with all data exceeding the significance level of 0.001, and the RMSE values fall in the range 0.17–0.24. Among the 4 seasons, the retrieval result for JJA is the worst, with a comparatively high degree of dispersion and the highest RMSE value of 0.24. This result is primarily related to the strong solar energy, atmospheric instability and powerful convection in the high-humidity JJA environment, which lead to prominent cloud formation; this in turn increases the difficulty of eliminating blocky clouds and cirrus clouds and eventually causes a larger retrieval error. The correlation equation for JJA is Y = 0.86X + 0.07, and the difference from the midline Y = X reaches 0.14. The RMSEs for MAM and DJF are both relatively small, with values of 0.17 and 0.18, respectively.

For long-term AOD characteristic analysis, accuracy evaluation of the yearly retrieval result is more important. Table 2 shows that, to two decimal places, the relative errors for the yearly average data of OBEM and CSHNET in 2006–2010 are all less than 5.00%. The smallest value of 0.00% occurred in 2009, and the largest value of 3.85% occurred in 2010. The largest average AOD values for both OBEM and CSHNET appeared in 2006, with values of 0.57 and 0.59, respectively. Xu et al. (2015) likewise showed that the yearly average AOD for 2006 was significantly higher than those of the other 4 years (2007–2010). The smallest values of 0.51 for both OBEM and CSHNET occurred in 2009. Therefore, the yearly retrieval datasets are also acceptable.
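The hazemeter processing equations described in the Data section were not reproduced in this copy of the text; the sketch below reconstructs a standard Lambert–Beer inversion consistent with the symbol definitions given there. The eccentricity formula for d and the exact functional form are standard assumptions on our part, not taken from the paper.

```python
import numpy as np

def sun_earth_distance_factor(dn: int) -> float:
    """Approximate d = d_T/d_m from the day series dn (1 Jan -> 0),
    using a standard orbital-eccentricity correction (an assumption)."""
    return 1.0 - 0.01672 * np.cos(np.radians(0.9856 * (dn - 4)))

def hazemeter_aod(v, v_dark, v0, air_mass, tau_rayleigh, tau_abs, dn):
    """Invert (v - v_dark) = v0 * d**-2 * exp(-tau * m) for the total
    optical depth tau, then subtract the Rayleigh and gas-absorption
    optical depths to obtain tau_aero, as defined in the text."""
    d = sun_earth_distance_factor(dn)
    tau_total = np.log(v0 / ((v - v_dark) * d**2)) / air_mass
    return tau_total - tau_rayleigh - tau_abs
```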
Model Application: Long-Term Variation of AOD at 500 nm over Beijing

Based on the above analysis, the OBEM retrieval result is reliable and acceptable for revealing the variation characteristics of the AOD at 500 nm. Fig. 6 gives the monthly variation of the 500 nm AOD (Fig. 6(a)) and irradiance (Fig. 6(b)) in 1993–2010. The AOD variation presents significant inter-annual cycle characteristics over the Beijing region, with a unimodal or bimodal distribution over the course of a year; peak values are primarily found in MAM and JJA, and valley values in SON and DJF. In the AOD variations, the largest values occurred in June 2003 and June 2007, and the smallest value of 0.20 occurred in December 2007 during the 18 years of 1993–2010. The AOD in 2007 fluctuated remarkably, with a range of 0.20–1.08, whereas the AOD variation in 2001 showed the weakest fluctuation, with a range of 0.44–0.76. The irradiance also fluctuated strongly, which is related to the seasonal differences in insolation and to the atmospheric extinction effect. Comparing the monthly average values of 1993–2005 (red line in Fig. 7(a)) with those of 2006–2010 (green line in Fig. 7(a)), except for June, August and September, the values for 1993–2005 are all greater than those of 2006–2010. The difference in March is the most obvious, reaching a gap of 0.18, and is mainly associated with the decrease in MAM dust events (Wang et al., 2017). Over the entire record (blue line in Fig. 7(b)), the AOD presents a unimodal shape; the peak value of 0.79 occurs in April and is contributed by frequent strong dust events. The statistics for the annual and seasonal average AOD in 1993–2010 (table in Fig. 7(b)) show that the annual mean value is 0.58 ± 0.03. Among the 4 seasons, the MAM AOD of 0.70 is the largest, and the values decrease in the order JJA (0.64), SON (0.51) and DJF (0.46). Ordinarily, the irradiance in JJA should be higher than in the other three seasons over Beijing, yet the observed irradiance for 1993–2010 (Fig. 7(b)) has two peaks, with its valley in JJA. Compared with MAM, the aerosols exhibit a similar extinction effect (AOD_MAM: 0.70; AOD_JJA: 0.64). We therefore conclude that, in addition to the relatively high AOD, the clouds developed by severe convection and high humidity in this season also strongly attenuate the solar radiation. Under ideally clear skies the received irradiances in MAM and SON would be the same, but Fig. 7 shows that the MAM peak is obviously higher than the SON peak even though the aerosol loading in SON is lower; hence the cloud extinction effect in SON is very significant as well.

Xu et al. (2015) retrieved the 750 nm AOD for 1993–2012 using the traditional broadband extinction method and found that the AOD exhibited a downward trend over those 20 years in Beijing; two peaks existed, in 2003 and 2006, among which the AOD in 2006 was relatively high, with a value of approximately 0.60. Che et al. (2015) analyzed the 440 nm AOD variation of the CARSNET Beijing site for 2002–2013 and also noted that the AOD decreased year by year. Fig. 8 shows the yearly average values of the retrieved 500 nm AOD (Fig. 8(a)) and irradiance (Fig. 8(b)) in 1993–2010. The retrieval result clearly captures the AOD downward trend, with a fitting equation of Y = −0.0058X + 0.6998. The smallest AOD of 0.51 occurred in 2009, and the largest AODs of 0.63 were noted in 1993 and 1996. Because the AOD decreases with increasing monitoring wavelength, the OBEM retrievals at 500 nm were lower than the 440 nm values of Che et al. (2015) and higher than the 750 nm values of Xu et al. (2015). As shown in Fig. 8(b), the irradiance also exhibited a downward trend during 1993–2010. Zheng and Zhang (2013) found that the total-cloud amount was increasing in this period, so the cloud extinction effect was the major factor weakening the solar radiation.

CONCLUSION

With the application of the cloud-screening module and the aerosol modal adjustment module, the OBEM is more accurate and reliable in retrieving the 500 nm AOD database and is favorable for harmonization with the current 500 nm AOD datasets monitored by satellites and ground-based instruments. Moreover, OBEM is a feasible way to reconstruct the historical 500 nm AOD database and to implement homogenization of the AOD observational wavelength. The AOD for 1993–2010 retrieved by OBEM presents significant inter-annual cycle variation in Beijing. Affected by frequent dust events, the highest AOD occurred in MAM, whereas the lowest value was noted in DJF. Over the 18 years of 1993–2010, the AOD exhibited a slight downward trend. The cloud extinction effect is especially significant in JJA and SON, and the long-term irradiance also decreased due to the extinction of the increasing total-cloud amount; it is therefore all the more important to remove cloud contamination when retrieving AOD from direct solar radiation.

Fig. 1. Tests of the cloud-screening module. Red circles for 2009 and 2010 represent the cloud-screening results and blue circles the observational clear days (defined as days with hazemeter observations). A* denotes the available results; B* denotes the misjudged components (an observed cloudy day classified as clear by the module); C* denotes the erroneously rejected components (an observed clear day classified as cloudy by the module).

Fig. 2. Comparison of the daily average AOD between the observation data of CSHNET and the retrieval results of OBEM at the Beijing site (solid thick line: linear fit of the daily data; dashed line: Y = X; dotted lines: Y = 1.15X ± 0.05).

Fig. 5. Seasonal comparison of AOD values between the observation data of CSHNET and the OBEM retrieval results over the Beijing site (solid thick line: linear fit of the daily data; dashed line: Y = X; dotted lines: Y = 1.15X ± 0.05).

Fig. 7. Monthly average 500 nm AOD for 1993–2005 (red line in (a)), 2006–2010 (green line in (a)) and 1993–2010 (blue line in (b)). The red line in (b) shows the monthly average irradiance for 1993–2010. The table in (b) gives the statistics for the annual and seasonal average AOD.

Table 2. Comparison of the yearly average AOD between the observation data of CSHNET and the OBEM retrieval results at the Beijing site.
On the semi-centre of a Poisson algebra

If $\mathfrak{g}$ is a Lie algebra then the semi-centre of the Poisson algebra $S(\mathfrak{g})$ is the subalgebra generated by ad$(\mathfrak{g})$-eigenvectors. In this paper we abstract this definition to the context of integral Poisson algebras. We identify necessary and sufficient conditions for the Poisson semi-centre $A^{\operatorname{sc}}$ to be a Poisson algebra graded by its weight spaces. In that situation we show the Poisson semi-centre exhibits many nice properties: the rational Casimirs are quotients of Poisson normal elements and the Poisson Dixmier–Mœglin equivalence holds for the semi-centre.

Introduction

Throughout this paper k is a field of characteristic zero, all vector spaces are defined over k, and g will be a Lie algebra. The symmetric algebra S(g) carries a natural structure of a Poisson algebra. It is easy to see that the subalgebra S(g)^g ⊆ S(g) consisting of elements annihilated by ad(g) coincides with the Poisson centre. The semi-invariants are, by definition, the common eigenvectors for ad(g), and the algebra S(g)^sc which they generate is known as the Poisson semi-centre. This is a Poisson commutative subalgebra of S(g) graded by the weight space decomposition of ad(g). Over the years the study of semi-centres has motivated a sizable body of research; see [1, 2, 7, 8, 9, 14] and the references therein. Since this topic arose in the context of invariant theory, some of the central questions are the polynomiality and factoriality of semi-centres. One notable outlet for the study of semi-invariants lies in the computation of the rational invariants of S(g). By the results of Rentschler and Vergne, Dixmier's fourth problem is in fact equivalent to the statement that the centre of Frac S(g) is purely transcendental over k; see [15] and [1, Problèmes]. Thanks to [2] every rational invariant is a quotient of two elements of S(g)^sc with the same weight, and so the theory of semi-invariants appears naturally in some important classical problems.

The purpose of this article is to define and study the Poisson semi-centre A^sc of an arbitrary integral Poisson algebra A, by which we mean a Poisson algebra which is also an integral domain. We recall that a Poisson normal element a ∈ A is one such that {A, a} ⊆ Aa, equivalently the principal ideal Aa is Poisson, and our first observation is that when A = S(g) the Poisson semi-invariants of S(g) are precisely the same as the Poisson normal elements (Lemma 2.1). With this in mind we define the Poisson semi-centre A^sc of A to be the subalgebra generated by the Poisson normal elements. In general this subalgebra need not be a Poisson subalgebra (see Example 2.3), and even when it is, it need not be Poisson graded by the weight spaces for the Hamiltonian derivations (see Example 2.9). To remedy this we begin the paper by identifying a necessary and sufficient condition for A^sc to be a Poisson algebra graded by the Poisson weight space decomposition, as we now explain. Since A is assumed to be a domain, it is easily shown that for every Poisson normal element a ∈ A there exists a Poisson derivation λ : A → A such that {b, a} = λ(b)a for all b ∈ A (Lemma 2.5). The additive submonoid of Der_k(A) generated by these derivations will be denoted Λ(A). In Proposition 2.6 we show that A^sc is a Poisson subalgebra of A graded by the weight space decomposition if and only if Λ(A) is an abelian Lie submonoid of Der_k(A), and we refer to the latter condition as the abelian weight property.
We exhibit several large families of Poisson algebras satisfying this property, including symmetric algebras S(g) of Lie algebras, Poisson affine spaces, semiclassical limits of various quantised coordinate rings (see [6]) and the algebras A(n, a) studied by Sierra and the first author in [12].

Motivated by the close connection between the semi-centre S(g)^sc and the centre of the Poisson quotient field Frac S(g) (see [2]), we investigate the relationship between Poisson ideals, normal elements and the centre of the fraction field of the semi-centre. Some of our results are gathered together here; see Propositions 3.6 and 3.9.

Proposition 1.1. Let A be an integral Poisson algebra with the abelian weight property and such that A^sc is finitely generated. Then the following hold: (i) Every nonzero Poisson ideal of A^sc contains a nonzero Poisson normal element; (ii) Every rational Casimir of A^sc is a quotient of two normal elements weighted by the same derivation.

If A is an integral Poisson algebra and a, b ∈ A are normal elements weighted by the same derivation, then it is easily seen that ab⁻¹ lies in the centre of Frac A.

Question 1.2. Does every element of the centre of Frac A arise as the quotient of two normal elements?

When A is a given Poisson algebra, the study of the rational Casimirs is a challenging problem; this is especially true for Dixmier's fourth problem in the case of symmetric algebras of Lie algebras (see [14] for a detailed discussion). One application of such information is to develop our understanding of the Poisson primitive ideals in P-Spec(A) via the Poisson Dixmier–Mœglin equivalence. Recall that a Poisson prime ideal I ⊆ A is called locally closed if {I} is a locally closed subset of the Poisson spectrum P-Spec(A); I is called Poisson primitive if it is the largest Poisson ideal contained in some maximal ideal of A; finally, I is called rational if the Poisson centre of the quotient field of A/I is algebraic over k. Thanks to [13, 1.7, 1.10] we know that every locally closed ideal is primitive and every primitive ideal is rational. Brown and Gordon asked whether all three properties might coincide [3], and when they do we say that A satisfies the Poisson Dixmier–Mœglin equivalence. Using Proposition 1.1 we prove the following.

Theorem 1.3. Let A be an integral Poisson algebra with the abelian weight property and such that A^sc is finitely generated. Then the Poisson Dixmier–Mœglin equivalence holds for A^sc.

We now describe the structure of this paper. In §2 we discuss the definition of the Poisson semi-centre and the abelian weight property, showing that some familiar examples of Poisson algebras satisfy this property. In §3 we consider a class of finitely generated Poisson algebras axiomatising the algebras A^sc, where A is an integral Poisson algebra with the abelian weight property. We call these Poisson algebras generalised Poisson affine spaces and we prove Proposition 1.1 in the context of such algebras, from which we deduce Theorem 1.3.

Acknowledgements: We would like to thank Professor David Jordan and Professor Alfons Ooms for carefully reading the first draft of this manuscript and making helpful suggestions. The second author is grateful for the support of EPSRC grant EP/N034449/1.

The Poisson semi-centre and the abelian weight property

Suppose that g is a Lie algebra over k.
If {x_i | i ∈ I} is a basis for g then the symmetric algebra S(g) carries a natural structure of a Poisson algebra with bracket

{f, h} = Σ_{i,j∈I} (∂f/∂x_i)(∂h/∂x_j) [x_i, x_j].   (2.1)

The invariants of S(g) are the elements S(g)^g := {f ∈ S(g) | ad(g)f = 0} and the semi-invariants are defined to be {f ∈ S(g) | ad(g)f ⊆ kf}. An easy calculation using (2.1) shows that the Poisson centre of S(g) is equal to S(g)^g. The algebra generated by the set of all semi-invariants is known as the semi-centre S(g)^sc, and it has been the focus of much research over the years. The Poisson normal elements of S(g) are defined to be the elements f ∈ S(g) such that {S(g), f} ⊆ S(g)f.

Lemma 2.1. The Poisson normal elements of S(g) are precisely the semi-invariants.

Proof. If a is a semi-invariant then using (2.1) we see that a is Poisson normal. Conversely, if a is Poisson normal then for any x ∈ g there exists λ(x) ∈ A such that {x, a} = λ(x)a. We deduce that λ(x) ∈ k from the fact that the Poisson bracket (2.1) satisfies {S(g)_i, S(g)_j} ⊆ S(g)_{i+j−1}, where S(g) = ⊕_{i≥0} S(g)_i is the grading with g placed in degree 1.

The above discussion leads us naturally to:

Definition 2.2. The semi-centre of a Poisson algebra A is the subalgebra A^sc generated by Poisson normal elements.

One nice feature of the semi-centre S(g)^sc is that it is Poisson commutative. However, outside the Lie-theoretic setting this fails immediately. To illustrate what may go wrong we present a couple of examples. The first one shows that in general A^sc is not necessarily a Poisson subalgebra of A.

Example 2.3. Let A = k[x, y, z] with brackets {x, y} = xyz, {x, z} = x and {y, z} = y. Then A^sc = k[x, y] is not closed under the Poisson bracket.

The following example shows that even when A^sc is a Poisson subalgebra it is not always Poisson commutative.

Example 2.4. Let A = k[x_1, ..., x_n] be a polynomial algebra and let (λ_{i,j})_{1≤i,j≤n} ∈ Mat_n(k) be a skew-symmetric matrix. Define a Poisson bracket on A by the rule

{x_i, x_j} = λ_{i,j} x_i x_j.   (2.2)

This algebra is known as Poisson affine space and, since the generators x_1, ..., x_n are Poisson normal, we have A = A^sc, which is not Poisson commutative in general. The Poisson torus T associated to A is the localisation of A at the generators, T = k[x_1^{±1}, ..., x_n^{±1}], and the Poisson bracket on A extends uniquely to a Poisson bracket on T.

We proceed to discuss the properties of normal elements. Recall that a Poisson derivation λ ∈ Der_P(A) is a k-derivation of A which is also a derivation of the Lie bracket {·, ·} of A.

Lemma 2.5. If A is an integral domain and a ∈ A is Poisson normal then there exists a Poisson derivation λ ∈ Der_P(A) such that {b, a} = λ(b)a.

From henceforth we assume that A is an integral Poisson algebra. For any λ ∈ Der_P(A) we write A_λ := {a ∈ A | {b, a} = λ(b)a for all b ∈ A} and Λ(A) := {λ ∈ Der_P(A) | A_λ ≠ 0}. Since {A, k} = 0 we have 0 ∈ Λ(A), and when a ∈ A_λ and b ∈ A_µ we have ab ∈ A_{λ+µ} by the Jacobi identity, so that Λ(A) is a commutative submonoid of Der_P(A). This leads to an alternative description of the semi-centre:

A^sc = ⊕_{λ∈Λ(A)} A_λ.   (2.4)

The derivations λ ∈ Λ(A) will be referred to as the weights of A, whilst the subspaces A_λ will be called the weight spaces. Although the formula (2.4) defines a grading on A^sc as an associative subalgebra of A, it does not in general define a Poisson grading (see Example 2.9). In this paper we are interested in the case where A^sc is a Poisson subalgebra which is Poisson graded by (2.4), i.e., {A_λ, A_µ} ⊆ A_{λ+µ} for λ, µ ∈ Λ.
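As a quick computational sanity check of Example 2.3, and purely our own illustration rather than part of the paper, the following sympy sketch verifies that x and y are Poisson normal while their bracket xyz escapes k[x, y]:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = (x, y, z)
# Structure brackets of Example 2.3: {x,y} = xyz, {x,z} = x, {y,z} = y
B = {(x, y): x*y*z, (x, z): x, (y, z): y}

def bracket(f, g):
    """Poisson bracket on k[x,y,z], extended from B by the Leibniz rule."""
    return sp.expand(sum(b * (sp.diff(f, u)*sp.diff(g, v)
                              - sp.diff(f, v)*sp.diff(g, u))
                         for (u, v), b in B.items()))

# Normality of x and y: by the Leibniz rule it suffices to test against
# the generators that {f, x} is divisible by x (and likewise for y)
for f in gens:
    assert sp.simplify(bracket(f, x) / x).is_polynomial(*gens)
    assert sp.simplify(bracket(f, y) / y).is_polynomial(*gens)

print(bracket(x, y))  # -> x*y*z, which does not lie in k[x, y]
```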
The following translates these properties into statements about Λ.

Proposition 2.6. Let A be an integral Poisson algebra. Then the following are equivalent: (i) [Λ, Λ] = 0; (ii) A^sc is a Poisson subalgebra of A and (2.4) is a Poisson grading.

If (i) or (ii) holds then we say that A has the abelian weight property. Furthermore, λ(A_µ) ⊆ A_µ for all λ, µ ∈ Λ. Since A is an integral domain it follows that λ(b) ∈ A_µ, and so λ preserves the weight spaces for all λ ∈ Λ.

Remark 2.7. When A has the abelian weight property, the Poisson normal elements of A^sc are precisely the elements homogeneous with respect to the grading A^sc = ⊕_λ A_λ. We shall use these two names interchangeably for such elements. We also point out that the homogeneous elements of degree zero are the same as the Poisson central elements.

The abelian weight property is reasonably natural, as the next result illustrates.

Proposition 2.8. Let A be an integral Poisson algebra and S ⊆ A a multiplicative set. Then Λ(A) embeds into Λ(AS⁻¹); in particular, if AS⁻¹ has the abelian weight property then so does A. Moreover, the abelian weight property holds for: (i) symmetric algebras S(g) of Lie algebras; (ii) Poisson affine spaces and Poisson tori; (iii) semiclassical limits of quantised coordinate rings; (iv) the algebras A(n, a).

Proof. There is a Lie algebra embedding from Der_P(A) into Der_P(AS⁻¹) extending derivations via the Leibniz rule. If a is normal in A with weight λ then the following computation shows it is also normal in AS⁻¹, and that the weight is the image of λ in Der_P(AS⁻¹):

{bs⁻¹, a} = −bs⁻²{s, a} + s⁻¹{b, a} = (λ(b)s⁻¹ − bs⁻²λ(s))a.

Thus Λ(A) ↪ Λ(AS⁻¹) as abelian groups, which proves the first claim. We now verify that the examples listed in the proposition satisfy the abelian weight property:

(i) Let g be a Lie algebra. Since the maps λ ∈ Λ(S(g)) are derivations and S(g) is generated by g, it suffices to show that [λ, µ](g) = {0} for all λ, µ ∈ Λ(S(g)). By Lemma 2.1 the Poisson normal elements of S(g) are actually semi-invariants and so λ, µ send g → k. It follows that λ∘µ(g) = µ∘λ(g) = {0}, and as a result [λ, µ](g) = {0}.

(ii) Now let A = k[x_1, ..., x_n]. For i = 1, ..., n we let x_i ∈ A_{λ_i} for λ_1, ..., λ_n ∈ Λ(A) and write ∂_i := ∂/∂x_i. It follows from (2.2) that λ_j = Σ_{i=1}^n λ_{i,j} x_i ∂_i for λ_{i,j} ∈ k. Since the derivations {x_i ∂_i | i = 1, ..., n} pairwise commute, it follows immediately that the same is true for λ_1, ..., λ_n. The monoid of weights of the torus k[x_1^{±1}, ..., x_n^{±1}] is generated by the weights {±λ_i | i = 1, ..., n} and so is abelian.

(iii) These Poisson algebras are all Poisson iterated Ore extensions to which the Poisson deleting derivations algorithm [10] can be applied, and therefore they localise to Poisson tori [11, Theorem 5.3.2]. The result then follows from the first claim along with part (ii).

(iv) Fix n ≥ 1 and a ∈ k. By [12, Lemma 3.26] the Poisson algebra A = A(n, a) has a localisation A° which is isomorphic to the Poisson algebra k[Y_0^{±1}, X, Y_2, ..., Y_n], with nonzero Poisson brackets given in [12]. It is straightforward to see that its semi-centre is k[Y_0^{±1}, Y_2, ..., Y_n] and that the monoid of weights is generated by the commuting set {±aY_0^{±1}∂_X, (a+i)Y_0∂_X | i = 2, ..., n}.

Despite holding for the families described in the proposition, the next example shows that the abelian weight property does not hold for every integral Poisson algebra.

Example 2.9. Every Poisson bracket on A = k[x, y] is determined by a choice of {x, y}, thanks to the derivation rule and skew-symmetry; furthermore, every possible choice actually defines a Poisson bracket. If we define {x, y} = pxy for some p ∈ A then both x and y are normal, and so the resulting Poisson structure satisfies A^sc = A. The weights of x and y are respectively λ_x = py∂_y and λ_y = px∂_x. Since [λ_x, λ_y](x) = −λ_x(p)x and [λ_x, λ_y](y) = λ_y(p)y, it follows that A has the abelian weight property if and only if p ∈ k.
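Example 2.9 can also be checked mechanically. The sketch below, again our own illustration, computes the commutator of the two weight derivations as printed above (the sign conventions in the extracted text are ambiguous, but the qualitative conclusion, that the commutator vanishes exactly when p is constant, is unaffected):

```python
import sympy as sp

x, y = sp.symbols('x y')

def weight_commutator(p):
    """[lam_x, lam_y] applied to the generators, for {x, y} = p*x*y."""
    lam_x = lambda f: p*y*sp.diff(f, y)   # weight of x, as printed
    lam_y = lambda f: p*x*sp.diff(f, x)   # weight of y, as printed
    comm = lambda f: sp.expand(lam_x(lam_y(f)) - lam_y(lam_x(f)))
    return comm(x), comm(y)

print(weight_commutator(sp.Integer(3)))  # (0, 0): abelian weight property holds
print(weight_commutator(x + y))          # nonzero: the property fails
```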
Generalised Poisson affine space and the Poisson Dixmier–Mœglin equivalence

In this section we investigate algebraic and geometric properties of P-Spec A^sc, and so we restrict ourselves to the case where the semi-centre is finitely generated. We remark that this is not always the case, as shown in [2, Section 5]. Our results focus on the case where the semi-centre is a Poisson algebra graded by its weight space decomposition. In view of Proposition 2.6 the most appropriate way to discuss such algebras seems to be via the following axiomatisation.

Definition 3.1. A generalised Poisson affine space is an integral Poisson algebra generated by finitely many Poisson normal elements whose weights pairwise commute.

Lemma 3.2. If I is a Poisson prime ideal of a generalised Poisson affine space A then A/I is again a generalised Poisson affine space.

Proof. Let x_1, ..., x_n be Poisson normal generators of A. We can suppose that there is 1 ≤ m ≤ n such that {x_1, ..., x_m} ∩ I = ∅ and {x_{m+1}, ..., x_n} ⊆ I. Then the images x̄_1, ..., x̄_m in A/I are normal generators of A/I. Since the latter is an integral domain, Lemma 2.5 tells us that there are derivations λ̄_1, ..., λ̄_m of A/I such that {ā, x̄_i} = λ̄_i(ā)x̄_i for all i = 1, ..., m. For all a ∈ I we have {a, x_i} = λ_i(a)x_i ∈ I, and since I is prime and x_i ∉ I we deduce that λ_i(a) ∈ I. In other words, λ_i(I) ⊆ I, and the map λ̄_i is just the map induced by λ_i on the quotient A/I. Finally, since {λ_i | i = 1, ..., m} pairwise commute, we may conclude that the same is true for {λ̄_i | i = 1, ..., m}.

Examples of generalised Poisson affine spaces may be obtained by: (i) forming a Poisson affine space over some ground ring K, which is a finitely generated commutative k-algebra; (ii) taking a prime Poisson quotient of any generalised Poisson affine space. In fact, when all of the normal generators x_1, ..., x_n of a generalised Poisson affine space A are prime, it is easy to show that the Poisson brackets all have the form {x_i, x_j} = λ_{i,j} x_i x_j, where λ_{i,j} ∈ Cas(A) is a Casimir for 1 ≤ i, j ≤ n. In this case the fibres of the map Spec A → Spec Cas(A) are Poisson affine spaces. It is interesting to wonder whether this conclusion holds when the normal generators are not necessarily prime.

In order to prove Theorem 1.3 we actually prove the following result, which is equivalent.

Theorem 3.4. The Poisson Dixmier–Mœglin equivalence holds for every generalised Poisson affine space.

For the rest of the section we assume A is a generalised Poisson affine space. We let x_1, ..., x_n be the normal generators with weights λ_1, ..., λ_n and we let Λ be the monoid consisting of the weights of normal elements. Recall that, by Proposition 2.6, the decomposition A = ⊕_{λ∈Λ} A_λ is a Poisson grading.

Lemma 3.5. If two weights λ, µ ∈ Λ agree on the generators x_1, ..., x_n then λ = µ.

Proof. Suppose that λ, µ ∈ Λ agree on the generators and observe that λ − µ ∈ Der_P(A). If λ − µ vanishes on the generators x_1, ..., x_n then by the Leibniz rule it vanishes on all of A. The lemma follows.

The next proof follows the same principle as Artin's linear independence of characters of a group.

Proposition 3.6. Every nonzero Poisson ideal contains a nonzero homogeneous element.

Proof. Let I be a nonzero Poisson ideal. For each a ∈ I we may decompose a = Σ_{λ∈Λ} a_λ with a_λ ∈ A_λ, and write ℓ(a) := #Λ(a) where Λ(a) := {λ ∈ Λ | a_λ ≠ 0}. We show that I contains a nonzero element with ℓ(a) = 1. Pick a nonzero a ∈ I with ℓ(a) minimal, and suppose for contradiction that ℓ(a) > 1. Recall that each x_i is homogeneous of weight λ_i. By Proposition 2.6 the derivations λ ∈ Λ preserve the grading, and so for any i ∈ {1, ..., n} we have

{x_i, a} = Σ_{λ∈Λ(a)} λ(x_i) a_λ.

From Lemma 3.5 there is some i ∈ {1, ..., n} such that µ_1(x_i) ≠ µ_2(x_i) for some µ_1, µ_2 ∈ Λ(a). Thus, for this choice of i, the expression µ_1(x_i)a − {x_i, a} is non-zero, lies in I and has ℓ(µ_1(x_i)a − {x_i, a}) < ℓ(a). This contradicts the minimality of ℓ(a), and the contradiction proves the claim.

Remark 3.7. In general it is not true that every Poisson ideal in a generalised Poisson affine space is generated by homogeneous elements.
For example, let A := k[x_1, x_2] with Poisson bracket given by {x_1, x_2} = x_1x_2. It is not hard to see that the Poisson normal elements are precisely the monomials in x_1, x_2; however, for all s ∈ k the ideal (x_1, x_2 − s) is Poisson.

We now recall a few facts about modules over Poisson algebras, required in the proof of Proposition 3.9. A Poisson A-module is a vector space V equipped with two linear maps A → End_k(V), written a ↦ m(a) and a ↦ δ(a), such that m is a representation of A as an associative algebra, δ is a representation of A as a Lie algebra, and

δ(ab) = m(a)δ(b) + m(b)δ(a);   (3.1)
m({a, b}) = [δ(a), m(b)].   (3.2)

Lemma 3.8. Let W be a Poisson A-module, let U ⊆ W be a Poisson A-submodule and let V ⊆ W be an A-submodule. Then (U : V) := {a ∈ A | m(a)U ⊆ V and δ(a)U ⊆ V} is a Poisson ideal of A.

Proof. Let a ∈ (U : V), b ∈ A and u ∈ U. We have

m(ab)u = m(a)(m(b)u) ∈ V;
m({a, b})u = [δ(a), m(b)]u ∈ V;
δ(ab)u = m(a)δ(b)u + m(b)δ(a)u ∈ V;
δ({a, b})u = [δ(a), δ(b)]u ∈ V,

and so ab, {a, b} ∈ (U : V).

The following result says that every rational Casimir is a quotient of two normal elements. Our approach was inspired by the corresponding statement in symmetric algebras of Lie algebras, first proven in [2].

Proposition 3.9. For λ ∈ Λ consider the sets Q_λ := {ab⁻¹ ∈ Frac(A) | a, b ∈ A_λ, b ≠ 0}. We have Cas(Frac(A)) = ⋃_{λ∈Λ} Q_λ. In other words, every Casimir of Frac(A) is a quotient of homogeneous elements of A of the same weight.

Proof. The fact that the elements of Q_λ are Casimirs follows from a short calculation in Frac(A), which we leave to the reader. Let ab⁻¹ ∈ Cas(Frac(A)) and consider the Poisson A-submodule U ⊆ Frac(A) generated by ab⁻¹. Since ab⁻¹ is a Casimir, the map A → U sending c to cab⁻¹ is an isomorphism of Poisson modules. According to the previous lemma the space (U : A) is a Poisson ideal of A. We claim that (U : A) ≠ 0. For all c ∈ A we have b²·cab⁻¹ = bca ∈ A; it follows that b² ∈ (U : A) ≠ 0. Now we may apply Proposition 3.6 to deduce that (U : A) contains a nonzero homogeneous element c ∈ A_λ. By definition we have cab⁻¹ = d ∈ A, and since ab⁻¹ is a Casimir it follows that d ∈ A_λ. Now we have the equality ab⁻¹ = dc⁻¹ in Frac(A), which shows that ab⁻¹ ∈ Q_λ.

Corollary 3.10. If Cas Frac A is a finite extension of k then for every λ ∈ Λ, every two elements a, b ∈ A_λ are algebraically dependent over k, i.e., there is a non-zero f ∈ k[X, Y] such that f(a, b) = 0.

Proof. Suppose that a, b ∈ A_λ are algebraically independent. We claim that the set {a(b − sa)⁻¹ | s ∈ k} is a k-linearly independent subset of Cas Frac(A). Since k is a field of characteristic zero it has infinite cardinality, and so this claim will prove the corollary. Since these are fractions of Poisson normal elements of the same weight λ, they are Casimirs as claimed. Suppose that s_1, ..., s_n ∈ k are distinct elements and that t_1, ..., t_n ∈ k satisfy

Σ_i t_i a(b − s_i a)⁻¹ = 0.

Clearing the denominators and using the fact that k[a, b] is an integral domain, we get

Σ_i t_i Π_{j≠i} (b − s_j a) = 0.

Since this equation holds in the polynomial ring k[a, b], it holds modulo the ideal (b − s_j a)k[a, b] for each j = 1, ..., n. This gives t_1 = t_2 = ··· = t_n = 0, and this proves the claim.
Consider the homomorphism φ : R ÝÑ A; X Þ ÝÑ a; Y Þ ÝÑ b. It is evidently a homogeneous morphism with respect to the Λ-gradings on A and R, and so the kernel is homogeneously generated. Furthermore, by Corollary 3.10, Ker φ ‰ 0, and so we can choose f pX, Y q " ř d i"0 s i X i Y d´i where s 0 , ..., s d P k such that f pa, bq " 0. If s i " 0 for i ă d then this relation says that s d a d " 0. Since A is an integral domain we know that this is not the case. Hence s i ‰ 0 for some i ă d. Suppose that m " minti | s i ‰ 0u and observe that f pa, bq " a m d ÿ i"m s i a i´m b d´i " 0. Using the fact that A is an integral domain once again we see that We have now shown that I contains a monomial of the form b " x m 1 1¨¨¨x mn n . Since I is prime it must be that it contains one of the elements x 1 , ..., x n as claimed. The deductions made above imply that the zero ideal is equal to the following open subset of P-SpecpAq: tp0qu " n č i"1 tP P P-SpecpAq | px i q Ć P u. As a consequence p0q is locally closed and the proof is complete. Remark 3.11. In this paper we assumed throughout that A is an integral domain, however this hypothesis can be removed when A is noetherian, reduced and the minimal prime ideals p 1 , ..., p n are pairwise coprime. When A is such a Poisson algebra the p 1 , ..., p n are all Poisson [16, Lemma 1.1] and so the natural map A Ñ A{p 1ˆ¨¨¨ˆA {p n is a Poisson homomorphism. The map is surjective by the Chinese remainder theorem and the kernel is Ş i p i " 0. Now our results can be applied to each of the direct factors tA{p i | i " 1, ..., nu. Geometrically this just corresponds to a Poisson variety with disjoint irreducible components.
Primary Mural Endocarditis Caused by Streptococcus pyogenes

INTRODUCTION Primary mural endocarditis is a rare form of intracardiac infection that occurs on endocardial surfaces independent of systemic valve involvement. More commonly, nonvalvular infective endocarditis occurs secondary to infected mural thrombus, intracardiac devices or prostheses, cardiac tumors, structural abnormalities including congenital defects, or valvular infective endocarditis. 1 Risk factors for mural endocarditis otherwise include immunosuppression, intravenous drug abuse, prior cardiac surgery, and chronic debilitating disease. 2 The clinical presentation of mural endocarditis is otherwise similar to that of infective valvular endocarditis. 3 Mural endocarditis is most commonly caused by Staphylococcus aureus and Streptococcus species, but any endocarditis caused by group A beta hemolytic Streptococcus is uncommon. 4,5 We describe a case of an immunocompromised patient with Streptococcus pyogenes bacteremia and clinical signs of myopericarditis who, after further evaluation, was diagnosed with primary mural endocarditis.

CASE PRESENTATION A 37-year-old African American man with a medical history significant for HIV/AIDS and daily cannabis use presented to the emergency department reporting severe, stabbing, midsternal chest pain that started suddenly at rest. The patient reported an upper respiratory tract infection in the preceding weeks and described shortness of breath and subjective fever for several days. He also reported nonadherence to antiretroviral therapy within the year. On physical examination, he was normotensive and afebrile but notably tachycardic and tachypneic. On auscultation, he had a prominent pericardial rub, without any other significant examination findings. Initial electrocardiography on presentation was notable for diffuse, submillimeter ST-segment elevations in leads I, II, aVL, and V3 through V6, with nonspecific T-wave changes. Initial laboratory findings suggested leukocytosis (white blood cell count 20.7 K/mL) with left shift, normocytic anemia (hemoglobin 11.4 g/dL), thrombocytopenia (platelet count 111 K/mL), acute kidney injury (creatinine 1.60 mg/dL and blood urea nitrogen 27 mg/dL), and an initial troponin I level of 0.181 ng/mL. The patient's most recent CD4+ count was 15/mm³, obtained 1 year before this presentation, with a viral ribonucleic acid count of 107,000 copies/mL. Given concern for possible acute coronary syndrome, urgent transthoracic echocardiography was performed to evaluate for possible regional wall motion abnormalities. Instead, a small posterior pericardial effusion was identified (Figure 1), as well as echogenic structures within the left atrium adjacent to the interatrial septum (Figures 1 and 2) and within the right atrium along the free wall and tricuspid valve annulus (Figure 2). Empiric antibiotic therapy was initiated in light of the patient's immunocompromised state and concern for pyogenic pericarditis and possible infective endocarditis. The first two blood cultures obtained on admission grew group A Streptococcus, specifically S pyogenes, and empiric antibiotic therapy was transitioned to penicillin G. Transesophageal echocardiography was performed on hospital day 4 to evaluate for possible infective endocarditis, which again identified a small pericardial effusion but now with fibrous, echogenic material (Video 1).
Most concerning was a solid left atrial mass along the anterior aspect of the mitral valve annulus, extending toward the interatrial septum and aortic root. The mass was fixed and independent of the mitral valve leaflets (Videos 2 and 3). Also noted was thickening of the posterior tricuspid valve leaflet (Video 2). There was no significant compromise of valvular function, as only trivial to mild mitral and tricuspid regurgitation was identified (Video 4).

The differential diagnosis at this point included pyogenic pericarditis, perimyocarditis, and mural endocarditis. Aortic root abscess was also considered, but there was no evidence of conduction system disease and no compromise of aortic valvular function, so this diagnosis was deemed less likely. The echocardiographic finding of a left atrial mass independent of the mitral valve leaflets, without significant compromise of valvular function, particularly increased our suspicion for mural endocarditis, though it was unlikely an isolated process.

The patient had symptomatic resolution on intravenous antibiotic therapy during his hospital stay, which was transitioned to ceftriaxone at discharge with a plan for completion of at least a 4-week course per infectious disease recommendations. Antiretroviral therapy was also restarted. Transesophageal echocardiography was repeated 5 weeks after the initial study and after completion of the intravenous antibiotic course, revealing near resolution of the left atrial echo density (Figure 3, Video 5) and complete resolution of the pericardial effusion (Figure 4). The right atrial echo density had not significantly changed in appearance (Figure 5, Video 6). Blood cultures were repeated twice and showed no growth. The patient was lost to follow-up after repeat transesophageal echocardiography, so further evaluation could not be performed, nor could we follow the patient to assess for further improvement.

DISCUSSION

Mural endocarditis is a rare manifestation of intracardiac bacterial or fungal infection that involves the nonvalvular endocardium and may involve any cardiac chamber. Generally, infective endocarditis occurs when pathogens adhere to damaged endothelial surfaces, which are highly thrombogenic and activate procoagulant reactions, resulting in fibrin and platelet deposition and forming a nidus for microorganism adhesion and accumulation during bacteremia. 1,6 The resulting plaque formation serves to promote the development of vegetations during transient bacteremia. Primary mural endocarditis is a rare form of intracardiac infection, which more commonly occurs as an extension of infected mural thrombi, contaminated prosthetic materials, intracardiac tumors, or infected cardiovascular implantable electronic devices. In primary mural endocarditis, endothelial damage may be caused by high-velocity, eccentric regurgitant intracardiac jets produced by atrioventricular valve disease that are directed toward the wall of a cardiac chamber. 1,7,8

The most common causative organisms of mural endocarditis include S aureus, Streptococcus species, Candida species, and Aspergillus species. 2 Our patient is the second reported case of primary mural endocarditis caused by group A beta hemolytic Streptococcus bacteremia; the first, reported in 1992, involved a 3-year-old girl with a history of developmental delay and Chiari malformation. 9 S pyogenes is an overall rare cause of infective endocarditis in any age group. 4,5
Although the primary source of infection in our patient is unclear, we suspect that his preceding upper respiratory tract infection was a predisposing factor, further complicated by his immunocompromised status. Potential complications of mural endocarditis include peripheral embolization, abscess and fistula formation, papillary muscle or chordae compromise, and cardiac perforation. 1 Thus, a high index of suspicion for mural endocarditis, with early diagnosis, is necessary.

Given the rarity of mural endocarditis and the unusual areas of vegetation involvement, diagnosis is challenging. Echocardiography is the principal imaging technique for diagnosis of valvular endocarditis and has been the primary modality used to diagnose mural endocarditis in the majority of reported cases. 10 The overall diagnostic accuracy of echocardiography in the diagnosis of mural endocarditis specifically is unclear. Nonetheless, the transesophageal approach alone achieves sensitivity rates for detection of valvular vegetations between 90% and 100%, 11 so nondiagnostic transthoracic echocardiography does not preclude the diagnosis of any form of endocarditis. Transesophageal echocardiography resulted in the diagnosis of mural endocarditis in our patient.

Guidelines for treatment of valvular endocarditis suggest early surgical intervention when clinically appropriate, but a paucity of data pertaining to mural endocarditis limits guidance toward recommended treatment strategies. Previous cases have reported failure of antimicrobial therapy alone in resolution of mural endocarditis. 2 In our case, however, the patient was successfully treated using conservative methods given no evidence of serious complications, and there was clinical improvement and resolution of vegetations with antimicrobial therapy. Annular thickening was presumably related to endocardial or myocardial inflammation in this case and remained unchanged; there was no prior cardiac imaging for comparison and no clear indication for pathologic assessment to confirm this. Patients with large mural vegetations, myocardial abscess formation, and structural heart disease resulting in a propensity toward infection (e.g., aneurysm with mural thrombosis) have had subsequent clinical deterioration despite long-term antibiotic therapy. In these patients, surgical vegetectomy or aneurysmectomy has been pursued. 1

CONCLUSION

Primary mural endocarditis is rarely reported but presents similarly to infective valvular endocarditis, with complications that are potentially as severe. Even rarer is primary mural endocarditis caused by S pyogenes bacteremia. Diagnosis and management have not been clearly defined, with treatment recommendations suggested on a case-by-case basis. In our case, the patient had clinical and imaging resolution of the left atrial mural vegetation with long-term antibiotic therapy alone, although more complex cases may necessitate surgical intervention. Echocardiography is recommended to confirm the diagnosis where there is a high index of suspicion.

Figure 5: Midesophageal view on transesophageal echocardiography with a focus on the right heart, showing no significant change in the appearance of the tricuspid annulus (white arrows) at the time of diagnosis (left) and after completion of antimicrobial therapy (right). Although this finding suggested a possible alternative underlying process, the patient was lost to follow-up before further investigation could be pursued.
The tricuspid valve leaflets are seen independent of the thickened annulus (black arrow). Varying quality between studies is likely responsible for the increased brightness of the tricuspid annulus on posttreatment transesophageal echocardiography.
Smooth or singular solutions to the Navier–Stokes system?

The existence of singular solutions of the incompressible Navier–Stokes system with singular external forces, the existence of regular solutions for more regular forces, the asymptotic stability of small solutions (including stationary ones), and a pointwise loss of smoothness for solutions are proved in the same function space of pseudomeasure type.

Introduction

So far, only two ways of attacking the Cauchy problem for the Navier–Stokes equations are known: the first is due to J. Leray [27], and the second is due to T. Kato [18]. Neither of them can be considered the "golden rule" for solving the Navier–Stokes equations, because they both leave open the following celebrated question: in three dimensions, does the velocity field of a fluid flow that starts smooth remain smooth and unique for all time?

The concept of "weak" solutions, introduced by J. Leray in 1933, permits the study of functions in much larger classes than the classical spaces used to describe the motion of a fluid. It is easier to prove the existence of a solution (regular or singular) in a larger class, but such a solution may not be unique. Based on a priori energy estimates, Leray's theory gives the existence of global weak, possibly irregular and possibly nonunique solutions to the Navier–Stokes equations. On the other hand, a completely different theory introduced by T. Kato in 1984, based on semigroup techniques and the fixed point scheme, gives the existence of a global unique regular "mild" solution, under the restrictive assumption of small initial data. A second restriction is given by the fact that Kato's algorithm does not provide a framework for studying a priori singular solutions. In fact, in order to overcome the difficulty (and sometimes the impossibility) of proving the continuity of the bilinear estimate in the so-called critical spaces, Kato's algorithm makes clever use of a combination of two estimates in two different norms, the natural one and a regularizing norm. As such, Kato's approach imposes a priori a regularization effect on the solutions we look for. In other words, they are considered as fluctuations around the solution of the heat equation with the same initial data. For people who believe in blow up and singularities, this a priori condition coming from the "two norms approach" is indeed very strong. However, there exist two exceptions, more exactly two critical spaces where Kato's method applies with just one norm: the Lorentz space L^{3,∞} (considered independently by M. Yamazaki [33] and by Y. Meyer [29]) and the pseudomeasure space of Y. Le Jan and A.S. Sznitman [25], [7]. Here we will not go into the technical details arising from these critical spaces, and we refer the reader to the recent surveys contained in [4] and in [26].

In this paper we will show how the approach with only one norm gives existence and uniqueness of a (small) solution in a larger space which, in our case, contains genuinely singular solutions that are not smoothed out by the action of the associated nonlinear semigroup. More exactly, in the case of the pseudomeasure space we can prove the following results:

- the existence of singular solutions associated to singular (e.g. the Dirac delta) external forces, thus allowing us to describe the solutions considered by L.D. Landau in [23] and by G. Tian and Z. Xin in [32];
- the existence of regular solutions for more regular external forces;
- the asymptotic stability of small solutions, including stationary ones;
- a pointwise loss of smoothness for solutions.

The study of the Navier–Stokes equations written in terms of the vorticity and with measures as initial data started in the 80s in a series of papers by G. Benfatto, R. Esposito, M. Pulvirenti [1], G.-H. Cottet and J. Soler [9,10], and Y. Giga, T. Miyakawa and H. Osada [12,13]. We refer the reader as well to the more recent results obtained by T. Kato in [19] and Y. Giga in [11]. On the other hand, the case of external forces that can be singular atomic measures was studied by H. Kozono and M. Yamazaki [20]. Here we want to provide, among others, results of this kind.

One-point singular solutions

As observed by J. Heywood in [15], in principle "it is easy to construct a singular solution of the NS equations that is driven by a singular force. One simply constructs a solenoidal vector field u that begins smoothly and evolves to develop a singularity, and then defines the force to be the residual." In this section we want to give an explicit example of this mathematical evidence. Our example arises from the physical experiment described by L.D. Landau in [23] (see also [24, Sec. 23]), where an axially symmetric jet discharging from a thin pipe into the unbounded space is studied. Passing to the limit with the diameter of the pipe, this "plunged" jet can be regarded as emerging from a point source (i.e. driven by the delta function). Landau provided a mathematical setting for explaining this phenomenon by using the classical incompressible Navier–Stokes system and deriving an explicit "solution" for it.

To be more precise, let us recall the famous Navier–Stokes equations, describing the evolution of the velocity field u and the pressure field p of a three-dimensional incompressible viscous fluid at time t and position x ∈ IR^3. These equations are given by

\[ u_t - \Delta u + (u\cdot\nabla)u + \nabla p = F, \qquad (2.1) \]
\[ \nabla\cdot u = 0, \qquad (2.2) \]
\[ u(x,0) = u_0(x), \qquad (2.3) \]

where the external force F and the initial velocity u_0 are assigned.

Recently, G. Tian and Z. Xin [32] also found explicit formulas for a one-parameter family of stationary "solutions" of the three-dimensional Navier–Stokes system "with F ≡ 0" which are regular except at a given point. Due to the translation invariance of the Navier–Stokes system, one can assume that the singular point corresponds to the origin. These explicit "solutions" by Tian and Xin agree with those obtained by Landau for special values of the parameter. More exactly, the main theorem from [32] reads as follows. All solutions to system (2.1)–(2.3) (with F ≡ 0), u(x) = (u_1(x), u_2(x), u_3(x)) and p = p(x), which are steady, symmetric about the x_1-axis, homogeneous of degree −1, and regular except at (0, 0, 0), are given by the explicit formulas (2.4), in which c is an arbitrary constant such that |c| > 1.

Remark 2.1 Note that in the formula [32, (2.1)] defining the solution, the factor "2" was missing from the numerator of the fraction, as can be inferred from [32, (2.40)] or [24, (23.16)–(23.19)]. On the other hand, the sign "−" in the formula [24, (23.20)] for the pressure is wrong. ✷

Before commenting on this result, we think it is necessary to clarify the meaning of "solution of the Navier–Stokes equations", for, since the appearance of the pioneering papers of Leray, the word "solution" has been used in a more or less generalized sense, giving origin to many different definitions of "solutions", distinguished only by the class of functions they are supposed to belong to: classical, strong, mild, weak, very weak, uniform weak and local Leray solutions of the Navier–Stokes equations!
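For orientation, here is a hedged reconstruction of the explicit family (2.4), which is not reproduced in the extracted text above. The overall normalisation (in particular the factor 2 discussed in Remark 2.1) is an assumption based on the corrected Landau formulas rather than a quotation of [32], and should be checked against [24,32]:

\[ u_1(x)=\frac{2\left(c|x|^2-2x_1|x|+cx_1^2\right)}{|x|\left(c|x|-x_1\right)^2},\qquad u_k(x)=\frac{2x_k\left(cx_1-|x|\right)}{|x|\left(c|x|-x_1\right)^2},\quad k=2,3, \]
\[ p(x)=\frac{4\left(cx_1-|x|\right)}{|x|\left(c|x|-x_1\right)^2},\qquad |c|>1. \]

In this form each velocity component is homogeneous of degree −1 (and p of degree −2), smooth away from the origin, and axially symmetric about the x_1-axis, as the theorem requires.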
We will not present all the possible (more or less well-known) definitions here and refer the reader to [4] and the references therein. Let us first remark that there is no hope to describe the "solutions" given by equations (2.4) in Leray's theory, because they are not globally of finite energy; in other words, they do not belong to L^2(IR^3). However, they do belong to L^2_loc(IR^3), and this is at least enough to allow us to give a (distributional) meaning to the nonlinear term (u·∇)u = ∇·(u⊗u). Moreover, the "solutions" discovered by Tian and Xin cannot be analyzed by Kato's two norms method either, because they are global but not smooth; more exactly, they are singular at the origin with a singularity of the kind ∼ 1/|x| for all time. We will provide in the following section an ad hoc framework for studying such a singularity within the fixed point scheme and without using the two norms approach. As recalled in the introduction, this can be done in principle either in a Lorentz or in a pseudomeasure space, and they both contain singularities of the type ∼ 1/|x|. However, we will choose the latter space not only because the proofs will be very elementary, but also because this choice will allow us to treat singular (Dirac delta type) external forces, which precisely arise from the Landau and Tian–Xin "solutions".

More exactly, by straightforward calculations, one can check that, indeed, the functions (u_1(x), u_2(x), u_3(x)) and p(x) given by (2.4) satisfy (2.1)–(2.3) with F ≡ 0 in the pointwise sense for every x ∈ IR^3 \ {(0, 0, 0)}. On the other hand, if one treats (u(x), p(x)) as a distributional or generalized solution to (2.1)–(2.3) in the whole of IR^3, they correspond to the very singular external force F = (bδ_0, 0, 0), where the parameter b ≠ 0 depends on c and δ_0 stands for the Dirac delta. Let us state this fact more precisely.

Proposition 2.1 Let u = (u_1, u_2, u_3) and p be defined by (2.4). For every test function ϕ ∈ C_c^∞(IR^3) the equalities (2.5) and (2.6) hold true, with a constant b = b(c) given by an explicit formula (2.7). In particular, the function b = b(c) is decreasing on (−∞, −1) and (1, +∞).

Proof. Equality (2.5) says that the velocity u is weakly divergence-free in IR^3. This can be shown by a standard argument involving integration by parts, since each component of u is homogeneous of degree −1 and thus belongs to W^{1,p}_loc(IR^3) with 1 ≤ p < 3/2, and (∇·u)(x) = 0 for all x ∈ IR^3 \ {0}. Next, due to the singularities of u and p at the origin, we fix ε > 0 and integrate in equations (2.6) for |x| ≥ ε only. Integrating by parts, we obtain a boundary identity (2.8) in which x/ε is the unit vector normal to the sphere {x ∈ IR^3 : |x| = ε}. Obviously, the first term on the right-hand side of (2.8) disappears, and our goal is to compute the limit as ε ց 0 of the second one. For this reason, note first that each term ∇u_k, u_k u, and p is homogeneous of degree −2. Hence, changing variables x = εy in the surface integral over {|x| = ε} in (2.8), and next passing to the limit with ε ց 0, we show by the Lebesgue Dominated Convergence Theorem that it converges toward a surface integral over the unit sphere, denoted (2.9).

To complete this proof, it remains to compute the surface integral in (2.9). First, however, we simplify it a little by using the Euler theorem for homogeneous functions, which in this case gives x·∇u_k = −u_k. Moreover, it follows from the definitions of u_k and p that, for k = 2, 3, the integral in (2.9) vanishes, because u_2 and u_3 are odd functions with respect to x_2 and x_3, respectively, and u·x is even.
In the case k = 1, we use identities valid for |x| = 1 together with polar coordinates to evaluate the remaining surface integral explicitly; here, we skip these long but rather elementary calculations. ✷

Remark 2.2 As we have already emphasized, the stationary solutions defined in (2.4) are singular, with a singularity of the kind O(1/|x|) as |x| → 0. This is the critical singularity in the context of Proposition 2.1, because, as was shown by H.J. Choe and H. Kim [8], every pointwise stationary solution to system (2.1)–(2.2) in B_R \ {0} satisfying u(x) = o(1/|x|) as |x| → 0 is also a solution in the sense of distributions in the whole B_R. Moreover, it is shown in [8] that under the additional assumption u ∈ L^q(B_R) for some q > 3, the stationary solution u(x) is smooth in the whole ball B_R. In other words, if u(x) = o(1/|x|) as |x| → 0 and u ∈ L^q(B_R) for some q > 3, then the singularity at the origin is removable. ✷

Definitions and spaces

We will study global-in-time solutions u = u(x, t) to the Cauchy problem in IR^3 for the incompressible Navier–Stokes equations (2.1)–(2.2). As long as u = u(x, t) is a sufficiently regular function, the equations (2.1)–(2.2) can be rewritten as

\[ u_t - \Delta u + IP\,\nabla\cdot(u\otimes u) = IP\,F, \]

if we recall that the Leray projector IP onto solenoidal vector fields is given by the formula (3.1) below. Finally, let us emphasize that we shall study the problem (2.1)–(2.3) via the following integral equation obtained from the Duhamel principle:

\[ u(t) = S(t)u_0 - \int_0^t S(t-\tau)\,IP\,\nabla\cdot(u\otimes u)(\tau)\,d\tau + \int_0^t S(t-\tau)\,IP\,F(\tau)\,d\tau, \qquad (3.2) \]

where S(t) is the heat semigroup given as the convolution with the Gauss–Weierstrass kernel.

To give a meaning to the Leray projector IP (defined in (3.1)), let us first recall that the Riesz transforms R_j are the pseudodifferential operators defined in the Fourier variables as \widehat{R_k f}(ξ) = (iξ_k/|ξ|)\widehat f(ξ). Here and in what follows, the Fourier transform of an integrable function v is given by \widehat v(ξ) ≡ (2π)^{−n/2} ∫_{IR^n} e^{−ix·ξ} v(x) dx. Using these well-known operators we define

\[ IP = Id + R\otimes R, \qquad\text{i.e.}\qquad (IP\,v)_j = v_j + R_j\sum_{k=1}^{3} R_k v_k; \qquad (3.1) \]

moreover, in our considerations below, we shall often denote by IP(ξ) the symbol of the pseudodifferential operator IP, which is the matrix with components

\[ IP_{j,k}(\xi) = \delta_{j,k} - \frac{\xi_j\xi_k}{|\xi|^2}. \]

All these components are bounded on IR^3 and we put

\[ \kappa \equiv \operatorname*{ess\,sup}_{\xi\in IR^3} |IP(\xi)|. \qquad (3.3) \]

We are now in a position to introduce the Banach functional spaces relevant to our study of solutions of the Cauchy problem for the system (2.1)–(2.3):

\[ PM^a \equiv \Bigl\{ v \in S'(IR^3) : \widehat v \in L^1_{loc}(IR^3),\ \|v\|_{PM^a} \equiv \operatorname*{ess\,sup}_{\xi\in IR^3} |\xi|^a |\widehat v(\xi)| < \infty \Bigr\}, \]

where a ≥ 0 is a given parameter. The notation PM stands for pseudomeasure, and the classical space of pseudomeasures introduced in harmonic analysis (i.e. those distributions whose Fourier transforms are bounded) corresponds to a = 0.

Definition 3.1 By a solution of (2.1)–(2.3) we mean in this paper a function u ∈ C_w([0, ∞), PM^2) satisfying the integral equation (3.2) for all t > 0.

The space PM^2 is chosen because it contains homogeneous functions of degree −1 which are sufficiently regular on the unit sphere. In particular, one can easily check that this is the case for the one-point singular solutions defined in (2.4).

Remark 3.1 For λ > 0 put f_λ(x) ≡ f(λx). In a standard way, we extend this definition to all tempered distributions. It follows from elementary calculations that \widehat{f_λ}(ξ) = λ^{−3}\widehat f(λ^{−1}ξ). Hence, for every λ > 0, we obtain the scaling property of the norm in PM^a:

\[ \|f_\lambda\|_{PM^a} = \lambda^{a-3}\|f\|_{PM^a}. \qquad (3.5) \]

In particular, the norm in PM^2 is invariant under the rescaling f → λf(λ·). Moreover, it follows from (3.5) that for a = 3(1 − 1/p) the norms ‖·‖_{PM^a} and ‖·‖_{L^p(IR^3)} have the same scaling property. ✷

Remark 3.2 C_w denotes, as usual (cf. [3]), the space of vector-valued functions which are weakly continuous as distributions in t. This is an additional difficulty caused by the fact that the heat semigroup (S(t))_{t≥0} is not strongly continuous on the spaces of pseudomeasures but only weakly continuous (cf. Lemma 4.2, below). ✷
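The next remark works with (3.2) written in the Fourier variables; for convenience, here is a sketch of that form, assuming the sign conventions and the normalisation of the Fourier transform fixed in (3.1)–(3.2):

\[ \widehat u(\xi,t) = e^{-t|\xi|^2}\,\widehat u_0(\xi) - \int_0^t e^{-(t-\tau)|\xi|^2}\,IP(\xi)\,i\xi\cdot\widehat{(u\otimes u)}(\xi,\tau)\,d\tau + \int_0^t e^{-(t-\tau)|\xi|^2}\,IP(\xi)\,\widehat F(\xi,\tau)\,d\tau, \]

where, by the convolution theorem, \widehat{(u⊗u)}(ξ, τ) = (2π)^{−3/2} (\widehat u ∗ \widehat u)(ξ, τ).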
Remark 3.3 The integral in (3.2) could be understood as the Bochner integral. However, such a meaning of a solution is not suitable for our construction of solutions of the Cauchy problem and, in particular, of self-similar solutions. Indeed, for stationary solutions u which are homogeneous of degree −1 (given, e.g., by (2.4)), the nonlinear term corresponds to a tempered distribution which is homogeneous of degree −3; hence, there exists a distribution H, homogeneous of degree −3, such that IP∇·(u⊗u) = H. Now, computing the PM^2 norm and using the scaling relation (3.5), we obtain a quantity which is singular at t = 0. On the other hand, the Fourier transform of this quantity equals e^{−t|ξ|^2} IP(ξ)\widehat{(u⊗u)}(ξ), and the singularity at t = 0 does not appear. Hence, the integral with respect to τ in equation (3.2) should be defined in a weak sense, as it was done, e.g., in [33, Def. 2]. For more explanations, we refer the reader to [26], because our spaces PM^a are an example of the shift-invariant Banach spaces of distributions systematically used in that book. ✷

Nevertheless, a distributional solution of system (2.1)–(2.3) is a solution of the integral equation (3.2), with the integral understood in this weak sense (3.4), and vice versa. This equivalence can be proved by a standard reasoning, and we refer the interested reader to [33, Th. 5.2] for details of such computations. To simplify the notation, the quadratic term in (3.2) will be denoted by

\[ B(u,v)(t) \equiv -\int_0^t S(t-\tau)\,IP\,\nabla\cdot(u\otimes v)(\tau)\,d\tau, \qquad (3.6) \]

where u = u(t) and v = v(t) are functions defined on [0, T) with values in a vector space (here most frequently PM^2).

Global-in-time solutions

As in [3], the proof of our basic theorem on the existence, uniqueness and stability of solutions to the problem (2.1)–(2.3) is based on the following abstract lemma, whose slightly more general form is taken from [26].

Lemma 4.1 Let (X, ‖·‖_X) be a Banach space and let B : X × X → X be a bilinear form satisfying ‖B(x_1, x_2)‖_X ≤ η‖x_1‖_X ‖x_2‖_X for all x_1, x_2 ∈ X and some η > 0. Then, if 0 < ε < 1/(4η) and if y ∈ X is such that ‖y‖_X < ε, the equation x = y + B(x, x) has a solution in X such that ‖x‖_X ≤ 2ε. This solution is the only one in the ball of radius 2ε centered at the origin. Moreover, the solution depends continuously on y in the following sense: if ‖ỹ‖_X ≤ ε, x̃ = ỹ + B(x̃, x̃), and ‖x̃‖_X ≤ 2ε, then

\[ \|x-\tilde x\|_X \le \frac{1}{1-4\eta\varepsilon}\,\|y-\tilde y\|_X. \]

Proof. Here, the reasoning is based on the standard Picard iteration technique completed by the Banach fixed point theorem. For other details of the proof, we refer the reader to [26, Th. 13.2]. ✷

Our goal is to apply Lemma 4.1 in the space X = C_w([0, ∞), PM^2) equipped with the norm sup_{t≥0} ‖u(t)‖_{PM^2}. We need some preliminary estimates.

Lemma 4.2 For every u_0 ∈ PM^2, the function S(·)u_0 belongs to C_w([0, ∞), PM^2) and sup_{t≥0} ‖S(t)u_0‖_{PM^2} ≤ ‖u_0‖_{PM^2}.

Proof. By the definition of the norm in PM^2, it follows that |ξ|^2 e^{−t|ξ|^2} |\widehat u_0(ξ)| ≤ ‖u_0‖_{PM^2}. Now, let us prove the weak continuity with respect to t; by the semigroup property of S(t), it suffices to do this for t = 0 only. For every ϕ ∈ S(IR^3), by the Plancherel formula, we obtain ⟨S(t)u_0 − u_0, ϕ⟩ → 0 as t ց 0 by the Lebesgue Dominated Convergence Theorem. ✷

Lemma 4.3 Assume that F ∈ C_w([0, ∞), PM). Then w(t) ≡ ∫_0^t S(t − τ) IP F(τ) dτ belongs to C_w([0, ∞), PM^2), and sup_{t≥0} ‖w(t)‖_{PM^2} ≤ κ sup_{t≥0} ‖F(t)‖_{PM}.

Proof. Similarly as in the proof of Lemma 4.2 we get |ξ|^2 |\widehat w(ξ, t)| ≤ κ sup_{τ≥0} ‖F(τ)‖_{PM} |ξ|^2 ∫_0^t e^{−(t−τ)|ξ|^2} dτ ≤ κ sup_{τ≥0} ‖F(τ)‖_{PM}. Let us skip the proof of the weak continuity of w(t), because the reasoning is more or less standard; similar arguments can be found, e.g., either in [29, Ch. 18, Lemma 24] or in [33, Th. 3.1]. ✷

The goal of the next proposition is to prove that the bilinear form B(·, ·) defined in (3.6) is continuous on the space X = C_w([0, ∞), PM^2). This fact is well known, and the proof appeared for the first time in [25] and [7]. Here, however, we repeat that reasoning because we want to control better all the constants which appear in the estimates below.

Proposition 4.1 There exists a constant η > 0 such that ‖B(u, v)‖_X ≤ η‖u‖_X ‖v‖_X for all u, v ∈ X.

Proof. We do all the calculations in the Fourier variables. Recall that the constant κ is defined in (3.3). Using elementary properties of the Fourier transform we obtain

\[ |\xi|^2\,|\widehat{B(u,v)}(\xi,t)| \le \kappa\,(2\pi)^{-3/2}\,|\xi|^3 \int_0^t e^{-(t-\tau)|\xi|^2}\bigl(|\widehat u|*|\widehat v|\bigr)(\xi,\tau)\,d\tau \le \kappa\,(2\pi)^{-3/2}\pi^3 \sup_{\tau\ge 0}\|u(\tau)\|_{PM^2}\,\sup_{\tau\ge 0}\|v(\tau)\|_{PM^2}. \]

In the computations above, we use the equality |ξ|^{−2} ∗ |ξ|^{−2} = π^3 |ξ|^{−1}; a detailed analysis concerning such convolutions can be found in [28, Th. 5.9]. It remains to show the weak continuity of B(u, v)(t) with respect to t, but this follows again from standard arguments, cf. the remark at the end of the proof of Lemma 4.3. ✷

These estimates, combined with Lemma 4.1, yield the basic existence result.

Theorem 4.1 Assume that u_0 ∈ PM^2 and F ∈ C_w([0, ∞), PM) satisfy ‖u_0‖_{PM^2} + κ sup_{t≥0} ‖F(t)‖_{PM} ≤ ε < 1/(4η). Then there exists a global-in-time solution u ∈ C_w([0, ∞), PM^2) of (2.1)–(2.3) in the sense of Definition 3.1. It is the unique solution satisfying sup_{t≥0} ‖u(t)‖_{PM^2} ≤ 2ε, and it depends continuously on u_0 and F.
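Lemma 4.1 is purely quantitative, and its mechanism can be illustrated numerically. The sketch below iterates the scalar model x = y + ηx², a stand-in for the abstract bilinear equation; all names and values are illustrative and not taken from the paper. It shows convergence to a fixed point of size at most 2ε precisely in the regime ‖y‖ < ε < 1/(4η).

```python
# Scalar caricature of Lemma 4.1: solve x = y + eta*x**2 by Picard iteration.
# Values are illustrative; the lemma itself concerns an abstract Banach space.

def picard_fixed_point(y: float, eta: float, n_iter: int = 100) -> float:
    """Iterate x_{k+1} = y + eta*x_k**2 starting from x_0 = y."""
    x = y
    for _ in range(n_iter):
        x = y + eta * x * x
    return x

eta = 1.0   # norm bound of the bilinear form B
eps = 0.2   # note: eps < 1/(4*eta) = 0.25

x_star = picard_fixed_point(y=eps, eta=eta)

# The limit solves x = y + eta*x^2 and lies in the ball of radius 2*eps,
# in line with the conclusion of the lemma.
assert abs(x_star - (eps + eta * x_star**2)) < 1e-12
assert x_star <= 2 * eps
print(f"fixed point {x_star:.6f} <= 2*eps = {2 * eps}")
```

For ε above the threshold 1/(4η) the same iteration diverges, which is exactly why the smallness assumption cannot be dropped from the fixed point scheme.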
Assume, for a moment, that F ≡ 0. Homogeneity properties of the problem (2.1)–(2.2) imply that if u solves the Cauchy problem, then the rescaled function u_λ(x, t) = λu(λx, λ^2 t) is also a solution for each λ > 0. Thus, it is natural to consider solutions which satisfy the scaling invariance property u_λ ≡ u for all λ > 0, i.e. forward self-similar solutions. By their very definition, they are global-in-time, and one may expect that they describe the large time behavior of general solutions of (2.1)–(2.3). Indeed, if lim_{λ→∞} λu(λx, λ^2 t) = U(x, t) in an appropriate sense, then t^{1/2} u(xt^{1/2}, t) → U(x, 1) as t → ∞ (take t = 1 and λ = t^{1/2}), and U ≡ U_λ is scale invariant. Hence U is a self-similar solution, and is thus determined by a function of three variables, U(y) ≡ U(y, 1), y = x/t^{1/2} being the Boltzmann substitution. If u_λ ≡ u for all λ > 0, then u has the self-similar form

\[ u(x,t) = t^{-1/2}\,U\bigl(x/t^{1/2}\bigr), \qquad (4.2) \]

and the initial condition (2.3), understood as lim_{tց0} u(x, t), is a distribution homogeneous of degree −1. Of course, the one-point singular solutions defined in (2.4) are self-similar solutions which are time independent.

Self-similar solutions can be obtained directly from Theorem 4.1 by taking u_0 homogeneous of degree −1 with small PM^2 norm. By the uniqueness property of solutions of the Cauchy problem constructed in Theorem 4.1, they have the form (4.2). The same reasoning can be applied to the case when external forces are present. Indeed, if the initial datum u_0 is homogeneous of degree −1 and if the external force F(x, t) satisfies λ^3 F(λx, λ^2 t) = F(x, t) for all λ > 0 (here, the scaling is understood in the distributional sense), the solution obtained in Theorem 4.1 is self-similar. Note that, in particular, we can take F = (bδ_0, 0, 0) (a multiple of the Dirac delta) for sufficiently small |b|. In other words, the existence of the solutions introduced by Tian and Xin and described in the previous section can be ensured by the fixed point method for large values of the parameter c (this is possible because of the particular expression of the function b(c) in (2.7)). We will clarify this fact in Section 6. Proceeding in this way, we arrive at the existence of self-similar solutions under the above scaling assumptions on the data.

Remark 4.1 As in [3,4], the self-similar solutions that arise from this construction are instantaneously smoothed out for t > 0, and the only singularity (of the type ∼ 1/|x|) can be found at t = 0. We will remark on this important point in Section 7. ✷

Remark 4.2 Alternatively, the integral equation (3.2) can be solved in a subspace of X formed by self-similar functions, as was done in [3], [29]. ✷

Remark 4.3 The existence and stability results from this section are closely related to those from the paper by Yamazaki [33], where he studied the Navier–Stokes system in the weak L^p-spaces in an exterior domain Ω. In those considerations, Yamazaki applied the Kato algorithm in the space C_w([0, ∞), L^{3,∞}(Ω)) without a priori assumptions on the decay of solutions. Our approach involving the PM^2 space is much more elementary than that of [33]. Moreover, we can treat more singular external forces, and we obtain a kind of asymptotic stability of solutions (see the next section). ✷

Remark 4.4 Solutions to the Navier–Stokes system corresponding to singular external forces can also be obtained from very general results by Kozono and Yamazaki [20], where they use Sobolev-type spaces based on homogeneous Morrey spaces. Their proof of the existence of stationary solutions relies on the inverse function theorem and subtle estimates of the Stokes operator.
Next, they investigate properties of a perturbation of the Stokes operator and they show resolvent estimates in the Morrey spaces needed in the proof of stability of stationary solutions. Here, our space PM^2 is much smaller than those from [20]. Our approach, however, besides its simplicity, does not require separate reasoning for stationary solutions and unsteady ones. Moreover, we believe that such an elementary idea will allow us to understand better the properties of large solutions (see Section 8). ✷

Asymptotic stability of solutions

We begin with an auxiliary decay estimate for the integral terms in (3.2).

Proof. It follows from the definition of the norm ‖·‖_{PM^2} that the quantity to be estimated splits into the parts containing ∫_0^{t/2} ... dτ and ∫_{t/2}^{t} ... dτ. Using the substitution ξ = w√(t − τ), we first obtain a bound for the part with ∫_0^{t/2} ... dτ whose right-hand side tends to 0 as t → ∞ by the Lebesgue Dominated Convergence Theorem. We estimate the term containing the integral ∫_{t/2}^{t} ... dτ in the most direct way, and the claimed decay holds. ✷

Theorem 5.1 Let u and v be the solutions of problem (2.1)–(2.3), constructed in Theorem 4.1, which correspond to the data (u_0, F) and (v_0, G), respectively. Assume that

\[ \lim_{t\to\infty}\|S(t)(u_0-v_0)\|_{PM^2}=0 \quad\text{and}\quad \lim_{t\to\infty}\Bigl\|\int_0^t S(t-\tau)\,IP\,(F(\tau)-G(\tau))\,d\tau\Bigr\|_{PM^2}=0. \qquad (5.1) \]

Then

\[ \lim_{t\to\infty}\|u(t)-v(t)\|_{PM^2}=0. \qquad (5.2) \]

This result means that if the difference of the solutions of the heat equation issued from u_0, v_0 becomes negligible as t → ∞ (e.g., if the difference of the initial data u_0 − v_0 is not too singular) and if F(t) and G(t) have the same large time asymptotics, then the solutions u(t), v(t) of the nonlinear problem behave similarly for large times. It can be interpreted as a kind of asymptotic stability result if the choice of v_0 is restricted to initial data in a neighborhood of u_0 satisfying additionally (5.1). It is easy to verify that the first condition in (5.1) is satisfied if, e.g., |ξ|^2(\widehat u_0(ξ) − \widehat v_0(ξ)) → 0 as ξ → 0.

Proof of Theorem 5.1. First, let us recall that, by Theorem 4.1, we have

\[ \sup_{t\ge 0}\|u(t)\|_{PM^2}\le 2\varepsilon \quad\text{and}\quad \sup_{t\ge 0}\|v(t)\|_{PM^2}\le 2\varepsilon. \qquad (5.3) \]

We subtract the integral equation (3.2) for v from the analogous expression for u. Next, computing the norm ‖·‖_{PM^2} of the resulting equation and repeating the calculations from the proof of Proposition 4.1, we obtain an inequality (5.4) in which a small constant δ > 0 will be chosen later. In the term on the right-hand side of (5.4) containing the integral ∫_0^{δt} ... dτ, we change the variables τ = ts and use a scaling identity for the supremum over ξ ∈ IR^3; this yields a bound (5.5). We deal with the term in (5.4) containing ∫_{δt}^{t} ... dτ by estimating it directly by η times the suprema in (5.3) and the supremum of ‖u(τ) − v(τ)‖_{PM^2} over τ ∈ [δt, t]; this yields (5.6). Hence, applying (5.5) and (5.6) to (5.4), we obtain an inequality (5.7) valid for all t > 0.

Next, we put A ≡ lim sup_{t→∞} ‖u(t) − v(t)‖_{PM^2}. The number A is nonnegative and finite because both u, v ∈ L^∞([0, ∞), PM^2), and our claim is to show that A = 0. Here, we apply the Lebesgue Dominated Convergence Theorem to an obvious pointwise inequality, and we obtain the lim sup bounds (5.9) and (5.10). Finally, computing lim sup_{t→∞} of both sides of inequality (5.8), and using (5.7), (5.9), and (5.10), we arrive at an estimate of the form A ≤ c(δ, ε)A. Consequently, for δ > 0 sufficiently small we have c(δ, ε) < 1, by the assumption of Theorem 4.1 saying that 0 < ε < 1/(4η); hence A = 0. This completes the proof of Theorem 5.1. ✷

As a direct consequence of the proof of Theorem 5.1, we also have necessary conditions for (5.2) to hold. We formulate this fact in the following corollary.

Corollary 5.1 Let u and v be as in Theorem 5.1. If

\[ \lim_{t\to\infty}\|u(t)-v(t)\|_{PM^2}=0, \qquad (5.11) \]

then condition (5.1) holds.

Proof. As in the beginning of the proof of Theorem 5.1, we subtract the integral equation (3.2) for v from the same expression for u, and we compute the PM^2-norm, which yields an identity (5.12). The first term on the right-hand side of (5.12) tends to zero as t → ∞ by (5.11). To show the decay of the second one, it suffices to repeat the calculations from (5.4), (5.5), (5.6), and (5.9). Here, however, one should remember that now it is assumed that A = 0 and that sup_{t>0} ‖u(t)‖_{PM^2} < ∞ and sup_{t>0} ‖v(t)‖_{PM^2} < ∞. ✷

Note here that if U(x, t) = t^{−1/2}U(x/t^{1/2}) is a self-similar solution to system (2.1)–(2.3), its PM^2-norm is constant in time by the scaling relation (3.5). As is well known, U(x, t) corresponds to the initial condition U_0(x) which is homogeneous of degree −1, so S(t)U_0(x) = t^{−1/2}(S(1)U_0)(xt^{−1/2}). Consequently, by the scaling property of the norm, we have ‖S(t)U_0‖_{PM^2} = ‖S(1)U_0‖_{PM^2}; cf. Corollary 5.1 with F = G ≡ 0.
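The closing step in the proof of Theorem 5.1 can be isolated in a single display; a sketch, with the behavior of the constant c(δ, ε) as an assumption consistent with (5.5)–(5.7):

\[ A \equiv \limsup_{t\to\infty}\|u(t)-v(t)\|_{PM^2} \le c(\delta,\varepsilon)\,A, \qquad c(\delta,\varepsilon)\to 4\eta\varepsilon < 1 \ \text{as}\ \delta\searrow 0, \]

so that choosing δ small enough gives (1 − c(δ, ε))A ≤ 0, i.e. A = 0. The whole stability argument thus reduces to the same smallness threshold ε < 1/(4η) that drives the fixed point construction.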
Remark 5.2 In the setting of the L^p-spaces and the homogeneous Besov spaces, the study of the asymptotic stability of self-similar solutions to the Navier–Stokes system began with the paper [30] of F. Planchon (see also the presentation of Planchon's results in [26, Ch. 23.3]). As illustrated in the book by Y. Giga and M.-H. Giga [14], those ideas are quite universal and were used for other partial differential equations (e.g. the porous medium, the nonlinear Schrödinger and the KdV equations); they were applied, for instance, to study asymptotic properties of solutions to a large class of nonlinear parabolic equations [16], as well as of solutions with zero mass to viscous conservation laws [17]. In this section, we extend them to solutions which do not necessarily decay to 0 as t → ∞. ✷

Stationary solutions

Our approach to studying global-in-time solutions to the problem (2.1)–(2.3), as well as their large time behavior, described in the previous sections, can also be applied to stationary solutions. Below, we briefly describe some consequences of Theorems 4.1 and 5.1. The following proposition contains two equivalent integral equations satisfied by stationary solutions.

Proposition 6.1 Assume that u = u(x) ∈ PM^2 and F ∈ PM. The following two facts are equivalent:

1) u is a stationary mild solution of system (2.1)–(2.2) in the sense of Definition 3.1; hence, u is the solution of the integral equation

\[ u = S(t)u + B(u,u)(t) + \int_0^t S(t-\tau)\,IP\,F\,d\tau \qquad (6.1) \]

for every t > 0;

2) u satisfies the integral equation

\[ u = -\int_0^\infty S(\tau)\,IP\,\nabla\cdot(u\otimes u)\,d\tau + \int_0^\infty S(\tau)\,IP\,F\,d\tau, \qquad (6.2) \]

where the integrals above should be understood in the Fourier variables for almost every ξ.

Proof. By Definition 3.1, the integral equation (6.1) can be rewritten in the Fourier variables as an identity (6.3), valid for every t > 0. Passing to the limit as t → ∞ in (6.3) and using the identity ∫_0^∞ e^{−τ|ξ|^2} dτ = |ξ|^{−2}, we obtain equation (6.2) in the Fourier variables. Now, assume that u solves (6.2). Repeating the arguments above in the reverse order, we obtain that u is the solution of the equation (6.4). If we subtract from this equality the same expression multiplied by e^{−t|ξ|^2}, we get (6.3), which obviously is equivalent to (6.1). ✷

Theorem 6.1 Assume that F ∈ PM satisfies ‖F‖_{PM} < ε < 1/(4η). There exists a stationary solution u_∞ to the Navier–Stokes system in the space PM^2 with F as the external force. This is the unique solution satisfying the condition ‖u‖_{PM^2} ≤ 2ε.

Proof. This theorem results immediately from Lemma 4.1 applied to the integral equation (6.2) (or its equivalent version (6.4)). The bilinear form is bounded on the space PM^2, and the proof of this property of B(·, ·) is completely analogous to that of Proposition 4.1. Let us also skip an easy proof that y = ∫_0^∞ S(τ) IP F dτ satisfies ‖y‖_{PM^2} = ‖F‖_{PM}. ✷

Now, the application of Theorem 5.1 gives the following result on the asymptotic stability of stationary solutions.

Corollary 6.1 Assume that u_∞ is the stationary solution constructed in Theorem 6.1 corresponding to the external force F. Suppose that v_0 ∈ PM^2 and G ∈ C_w([0, ∞), PM) satisfy ‖v_0‖_{PM^2} + ‖G‖_{C_w([0,∞),PM)} ≤ ε < 1/(4η) and, moreover, that condition (5.1) holds with u_0 = u_∞ and F(t) ≡ F. Then, the solution v = v(x, t) of system (2.1)–(2.3) corresponding to v_0 and G converges toward the stationary solution u_∞ in the following sense:

\[ \lim_{t\to\infty}\|v(t)-u_\infty\|_{PM^2}=0. \]

Proof. Here, it suffices only to note that stationary solutions belong to the space C_w([0, ∞), PM^2) (treated as constant functions on [0, ∞) with values in PM^2) and satisfy the integral equation (3.2) (see Proposition 6.1). So, Theorem 5.1 is applicable in this case. ✷
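For reference, the stationary fixed-point equation (6.2) has a closed form in the Fourier variables, since ∫_0^∞ e^{−τ|ξ|^2} dτ = |ξ|^{−2}; a sketch, with signs as in (3.2):

\[ \widehat u(\xi) = |\xi|^{-2}\,IP(\xi)\Bigl(\widehat F(\xi) - i\xi\cdot\widehat{(u\otimes u)}(\xi)\Bigr) \qquad\text{for a.e. } \xi\in IR^3, \]

which makes the role of the critical weight |ξ|^2 in the PM^2-norm transparent: the stationary map gains exactly the two powers of |ξ| lost by the nonlinearity and the force.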
Remark 6.1 Results from this section can be extended to solutions which exist for all t ∈ IR (and not only for t ≥ 0), as was done by M. Yamazaki [33]. In this case the corresponding integral equation (the counterpart of (6.1) and (6.2)) has the form

\[ u(t) = -\int_{-\infty}^t S(t-\tau)\,IP\,\nabla\cdot(u\otimes u)(\tau)\,d\tau + \int_{-\infty}^t S(t-\tau)\,IP\,F(\tau)\,d\tau, \]

and, as in [33], by the application of Theorem 4.1, one obtains solutions which are, for example, time periodic or almost periodic with respect to t ∈ IR. In the same manner, Theorem 5.1 allows us to describe solutions which converge in PM^2 as t → ∞ toward a given time periodic (or almost periodic) solution. ✷

Smooth solutions

Solutions of problem (2.1)–(2.3) constructed in the space X = C_w([0, ∞), PM^2) are, in fact, smooth (for sufficiently regular external forces), and they agree with the mild solutions obtained by T. Kato [18] and in [3] for F ≡ 0, and, more generally, with the solutions obtained in [6] when F ≠ 0. The goal of this section is to clarify this remark.

First, let us recall that, in [3], solutions of (2.1)–(2.3) were constructed for sufficiently small initial conditions from the homogeneous Besov space Ḃ^{−1+3/p,∞}_p(IR^3) with 3 < p < ∞. The usual way of defining a norm in this space is based on the dyadic decomposition of tempered distributions. Here, however, as in [3,16], we prefer the equivalent norm whose definition involves the heat semigroup,

\[ \|v\|_{\dot B^{-1+3/p,\infty}_p} \equiv \sup_{t>0} t^{(1-3/p)/2}\,\|S(t)v\|_{L^p(IR^3)}. \]

Connections between PM^2 and homogeneous Besov spaces are described in the following lemma.

Lemma 7.1 For every 3 < p ≤ ∞ there exists a constant C = C(p) > 0 such that

\[ \|S(t)u_0\|_{L^p(IR^3)} \le C\,t^{-(1-3/p)/2}\,\|u_0\|_{PM^2} \]

for all t > 0 and u_0 ∈ PM^2.

Proof. Here, our tool is the Hausdorff–Young inequality. For 1/p + 1/q = 1 we obtain

\[ \|S(t)u_0\|_{L^p} \le C\,\bigl\|e^{-t|\xi|^2}\widehat u_0\bigr\|_{L^q} \le C\,\|u_0\|_{PM^2}\Bigl(\int_{IR^3} e^{-qt|\xi|^2}|\xi|^{-2q}\,d\xi\Bigr)^{1/q} = C\,t^{-(1-3/p)/2}\,\|u_0\|_{PM^2}. \]

In the calculations above, we assume that 2q < 3, which is equivalent to p > 3. Note that this proof requires an obvious modification for p = ∞ and q = 1. One can also recall here the embedding of any "critical space" into the Besov space Ḃ^{−1,∞}_∞(IR^3); see [29,4]. ✷

Now, given u_0 ∈ PM^2 with a sufficiently small PM^2-norm, we may apply the theory described in [3] to get the solution u = u(x, t) which is unique in the natural space corresponding to u_0 as the initial condition and the zero external force. Moreover, this solution is smooth for all t > 0. On the other hand, our Theorem 4.1 gives a solution u = u(x, t) in C_w([0, ∞), PM^2). Both constructions lead, in fact, to the same solution, and we show this by analyzing the parabolic regularization effect in problem (2.1)–(2.3) in the scale of spaces PM^a. We begin with a definition. For a ≥ 2 put

\[ |||v|||_a \equiv \sup_{t>0} t^{(a-2)/2}\,\|v(t)\|_{PM^a}, \qquad Y^a \equiv \{v : |||v|||_2 + |||v|||_a < \infty\}. \qquad (7.1) \]

The space Y^a is normed by the quantity ‖v‖_{Y^a} = |||v|||_2 + |||v|||_a. Of course, Y^2 ≡ X with this definition.

Remark 7.1 The norm |||·|||_a is invariant under the rescaling u_λ(x, t) = λu(λx, λ^2 t) for every λ > 0. This can be easily checked using the scaling property (3.5) of the norm ‖·‖_{PM^a}. ✷

First, we show an improvement of Proposition 4.1.

Proposition 7.1 Let a ≥ 2. There exists a constant η_a > 0 such that for every u ∈ C_w([0, ∞), PM^2) and v ∈ {v(t) ∈ PM^a : |||v|||_a < ∞} we have |||B(u, v)|||_a ≤ η_a |||u|||_2 |||v|||_a.

Proof. First note that, as in the proof of Proposition 4.1, we obtain a pointwise bound on |ξ|^a |\widehat{B(u,v)}(ξ, t)| for every ξ ≠ 0. The proof will be completed by showing that for every a ≥ 2 the resulting quantity is bounded by a constant independent of ξ and t. Here, we decompose the integral with respect to τ into two parts, ∫_0^t ... dτ = ∫_0^{t/2} ... dτ + ∫_{t/2}^{t} ... dτ, and we deal with each term separately. ✷
Next, we show that the heat semigroup regularizes distributions from PM^2.

Lemma 7.2 For every u_0 ∈ PM^2 and t > 0, it follows that S(t)u_0 ∈ PM^a for every a ≥ 2. Moreover, there exists a constant C, depending on the exponent a only, such that

\[ \|S(t)u_0\|_{PM^a} \le C\,t^{-(a-2)/2}\,\|u_0\|_{PM^2}. \]

Proof. Simple estimates (cf. Lemma 4.2) give

\[ |\xi|^a e^{-t|\xi|^2}|\widehat u_0(\xi)| \le \|u_0\|_{PM^2}\,\sup_{\xi}|\xi|^{a-2}e^{-t|\xi|^2} = C\,t^{-(a-2)/2}\,\|u_0\|_{PM^2}. \ ✷ \]

Let us also explain how to handle more regular external forces in the scale of the spaces PM^a.

Lemma 7.3 Let 2 ≤ a < 3. Assume that F(t) ∈ PM^{a−2} for all t > 0 and sup_{t>0} ‖F(t)‖_{PM^{a−2}} < ∞. There exists a constant C such that, for w(t) = ∫_0^t S(t − τ) IP F(τ) dτ, it follows that sup_{t>0} ‖w(t)‖_{PM^a} ≤ C sup_{t>0} ‖F(t)‖_{PM^{a−2}}.

Proof. As in the proof of Lemma 7.2, we obtain a pointwise bound in the Fourier variables; from now on, it suffices to repeat the reasoning which leads to the estimates of the quantity in (7.2). ✷

Let us formulate an interpolation inequality involving the L^q and PM^a norms.

Lemma 7.4 Let 2 ≤ a < 3 and 3 < q < 3/(3 − a). There exists a constant C > 0 such that

\[ \|v\|_{L^q(IR^3)} \le C\,\|v\|_{PM^2}^{\,1-\beta}\,\|v\|_{PM^a}^{\,\beta}, \qquad \beta = \frac{1-3/q}{a-2}. \qquad (7.4) \]

Proof. Assume that v is smooth and rapidly decreasing. Using the Hausdorff–Young inequality (with 1/p + 1/q = 1 and p ∈ [1, 2)) and the definition of the PM^a-norm we obtain

\[ \|v\|_{L^q} \le C\bigl(R^{3/p-2}\,\|v\|_{PM^2} + R^{3/p-a}\,\|v\|_{PM^a}\bigr) \qquad (7.5) \]

for all R > 0 and C independent of v and R. In these calculations, we require 2p < 3, which is equivalent to q > 3; moreover, we have to assume that ap > 3, which leads to the inequality q < 3/(3 − a). Now, we optimize inequality (7.5) with respect to R to get (7.4). ✷

Theorem 7.1 Let u_0 and F be as in Theorem 4.1 with ε > 0 sufficiently small, and assume additionally that F satisfies the hypotheses of Lemma 7.3 for some a ∈ [2, 3). Then the solution u of Theorem 4.1 belongs to Y^a; in particular, u(t) ∈ PM^a for all t > 0.

Corollary 7.1 Under the assumptions of Theorem 7.1, u(·, t) ∈ L^q(IR^3) for every 3 < q < 3/(3 − a) and all t > 0.

Proof. It follows from Theorem 7.1 that the solution u satisfies ‖u(·, t)‖_{PM^a} ≤ C t^{1−a/2} for every a ∈ [2, 3). Hence, to complete the proof of this corollary, it suffices to apply Lemma 7.4. ✷

Let us finally prove that the difference of two (singular) solutions corresponding to the same external force is more regular than each term separately. This fact is in perfect agreement with the regularity result for the bilinear term obtained in [7].

Theorem 7.2 Let u and v be the solutions of (2.1)–(2.3) from Theorem 4.1 corresponding to the data (u_0, F) and (v_0, F), respectively, with ε > 0 sufficiently small. Then u − v ∈ Y^a for every 2 ≤ a < 3; in particular, u(t) − v(t) ∈ L^q(IR^3) for every q as in Lemma 7.4 and all t > 0.

Proof. Here, the reasoning is similar to that presented above, hence we shall be brief in details. First, we subtract the integral equations (3.2) for u and v to obtain an equation for their difference. We denote z(t) = u(t) − v(t) and z_0 = u_0 − v_0, and we find the solution of the equation z = S(·)z_0 + B(u, z) + B(z, v) via the Banach fixed point theorem in the space Y^a defined in (7.1). Here, Lemma 7.2 guarantees that S(·)z_0 ∈ Y^a for every 2 ≤ a < 3. Moreover, Propositions 4.1 and 7.1 allow us to show the contractivity of the mapping z → S(·)z_0 + B(u, z) + B(z, v) for sufficiently small ε > 0 because, by Theorem 4.1, u and v satisfy (5.3). The second part of this theorem is deduced immediately from Lemma 7.4. ✷

Remark 7.2 Given u_0 ∈ PM^2 with sufficiently small norm and F ≡ 0, Theorem 4.1 guarantees the existence of a unique small solution u ∈ C_w([0, ∞), PM^2). Next, our analysis in Corollary 7.1 allows us to show that u(t) ∈ L^q(IR^3) for q > 3 and all t > 0. Hence, standard regularity theorems imply that u(x, t) is a smooth function and satisfies the Navier–Stokes system in the classical sense. Even if it is not written explicitly, the same conclusion can be deduced from Yamazaki's results [33, Th. 1.3], where he showed that his solution belonging initially to C_w([0, ∞), L^{3,∞}(IR^3)) falls, in fact, into L^{p,∞}(IR^3) for every 3 < p < ∞. Now, applying the Marcinkiewicz interpolation theorem to the identity mapping, one obtains immediately that ⋂_{3<p<∞} L^{p,∞}(IR^3) ⊂ L^q(IR^3) for every 3 < q < ∞. ✷

We conclude this section by stressing again that the two norms approach by Kato imposes a priori a regularization effect on the solutions we look for. In other words, they are considered as fluctuations around the solution of the heat equation S(t)u_0. The solutions appear to be unique locally in the space of more regular functions. The approach with only one norm in Theorem 4.1 gives the local uniqueness in the larger space which, in our case, may contain genuinely singular solutions (like those in (2.4)) that are not smoothed out by the action of the nonlinear semigroup associated with (2.1)–(2.3).
Loss of smoothness for large solutions

As far as blow-up for the Navier–Stokes equations is concerned, several possibilities can be conjectured. One may imagine that blow-up of initially regular solutions never happens, or that it becomes more likely as the initial norm increases, or that there is blow-up, but only on a very thin set of measure zero. As we have seen in the previous sections, when using a fixed point approach, existence and uniqueness of global solutions are guaranteed only under restrictive assumptions on the initial data and external forces, which are required to be small in some sense, i.e. in some functional space. In [3] we pointed out that fast oscillations are sufficient to make the fixed point scheme work, even if the norm of the initial data in the corresponding function space is arbitrarily large (in fact, a different auxiliary norm turns out to be small). Here we want to suggest how some particular data, arbitrarily large (and not oscillating), could give rise to irregular solutions. It is extremely unpleasant that we do not know in general whether for arbitrarily large data the corresponding solution is regular or singular. More precisely:

Remark 8.1 Let us consider the Navier–Stokes equations (2.1)–(2.3) with external force F ≡ 0. If one takes as initial data the functions u_ε(x, 0) = εu(x), where u(x) is the (divergence-free, homogeneous of degree −1) function given by (2.4), then for small ε the system has a global regular (self-similar) solution, which is even more regular than a priori expected (Section 7), while for ε = 1 (and possibly for other large values of ε) the system has a singular "solution" for all time. ✷

Unfortunately, this loss of smoothness for large data does not hold in the "distributional" sense but, as explained in Section 2, only "pointwise" for every x ∈ IR^3 \ {(0, 0, 0)}. However, for a model equation of gravitating particles this loss of smoothness for large data does hold in the distributional sense, and it will be dealt with in a forthcoming paper [2].
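The dichotomy in Remark 8.1 rests on the norm invariance (3.5); a minimal sketch of the smallness computation:

\[ \|\varepsilon u\|_{PM^2} = \varepsilon\,\|u\|_{PM^2} < \frac{1}{4\eta} \qquad\text{whenever}\qquad 0 < \varepsilon < \frac{1}{4\eta\,\|u\|_{PM^2}}, \]

so Theorem 4.1 (together with the regularization results of Section 7) applies to the rescaled datum u_ε = εu and produces a smooth global solution, while at ε = 1 the stationary field (2.4) itself remains singular at the origin for all time.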
On an integrable system on $S^2$ with a second integral quartic in the momenta

We consider an integrable system on the sphere $S^2$ with an additional integral of fourth order in the momenta. At special values of the parameters this system coincides with the Kowalevski–Goryachev–Chaplygin system.

Introduction

Let us consider a particle moving on the sphere S^2 = {x ∈ R^3, |x| = a}. The entries of the vector x and of the angular momentum vector J = p × x are coordinates on the phase space T*S^2 with the following Poisson brackets:

\[ \{J_i, J_j\} = \varepsilon_{ijk} J_k, \qquad \{J_i, x_j\} = \varepsilon_{ijk} x_k, \qquad \{x_i, x_j\} = 0, \qquad (1) \]

where ε_{ijk} is the totally skew-symmetric tensor. The Casimir functions of the brackets (1), namely x^2 = a^2 and (x, J) = 0, are in involution with any function on T*S^2. The phase space T*S^2 is a four-dimensional symplectic manifold. So, for the Liouville integrability of the corresponding equations of motion it is enough to find two functionally independent integrals of motion.

In this note we discuss an integrable system on T*S^2 possessing integrals of second and fourth order in the momenta J_k. The corresponding Hamilton function has a natural form, i.e. it is a sum of a positive-definite kinetic energy and a potential. So, according to Maupertuis's principle, this natural integrable system on T*S^2 immediately gives a family of integrable geodesic flows on S^2. The integrals of the geodesic flow are also polynomials of the second and fourth degrees.

Recall that the description of all natural Hamiltonian systems on closed surfaces admitting integrals polynomial in the momenta is a classical problem [1]. For systems with polynomial integrals of degree one or two there exists a complete description and classification [2]. The Kowalevski top is an example of a conservative system on S^2 which possesses an integral of degree four in the momenta [3]. Later, Goryachev [4] and Chaplygin [5] found generalizations of the Kowalevski system on S^2. Recently, these results were extended in [6]. The main aim of this note is to consider another generalization of the Kowalevski–Goryachev–Chaplygin system using the reflection equation theory [7].

Generic case

Following [8], let us consider the Lax matrix T(λ) for the generalized Lagrange system, a 2 × 2 matrix (3) whose entries are polynomial in the spectral parameter λ. Here α is an arbitrary numerical parameter, and f, g, m, n and ℓ are some functions of x_3 and of the single non-trivial Casimir x^2 = a^2. The trace of T(λ) gives rise to integrals of motion in involution for the generalized Lagrange system, and the corresponding equations of motion may be rewritten in the form of a Lax triad. From the algebraic point of view, the coefficients of the trace of T(λ) give rise to a commutative subalgebra in the complete Poisson algebra generated by the entries T_ij(λ). All the generators of this subalgebra are linear polynomials in the coefficients of the entries T_ij(λ), which are interpreted as integrals of motion for the integrable system associated with the matrix T(λ). Some special commutative subalgebras generated by quadratic polynomials in the coefficients of T_ij(λ) were considered in [8]. These subalgebras were associated with five integrable systems on S^2 with an additional integral of motion of third order in the momenta. According to [7,9], we can try to construct another commutative subalgebra generated by quadratic polynomials in the coefficients of T_ij(λ), which are integrals of motion for another integrable system associated with the same matrix T(λ).
Namely, using the matrix T(λ) (3) and the standard machinery of the reflection equation theory [7,9], we can construct another 2 × 2 matrix L(λ) (4), built from T(λ), its transpose, and boundary matrices K_±(λ), whose trace (5) has coefficients F_1, F_2, F_3 quadratic in the entries of T(λ). Here the superscript t stands for matrix transposition, and the matrices K_±(λ) are numerical solutions of the reflection equation associated with the r-matrix of XXX type.

Let us begin with partial numerical solutions (6) of the reflection equation which depend on two arbitrary parameters b_0 and b_1 only. Substituting T(λ) (3) and K_± (6) into L(λ) (4), one gets that the function F_3 in (5) depends on the variables x_3 and J_3 only. So, if we want to consider an integrable system different from the generalized Lagrange system, we have to put F_3 = const. This leads to expressions (7) for the functions f(x_3) and g(x_3), where d is an arbitrary numerical parameter.

Theorem 1 If the functions f(x_3) and g(x_3) are given by (7), then the third coefficient F_3 in (5) is a constant, F_3 = 2b_0^2 d, while the two remaining coefficients F_1 and F_2 are in involution on T*S^2 if and only if the remaining functions in (3) take a special form: α, c_1, c_2 are arbitrary parameters, and all the other functions in (3) are fixed in terms of them. The proof is straightforward.

So, the two functions F_1 and F_2 are in involution, {F_1, F_2} = 0, on the phase space T*S^2. Moreover, a direct calculation yields that they are functionally independent functions on T*S^2. This means that these functions F_1 and F_2 define an integrable system on the sphere. The integrals of motion F_1 and F_2 are quadratic and quartic polynomials in the momenta, respectively. For instance, at α = 0 the corresponding Hamilton function is equal to an explicit sum of a positive-definite kinetic energy and a potential. For brevity we do not present the second integral of motion F_2 explicitly; this function F_2 may be restored from the definitions (3)–(5) and the conditions of Theorem 1.

If we consider more generic solutions of the reflection equation, which depend on four parameters, one gets the same integrals of motion up to a rescaling of x and rotations, where b and φ are the suitable parameters. Up to such transformations, the integrals of motion F_1 and F_2 depend on five numerical parameters: α, b_0, b_1, c_1/c_2 and d. Recall that another two-parametric family of integrable systems on the sphere with a fourth order integral of motion was studied in [6]. However, the ansatz for the Hamilton function proposed in [6] looks more restrictive than the Hamiltonians (8). The relations between these systems will be studied in forthcoming publications. At α = 2 and c_1 = 0 the Hamilton function takes a reduced form; these Hamiltonians define new integrable systems on the sphere, which depend on three arbitrary parameters only.

Summary

Using the Lax matrix for the generalized Lagrange system and the standard construction of commutative subalgebras from the reflection equation theory, we have constructed a new integrable system on the sphere. The corresponding Hamilton function is given by (10) and (8), while the second integral is a fourth order polynomial in the momenta. These integrals depend on five numerical parameters α, b_0, b_1, c_1/c_2 and d, up to canonical transformations. At special values of the parameters we recover the Kowalevski–Goryachev–Chaplygin system.
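As a closing illustration, the bracket relations (1) and the two Casimir functions underlying the whole construction can be verified symbolically. The sketch below uses SymPy; the e(3) bracket convention is taken directly from (1), while the function names are illustrative only.

```python
# Minimal symbolic check of the e(3) Poisson brackets (1) and the Casimirs
# |x|^2 and (x, J).  Only the bracket convention of (1) is assumed.
import sympy as sp

x = sp.symbols('x1:4')
J = sp.symbols('J1:4')
z = list(J) + list(x)  # phase-space coordinates (J1..J3, x1..x3)

def bracket(f, g):
    """Poisson bracket for e(3):
       {J_i,J_j} = eps_{ijk} J_k, {J_i,x_j} = eps_{ijk} x_k, {x_i,x_j} = 0."""
    out = 0
    for i in range(3):
        for j in range(3):
            for k in range(3):
                e = sp.Eijk(i, j, k)  # Levi-Civita symbol
                if e == 0:
                    continue
                out += e * J[k] * sp.diff(f, J[i]) * sp.diff(g, J[j])
                out += e * x[k] * (sp.diff(f, J[i]) * sp.diff(g, x[j])
                                   - sp.diff(g, J[i]) * sp.diff(f, x[j]))
    return sp.simplify(out)

C1 = sum(xi**2 for xi in x)               # |x|^2
C2 = sum(Ji * xi for Ji, xi in zip(J, x))  # (x, J)

# Both Casimirs must commute with every phase-space coordinate.
assert all(bracket(C, v) == 0 for C in (C1, C2) for v in z)
print("Casimirs |x|^2 and (x, J) commute with all coordinates")
```

The same `bracket` helper can, in principle, be used to confirm {F_1, F_2} = 0 for explicit choices of the functions in (3), although the resulting expressions are considerably larger.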
Raised Trappin2/elafin Protein in Cervico-Vaginal Fluid Is a Potential Predictor of Cervical Shortening and Spontaneous Preterm Birth

Early spontaneous preterm birth is associated with inflammation/infection and shortening of the cervix. We hypothesised that cervico-vaginal production of trappin2/elafin (peptidase inhibitor 3) and cathelicidin antimicrobial peptide (cathelicidin), key components of the innate immune system, is altered in women who have a spontaneous preterm birth. The aim was to determine the relationship between cervico-vaginal fluid (CVF) trappin2/elafin and cathelicidin protein concentrations and cervical length in women at risk of spontaneous preterm birth. Trappin2/elafin and cathelicidin were measured using ELISA in longitudinal CVF samples (taken between 13 and 30 weeks' gestation) from 74 asymptomatic high risk women (based on obstetric history) recruited prospectively. Thirty-six women developed a short cervix (<25 mm) by 24 weeks' and 38 women did not. Women who developed a short cervix had 2.71 times higher concentrations of CVF trappin2/elafin from 14 weeks' versus those who did not (CI 1.94–3.79, p<0.0005). CVF trappin2/elafin before 24 weeks' was 1.79 times higher in women who had a spontaneous preterm birth <37 weeks' (CI: 1.05–3.05, p = 0.034). Trappin2/elafin (>200 ng/ml) measured between 14+0 and 14+6 weeks' of pregnancy predicted women who subsequently developed a short cervix (n = 11, ROC area = 1.00, p = 0.008) within 8 weeks. Cathelicidin was not predictive of spontaneous delivery. Vitamin D status did not correlate with CVF antimicrobial peptide concentrations. Raised CVF trappin2/elafin has potential as an early pregnancy test for prediction of cervical shortening and spontaneous preterm birth. This justifies validation in a larger cohort.

Introduction

Preterm birth is a global healthcare problem associated with significant neonatal morbidity and mortality and substantial healthcare costs [1,2]. Spontaneous preterm birth (sPTB) accounts for approximately three quarters of all premature deliveries, and the need for early identification of at-risk women is widely recognised, since this would facilitate management and the instigation of appropriate interventions. Current predictors commonly used in clinical practice to assess the risk of sPTB include cervical length and cervico-vaginal fluid (CVF) fetal fibronectin (fFN), but their use is limited to gestational ages beyond 18 weeks' and their positive predictive power is suboptimal [3]. Earlier and more accurate prediction of risk would be advantageous. A test which is safe, easy to perform and globally acceptable would also have applicability in low- to middle-income countries, where the incidence of prematurity is high [4].

sPTB is closely linked with underlying inflammation and infection, and there has been considerable focus on the potential of inflammatory cytokines as predictive biomarkers [5]. However, few have questioned whether host defence peptides (antimicrobial peptides, AMPs), key components of the innate immune defence system, might be alternative biomarkers for the same purpose [6]. Several families of AMPs (e.g. whey acidic proteins, trappin2/elafin, transferrins and human α and β defensins) have been identified in the female reproductive tract [7–9].
Trappin2/elafin (also known as peptidase inhibitor 3, PI3), a member of the whey acidic protein family, possesses anti-elastase and anti-protease 3 properties and exerts both antimicrobial and immunomodulatory actions at mucosal surfaces [6,10–12]. The PI3 gene produces a spliced protein (117 aa; 12.3 kDa) which is cleaved intracellularly to a mature protein (9.9 kDa, trappin2). This can be secreted and tethered to the extracellular matrix via an exposed cementoin domain. Trappin2 can be further processed via extracellular tryptases to soluble elafin (6 kDa), a smaller molecule which is no longer tethered to the extracellular matrix [10,11]. Trappin2/elafin proteins are usually expressed constitutively at low concentrations within epithelial cell layers, but synthesis can be stimulated by lipopolysaccharide and inflammatory cytokines and downregulated by oestradiol [6,10,11,13]. PI3 mRNA and the associated trappin2/elafin protein have been reported to be increased in the amnion of women delivering preterm with chorioamnionitis compared to those without, but conversely have also been found to be reduced in amnion from women with preterm premature rupture of the membranes (PPROM) [14]. Lower trappin2/elafin CVF concentrations have also been reported in low risk pregnant women presenting with bacterial vaginosis [15]. Less is known about cathelicidin antimicrobial peptide (cathelicidin) in the human reproductive tract, but mRNA and protein have been detected in vaginal epithelium originating from non-pregnant women [16].

Our knowledge of the utility of CVF AMPs to predict sPTB is limited; the presence and gestational profiles of AMPs in CVF and their relation to other immune modulators such as inflammatory cytokines and vitamin D are not well described. This is despite growing evidence that inflammatory mediators modulate the expression of AMPs and the recognition that vitamin D is integral to pathways regulating cathelicidin synthesis and metabolism [11,15,17,18]. The relation between vitamin D and AMPs is of particular interest given reports suggesting a role for vitamin D insufficiency in poor pregnancy outcome [19,20].

Trappin2/elafin has previously been identified as a potentially useful clinical biomarker of breast cancer [21], and of graft versus host disease of the skin following bone marrow transplantation [22]. It follows that a better understanding of the role of trappin2/elafin and cathelicidin in the pathophysiology of spontaneous preterm birth may provide new avenues for the prediction and treatment of women at high risk of spontaneous labour and early delivery. We hypothesised that trappin2/elafin and cathelicidin concentrations in CVF would be altered in women at risk of sPTB. This study, therefore, has investigated the relationships between CVF trappin2/elafin and cathelicidin concentrations and cervical length in a cohort of women at high risk of sPTB (based on obstetric history). The association between serum vitamin D concentration and trappin2/elafin and cathelicidin was also explored.

Ethics statement

Samples for AMP analysis were obtained from a subset of women recruited to a previously reported prospective observational study (the Cervical Length and Inflammatory Changes, CLIC, study) [23] designed to assess the relationship between inflammation and cervical shortening in women at high risk of sPTB (i.e. women with a previous history of late miscarriage or sPTB). This study was approved by the Research Ethics Committee of St Thomas' Hospital, London, UK (06/Q0704/66).
All patients provided written informed consent and self-reported information on ethnicity, which was classified for analysis purposes as 'white, black or other'.

Study population and design

Women were enrolled from two preterm surveillance clinics at two teaching hospitals in London between 13 and 24 weeks' gestation, from June 2006 until November 2008. Women with a history of at least one prior spontaneous preterm birth or late miscarriage between 16 and 34 weeks' gestation were eligible to participate. Exclusion criteria were multiple pregnancy, previous iatrogenic preterm births and inability to give informed consent. Thereafter, recruits were assessed until 30 weeks' gestation, with each providing a CVF sample prior to transvaginal cervical length assessment every two weeks. However, if the cervical length shortened to less than 25 mm before 24 weeks' gestation, women (allocated to the case group) were offered treatment (either cervical cerclage or vaginal progesterone) according to clinical practice. In order to ensure equal numbers for analysis and to explore the impact of treatments on trappin2/elafin concentration, women were assigned to either treatment using computer-generated open-label randomization by the study investigator. Samples and scans were then repeated weekly thereafter in women found to have a short cervix. The use of vaginally administered natural progesterone (Cyclogest, 400 mg once daily; Actavis UK Ltd, Devon, UK) was based on current clinical practice at the time of the study. Women who did not develop a short cervix by 24 weeks' gestation were allocated to the control group. Routine screening for vaginal organisms such as bacterial vaginosis, Trichomonas vaginalis and Candida was not included in the study protocol, but if women presented with symptoms, a high vaginal swab was taken and treatment carried out according to antimicrobial sensitivities.

Cervical length assessment

Cervical length was assessed in accordance with standardised guidelines [23]. In brief, a sagittal view of the cervix was obtained with the long-axis view of the echogenic endocervical mucosa along the length of the canal, allowing identification of both the internal and external os. The linear distance between the external and internal os was recorded 3 times (in mm) over a minimum of 3 minutes using optimal magnification and zoom settings, and the shortest measurement recorded. Transfundal pressure was exerted for 15 seconds and subsequent demonstration of a funnel was noted. The total closed length was measured in all women and, if a cerclage was present, the closed length cranial to the cerclage was also recorded.

Sample preparation and AMP analysis

As reported previously [23], a single Dacron swab was obtained from the posterior vaginal fornix in order to obtain a high vaginal sample of CVF at each visit. During speculum examination, the swab was placed in the posterior vaginal fornix for 10 seconds to achieve saturation, then transferred into 750 μl of standard phosphate-buffered saline solution containing protease inhibitors (Complete, Roche Diagnostics GmbH, Germany) and immediately transported on ice to the laboratory. The swab was removed, placed in a clean tube, vortexed for 10 seconds and centrifuged (2600 g for 10 minutes at 4°C). The resultant fluid was collected and added to the fluid in the original tube. This was mixed and centrifuged for a further 10 minutes to remove cell debris. Cell-free supernatants were divided into aliquots (~110 μl) and stored at -80°C until analysis.
Longitudinal trappin2/elafin CVF concentration analysis was undertaken on 437 individual samples from 74 women. Women from the control group were included for analysis if they had provided at least four longitudinal samples taken over the second trimester. A minimum of six samples were analysed for each woman who developed a short cervix (cases). Remaining samples from n = 64 women were used for cathelicidin analysis. The samples from cases included pre- and post-intervention samples, and the sample obtained at the visit when the cervix was found to be short, prior to randomisation to an intervention. Samples were thawed at room temperature, briefly vortexed and analysed by ELISA (trappin2/elafin, HK318; cathelicidin (LL-37), HK321; Hycult Biotech, Cambridge) according to the manufacturer's instructions. Samples used for trappin2/elafin measurement were diluted in sample buffer (1:10 for control samples and 1:50 or 1:100 for case samples) to ensure positioning within the standard curve. CVF samples for cathelicidin measurement were undiluted. A pooled control sample, achieved by combining a random set of 10 samples, was included in individual plates to control for inter-plate variation. Final concentrations were calculated from the standard curves using logistic regression (Stata version 11.2). For trappin2/elafin, the lower limit of detection of the ELISA was 0.878 ng/ml and the maximal limit of detection was 10 ng/ml.

Serum vitamin D analysis

Vitamin (25-OH)D concentrations were measured in all available serum samples (n = 67) taken at the first study visit using a chemiluminescent microparticle immunoassay (CMIA) for the quantitative determination of 25-hydroxyvitamin D (25-OH vitamin D) in human serum, according to the manufacturer's instructions (ARCHITECT, Abbott Laboratories, Barcelona). For analysis, women were grouped into four categories of vitamin D status: <25 ng/ml; 25-49.9 ng/ml; 50-74.9 ng/ml; and greater than 75 ng/ml.

Statistical analysis

The sample size was not pre-determined, due to inadequate published data informing the gestational profiles of CVF trappin2/elafin or cathelicidin concentrations. Rather, it was determined by the availability of CVF samples. This study and analysis were therefore exploratory in nature. Trappin2/elafin and cathelicidin concentrations were expressed as ng/ml. The study was not designed or powered to directly compare the two treatment groups or the relation between biochemical markers and spontaneous preterm birth, but some exploratory comparisons have been included. Analysis was undertaken using Stata (version 11.2, Stata Corp, College Station, Texas). Distributions of data were first established by examination of distributional plots for raw and transformed values. Log transformations were applied to trappin2/elafin, cathelicidin and cytokine concentrations to achieve approximate Normality. Where sample concentrations were below the limit of assay detection, an interval regression method was used, with the missing values taken as being at an unknown point on the interval between zero and the smallest positive concentration observed [24]. When considering multiple measurements from the same woman, a random effects regression model was used. In order to determine the difference in trappin2/elafin and cathelicidin expression in cases and controls prior to treatment, samples up to and including the visit at which the cervix shortened, but prior to treatment, over a period of time from 13 to 24 weeks' were included. Post-treatment samples were taken up to 30 weeks'. The average difference between cases and controls was determined using a regression model with correction for the effect of gestation (2-weekly categories) and interassay plate variation. Adjustments for body mass index (BMI), maternal age and ethnicity, and current smoking status were considered. Results were expressed as ratios of the concentration, and as weekly rates of change as appropriate, with 95% confidence intervals.
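To make the form of this analysis concrete, the following is a minimal sketch of a random-effects regression on log-transformed repeated measures, written in Python rather than the authors' Stata code; the data, variable names and simulated effect size below are hypothetical and for illustration only, not the study's.

```python
# Minimal sketch (not the study's code): case/control concentration ratio
# from log-transformed repeated CVF measurements, random intercept per woman.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for woman in range(20):
    case = int(woman < 10)                     # half cases, half controls
    woman_effect = rng.normal(0, 0.3)          # between-woman variability
    for visit in range(5):
        log_conc = 4.0 + case * np.log(2.7) + woman_effect + rng.normal(0, 0.5)
        rows.append({"woman": woman, "case": case,
                     "gestation": 14 + 2 * visit, "log_conc": log_conc})
df = pd.DataFrame(rows)

# Random-intercept model on log concentration, adjusted for gestation
fit = smf.mixedlm("log_conc ~ case + gestation", df, groups=df["woman"]).fit()
ratio = np.exp(fit.params["case"])             # back-transform to a ratio
lo, hi = np.exp(fit.conf_int().loc["case"])
print(f"case/control ratio = {ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Exponentiating the fixed-effect coefficient for case status converts the mean difference on the log scale into the concentration ratio reported in the Results.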
Actual p-values are given (usually to 2 decimal places), except for very small values, shown as p<0.001. Spearman's rank correlations (r_s) were used to show general association between markers. Graphs show geometric mean concentrations on a log scale, with standard error bars. Some descriptive data are provided as medians with quartiles. Test performance for prediction of sPTB and a shortening cervix (prior to cervical shortening) was described using receiver operating characteristic (ROC) curves, sensitivity, specificity and predictive values.
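For illustration only, here is a short sketch of how such test performance can be computed for a single-visit concentration threshold; the concentrations and outcome labels are made up, not the study data, although the 200 ng/ml cut-off mirrors the threshold examined in this study.

```python
# Minimal sketch (hypothetical data): ROC area plus sensitivity/specificity
# for a fixed trappin2/elafin cut-off.
import numpy as np
from sklearn.metrics import roc_auc_score

conc = np.array([120, 90, 310, 250, 80, 400, 150, 500])   # ng/ml, made up
short_cervix = np.array([0, 0, 1, 1, 0, 1, 0, 1])          # outcome labels

auc = roc_auc_score(short_cervix, conc)
pred = conc > 200                                           # 200 ng/ml cut-off
sens = (pred & (short_cervix == 1)).sum() / (short_cervix == 1).sum()
spec = (~pred & (short_cervix == 0)).sum() / (short_cervix == 0).sum()
print(f"AUC = {auc:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```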
Results

One hundred and twelve women were enrolled into the CLIC observational study [23]. Thirty-eight controls and 36 cases provided suitable samples for longitudinal analysis of CVF trappin2/elafin. Cathelicidin was measured in a subgroup (n = 34 controls and n = 30 cases) in whom there was sufficient CVF sample available. Table 1 summarises the baseline characteristics of the 74 women studied. The median (quartiles) gestational age at cervical shortening for cases (n = 36) was 19+1.5 weeks (17+3, 21+2). Women in the control group had numerically more previous preterm deliveries between 24 and 34 weeks' gestation and were more likely to be white compared to women destined to develop a short cervix (<25 mm at <24 weeks' gestation), who reported more previous second-trimester miscarriage and were more likely to be black. The incidence of bacterial vaginosis was similar between groups (controls 16% and cases 14%). The sPTB <37 weeks'/late miscarriage rate was 16% for controls (n = 6 of 38 women, with one late miscarriage <24 weeks') and 53% for cases (short cervix group, n = 19 of 36 women, 8 of which were <24 weeks'). The demographics and sPTB rates were similar for the subgroup that provided samples for cathelicidin measurements.

Trappin2/elafin gestational profile

The CVF trappin2/elafin concentration was higher in women who subsequently developed a short cervix compared to high-risk controls with normal cervical length (Figure 1; ratio 2.71, CI 1.94 to 3.79, p<0.0005), and this difference was maintained across the gestational range studied. Trappin2/elafin CVF concentrations were 3-fold higher than controls when cervical shortening was first detected (ratio 3.03, CI 1.92 to 4.81, p<0.0005). The CVF trappin2/elafin concentration in women with a short cervix at the time of randomisation (prior to the start of treatment) showed a non-significant trend towards reduction when measured four weeks later in the cerclage group (ratio 0.57, CI 0.32 to 1.02, p = 0.058). Trappin2/elafin was unaffected following four weeks of vaginal progesterone therapy (ratio 1.05, CI 0.68 to 1.61, p = 0.830). Controls showed little gestation-related change over a similar period. Trappin2/elafin measured before 24 weeks' was also higher in women who delivered following sPTB at <37 weeks' compared to those who delivered at term (ratio 1.79, CI: 1.05 to 3.05, p = 0.034).

The influence of maternal ethnicity, BMI, age and current smoking status as potential determinants of CVF trappin2/elafin concentrations was also investigated, but no associations were found. As a result, these factors were not included in the final regression model.

In women with a short cervix, there was no significant influence of treatment (cerclage or vaginal progesterone) on cathelicidin (p = 0.667); the rate of rise in CVF cathelicidin concentrations across gestation was similar in women receiving progesterone (11.2%; CI: 2.9 to 20.1) and in those allocated to cerclage treatment (13.5%; CI: 7.7 to 19.6). No associations between BMI, age or ethnicity and CVF cathelicidin concentrations were found.

Prediction of a short cervix by 24 weeks' and sPTB using CVF cathelicidin measurements

CVF cathelicidin concentrations were not found to be predictive of a short cervix before 24 weeks' or of sPTB at <37 weeks of pregnancy (data not shown). In contrast, associations were found between cathelicidin and the cytokines measured (Table 3). For example, a doubling of IL-1β concentrations in CVF was associated with an 88% increase in cathelicidin (p<0.001).

Vitamin (25-OH)D serum status in cases and controls

Serum vitamin (25-OH)D concentrations were measured at study entry in 35 women with a short cervix and 32 with a normal cervix (Table 4). Distribution between the four vitamin (25-OH)D categories was similar for both groups. The vitamin (25-OH)D concentration was not predictive of cervical shortening (area under ROC curve 0.52, CI: 0.38-0.67). There was no relationship between serum vitamin D and CVF trappin2/elafin or cathelicidin.

Discussion

Understanding of the relationships between cervico-vaginal host defence peptides and spontaneous preterm birth is limited. The novel demonstration that the CVF trappin2/elafin concentration is raised prior to cervical shortening in women at high risk of sPTB is indicative of an altered innate immune status early in pregnancy in these women. The substantive difference between cases and controls also suggests that measurement of trappin2/elafin may be a useful biomarker for identifying women at high risk of cervical shortening, i.e. those who could benefit from additional surveillance and intervention. In contrast, cathelicidin did not distinguish between high-risk women who were likely to develop a short cervix and those who were not. The associations between cathelicidin and the cytokines measured suggest that cathelicidin is reflective of an inflammatory response following cervical shortening.

The reasons for raised CVF trappin2/elafin were not directly determined in this study. However, there is a wealth of literature describing the different roles trappin2/elafin proteins play in the innate immune response [6,11,25,26-30]. Firstly, trappin2/elafin proteins are well recognised as inhibitors of neutrophil elastase and protease 3, which are produced by activated neutrophils [6]. This is a protective response initiated by inflammatory mediators [11,25], with trappin2/elafin proteins providing a 'brake' to excessive neutrophil activity and preventing damage of host tissues through induction of matrix metalloproteinases, glycoproteins, fibronectin and cadherins [26-28]. Neutrophils can also cause cleavage and release of elafin from trappin2 [29].
Secondly, trappin2/elafin can be induced directly in response to local infection, mediating disruption of bacterial membranes and inhibition of viral replication and/or attachment to epithelial cells [6,30]. Thirdly, trappin2/elafin possesses immunomodulatory properties which generally promote Th1 cytokine production, neutrophil recruitment, NFkB activation in macrophages and dendritic cell activation [31]. Under some circumstances trappin2/elafin will also lead to suppression of cytokines, depending on the stage of the inflammatory process and the disease context [31]. There is also potential for a genetic influence on trappin2/elafin synthesis, as several PI3 polymorphisms have been identified [32,33], but these are associated with a reduction in elafin expression.

This complexity of trappin2/elafin regulation provides a challenge to interpretation of data from the present study. Clearly, pregnant women who develop a short cervix produce more trappin2/elafin than high-risk controls, and this may indeed reflect the ability to mount an enhanced host-defence response to unknown stimuli in the cervico-vaginal environment. Given the association between infection and sPTB, we would postulate that the underlying cause of raised CVF elafin in women with cervical shortening is the presence of a local subclinical bacterial/viral infection. In this scenario, trappin2/elafin would be induced via activation of pathogen recognition receptors (e.g. Toll-like receptors or RNA helicases) and downstream production of inflammatory cytokines, and possibly in response to escalating neutrophil elastase/protease 3 concentration(s). The observation that trappin2/elafin is raised throughout gestation indicates that the stimulus persists during pregnancy, but that the innate response involving trappin2/elafin is not sufficient to inhibit cervical shortening or sPTB. CVF neutrophil elastase concentrations were not measured, but the potential recruitment of neutrophils, and release of neutrophil elastase in this scenario (providing a stimulus to increased CVF trappin2/elafin), provides a compelling explanation as to the mechanism which initiates cervical tissue damage, leading to shortening. This hypothesis requires further investigation, as do alternative explanations, such as that women at risk of sPTB in this cohort may have constitutively raised basal trappin2/elafin production due to genetic predisposition, exposure to other stimuli, or different reproductive tract microbiota profiles.

The lack of a strong relationship between trappin2/elafin and CVF cytokine profiles is intriguing, given that cytokines such as IL-1β can, in the absence of pathogens, induce trappin2/elafin expression in vitro. Cytokines were detectable in CVF from women in the short cervix and control groups in this cohort, but only GM-CSF and MCP-1 [23] related to cervical shortening, and these showed no correlation with trappin2/elafin. Possibly, as a result of the substantially raised trappin2/elafin production, there may be a paradoxical suppression of cytokine production.

Trappin2/elafin has been measured previously in tissues and CVF from non-pregnant and pregnant women [14,15,34,35]. Trappin2/elafin has been detected in the reproductive tract of non-pregnant women [34], and in pregnant women trappin2/elafin was found to be increased in amnion from women delivering preterm with chorioamnionitis [14].

Table 3. Rank correlations between cathelicidin and 11 cytokines measured in matched cervico-vaginal fluid (CVF) samples (n = 167) taken between 13 and 24 weeks of gestation from 46 women.
These data concur with the suggestion that trappin2/elafin rises in response to subclinical infection/inflammation and the processes leading to sPTB. Contrasting reports have suggested that a failure to induce an appropriate trappin2/elafin response in CVF or the amnion is associated with bacterial vaginosis [15] and PPROM [14], respectively. Bastek et al. [35] assessed CVF elafin concentrations in parallel with cervical length and fFN measurements in pregnant women at 20 weeks' and 23+6 weeks' gestation. At these gestations, trappin2/elafin was not useful as a predictor of sPTB. This reinforces our observations that elafin has a much better prediction capacity at 14 weeks', and that this reflects the process of cervical shortening rather than cervical length per se. Comparison of trappin2/elafin concentrations between the two studies is not possible, as concentrations were not provided by Bastek et al. [35], but the collection of CVF differed, and snap-freezing of swabs in protease inhibitor prior to processing and analysis could release intracellular pre-trappin2 protein due to lysis of cellular material collected on the swab. In contrast, in our study swabs were removed from the protease inhibitor collection media and samples were centrifuged to provide a cell-free sample prior to storage at -80°C.

A potential limitation of this study, and others, is the use of a commercial human trappin2/elafin ELISA assay, as it is reported to detect the 12.3 kDa (pre-trappin2) and 9.9 kDa (trappin2) proteins but is unsuitable for detection of the 6 kDa protein (elafin). This would suggest that the raised CVF trappin2/elafin measured in our study reflects an increase in trappin2 specifically. However, using the same antibody, we carried out a pilot study on CVF samples which showed, using western blot, not only bands relating to pre-trappin2 and trappin2, but also a band at 6 kDa, which approximates the protein size of elafin (data not shown). Future studies should attempt to define the trappin2/elafin ratio in the CVF of women who develop a short cervix, as the proteins can have differing biological properties, and excessive neutrophil elastase can reduce tethering of trappin2 to cell membranes and induce proteolytic processing of trappin2 into elafin [29].

Despite sharing similar antimicrobial properties, the CVF cathelicidin profile did not mimic that of trappin2/elafin. The rise in cathelicidin around 18-19 weeks' of pregnancy coincided with the mean gestation at which cervical shortening occurred, prompting administration of vaginal progesterone or a cervical suture. As there was no impact of treatment, this increment appeared to be related to the process of cervical shortening. Indeed, cathelicidin is reported in the literature [17] to play a role in wound healing as well as in the innate immune response. The correlation of cathelicidin with CVF cytokines was surprising, as in vitro cathelicidin is not regulated by cytokines, but it implies that the process of cervical shortening involves both cytokines and cathelicidin. There are reports to suggest that cathelicidin is regulated by vitamin D in vitro, but neither trappin2/elafin nor cathelicidin in the CVF was associated with the women's vitamin D status as assessed by measurement of vitamin (25-OH)D. However, this is not unexpected, as the majority of women were vitamin D deficient, which might obscure any association. It would be of interest to determine whether CVF cathelicidin was induced in a study of vitamin D supplementation.
The clinical importance of this study lies in the identification of CVF trappin2/elafin as a potentially useful tool to screen women for risk of cervical shortening at a gestational age as early as 14 weeks. Women with raised CVF trappin2/elafin could be stratified to receive more intensive surveillance, such as regular transvaginal ultrasound measurement of cervical length and, at later gestation, an fFN test for risk of premature birth. This strategy would optimise clinical resource use and avoid unnecessary surveillance of women unlikely to develop a short cervix on the basis of this new test. Most interventions currently employed to prevent sPTB rely on prior identification of women with a short cervix, generally only identified at later gestations (19-22 weeks'), which gives limited time for effective intervention. Early pregnancy recognition of those most likely to develop a short cervix would provide the opportunity for progesterone administration or elective cerclage at an earlier gestation than currently practised. Use of trappin2/elafin as an earlier test of risk of sPTB could similarly identify women suitable for inclusion in clinical trials assessing prophylactic interventions to prevent premature labour. These observations require prospective validation in other cohorts; we have a study underway to validate trappin2/elafin as a predictor of cervical shortening and sPTB. In parallel, this will enable assessment of CVF and cervical epithelial cell trappin2/elafin concentrations, as well as relationships with neutrophil activity and the CVF microbiome profile.
CFTR

Some studies of CFTR imply that channel activation can be explained by an increase in open probability (Po), whereas others suggest that activation involves an increase in the number of CFTR channels (N) in the plasma membrane. Using two-electrode voltage clamp, we tested for changes in N associated with activation of CFTR in Xenopus oocytes using a cysteine-substituted construct (R334C CFTR) that can be modified by externally applied, impermeant thiol reagents like [2-(trimethylammonium)ethyl] methanethiosulfonate bromide (MTSET⁺). Covalent modification of R334C CFTR with MTSET⁺ doubled the conductance and changed the I-V relation from inward rectifying to linear and was completely reversed by 2-mercaptoethanol (2-ME). Thus, labeled and unlabeled channels could be differentiated by noting the percent decrease in conductance brought about by exposure to 2-ME. When oocytes were briefly (20 s) exposed to MTSET⁺ before CFTR activation, the subsequently activated conductance was characteristic of labeled R334C CFTR, indicating that the entire pool of CFTR channels activated by cAMP was accessible to MTSET⁺. The addition of unlabeled, newly synthesized channels to the plasma membrane could be monitored on-line during the time when the rate of addition was most rapid after cRNA injection. The addition of new channels could be detected as early as 5 h after cRNA injection, occurred with a half time of ~24-48 h, and was disrupted by exposing oocytes to Brefeldin A, whereas activation of R334C CFTR by cAMP occurred with a half time of tens of minutes and did not appear to involve the addition of new channels to the plasma membrane. These findings demonstrate that in Xenopus oocytes, the major mechanism of CFTR activation by cAMP is by means of an increase in the open probability of CFTR channels.

INTRODUCTION

CFTR is the product of the gene that is mutated in cystic fibrosis, the most common, fatal genetic disorder in the Caucasian population. Mutations in the CFTR gene result in altered function in multiple organs including the lung, the pancreas, the intestine, the liver, the reproductive organs, and the sweat glands and ducts (Quinton, 1999). CFTR functions as an anion-selective channel, and activation requires phosphorylation of the R-domain by PKA and the hydrolysis of ATP at the two nucleotide binding folds, NBF1 and NBF2 (Anderson et al., 1991; Cheng et al., 1991; Rich et al., 1991; Anderson and Welsh, 1992; Winter et al., 1994; Sheppard and Welsh, 1999). cAMP-induced, CFTR-mediated Cl⁻ currents have been observed not only in native tissues, but also in cells transfected with CFTR cDNA and in Xenopus oocytes injected with cRNA encoding CFTR (Drumm et al., 1991; Kartner et al., 1991; Tabcharani et al., 1991; Anderson et al., 1992; Sood et al., 1992), but the mechanism for the activation of Cl⁻ channels by cAMP remains controversial. Some studies suggest that activation can be attributed to an increase in the open probabilities of CFTR channels resident in the plasma membrane (Denning et al., 1992; Dho et al., 1993; Prince et al., 1993; Santos and Reenstra, 1994; Hug et al., 1997; Loffing et al., 1998; Moyer et al., 1998), whereas others suggest that cAMP can induce the insertion of CFTR channels into the plasma membrane from a submembranous compartment via vesicle fusion (Schwiebert et al., 1994; Howard et al., 1996; Tousson et al., 1996; Lehrich et al., 1998; Howard et al., 2000).
In particular, recent studies using Xenopus oocytes report increases in membrane capacitance and antibody labeling that were interpreted as being indicative of cAMP-dependent, exocytotic delivery of CFTR to the plasma membrane (Takahashi et al., 1996; Peters et al., 1999; Weber et al., 1999). We have used engineered cysteines to identify residues that lie within the anion-conducting pore of CFTR (see Smith et al., 2001, in this issue). The targeted amino acid residues were replaced with cysteine, which in turn could be modified with highly polar, membrane-impermeant derivatives of methanethiosulfonate (MTS) reagents. One of these cysteine-substituted constructs (R334C) was readily modified by MTS reagents in the external bath, and covalent modification gave rise to changes in anion conduction that could be easily detected in the macroscopic I-V plots recorded in Xenopus oocytes, permitting us to distinguish modified from unmodified channels. R334C CFTR, in conjunction with the membrane-impermeant thiol reagent MTSET⁺ (Holmgren et al., 1996), offered an opportunity to test directly the hypothesis that activation of Cl⁻ conductance in oocytes expressing CFTR is accompanied by an increase in channel number in the plasma membrane. We found that exposure of oocytes expressing R334C CFTR to MTSET⁺ for 20 s before activation resulted in labeling of the entire membrane pool of functional channels, suggesting that channels activated by cAMP are resident in the membrane before activation and that activation of Cl⁻ conductance, therefore, is due largely to an increase in the open probability (P_o) of CFTR channels.

MATERIALS AND METHODS

Mutagenesis

The CFTR mutants were generated using the QuickChange site-directed mutagenesis kit from Stratagene. A dsDNA pBluescript vector with a CFTR insert and a pair of synthetic oligonucleotide primers containing the desired mutation were used in this procedure. The primers, each complementary to one of the two opposite strands of the vector, extend during the temperature cycling by means of Pfu DNA polymerase to generate a hybrid plasmid containing one mutated DNA strand and one wild-type parental strand. The final product is treated with DpnI, a restriction enzyme that selectively digests the methylated parental DNA strand. The newly synthesized DNA containing the desired mutation is not methylated; therefore, it is not susceptible to DpnI digestion. After Escherichia coli transformation, the colonies contain only the mutated plasmids. The sequences at the mutation region and in the whole PCR-generated region were confirmed by direct DNA sequencing.

In Vitro Transcription

The CFTR cRNAs for Xenopus oocyte injection were synthesized using the in vitro transcription kit mMessage mMachine (Ambion, Inc.). The T7 RNA polymerase was used because the insert in the Bluescript CFTR clone is downstream from the T7 promoter. The Bluescript CFTR cDNA templates were prepared by digestion with XhoI, a restriction enzyme leaving a 5′ protruding end, to avoid the problem of nonspecific transcription associated with 3′ overhanging ends. The XhoI site is located downstream from the CFTR insert; therefore, the run-off transcripts generated are of a defined size, with the same sequence as that of the mRNA. The transcription products were purified, and the quality and quantity of the transcripts were assessed on an agarose gel.

Electrophysiological Recordings

Electrophysiological recording methods were similar to those described by Mansoura et al. (1998).
Briefly, individual oocytes were placed in the recording chamber and continuously perfused with frog Ringer's solution unless noted. The Ringer's solution contained the following (in mM): 98 NaCl, 2 KCl, 1 MgCl2, 1.8 CaCl2, 2.5 HEPES-Na, and 2.5 HEPES-H. The volume of the perfusion chamber used in the current study was ~100 μl, and the flow rate to the chamber was ~67 μl/s (4 ml/min). We estimated that the complete mixing time should be <20 s. The room temperature was between 21 and 24°C. A two-electrode voltage-clamp system (Dagan Corp.) and the pClamp data acquisition program (Axon Instruments, Inc.) were used for data acquisition. Oocytes were normally kept under open-circuit conditions in experimental chambers. At the time of interest, the membrane potential was ramped from -120 to +60 mV over a period of 1.8 s to construct the whole-cell I-V plots.

Reagents

The CFTR Cl⁻ channels were activated using a cocktail containing the phosphodiesterase inhibitor isobutylmethylxanthine (IBMX; RBI or Sigma-Aldrich) and the adenylate cyclase activator forskolin, or the β-adrenergic agonist isoproterenol (Sigma-Aldrich). A coinjected β-adrenergic receptor was used on occasion as an alternative means to activate adenylate cyclase. Although the receptor has 10 native cysteines, three of which reside in the second extracellular loop and five in the transmembrane domain, Javitch et al. (1997) showed that MTSEA⁺ had no effect on the binding of agonist or antagonist to the wild-type β2-adrenergic receptor expressed in HEK293 cells. The majority of the experiments for this study were done using 10 μM isoproterenol and 1 mM IBMX as the stimulating cocktail (Isop + IBMX), but all the results were confirmed using 10 μM forskolin and 1 mM IBMX.

Data Analysis

The data were analyzed using an analysis program developed in our laboratory and are presented in the form of I-V plots. The conductance reported here was calculated from the slope of the I-V plot at the reversal potential. Data are reported as mean ± SEM.
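To make this calculation concrete, the following is a minimal sketch (in Python, under assumed units; it is not the laboratory's actual analysis program) of estimating the slope conductance at the reversal potential from a ramp I-V record:

```python
# Minimal sketch (assumed analysis, not the authors' program): estimate the
# slope conductance at the reversal potential from a voltage-ramp I-V record.
import numpy as np

def slope_conductance(v_mV, i_uA, window_mV=10.0):
    """Return (E_rev in mV, g in uS) from an I-V ramp.

    E_rev is taken as the zero-current crossing (assumes a monotonic I-V);
    g is the slope of a linear fit to points within +/- window_mV of E_rev.
    """
    order = np.argsort(v_mV)
    v, i = np.asarray(v_mV)[order], np.asarray(i_uA)[order]
    e_rev = np.interp(0.0, i, v)               # zero-current crossing
    mask = np.abs(v - e_rev) <= window_mV
    g, _ = np.polyfit(v[mask], i[mask], 1)     # slope in uA/mV (= mS)
    return e_rev, g * 1000.0                   # convert mS to uS

# Example trace: a linear 50-uS conductance reversing at -30 mV
v = np.linspace(-120, 60, 200)                 # ramp, mV
i = 50e-3 * (v - (-30.0))                      # uA, since 50 uS = 0.05 uA/mV
print(slope_conductance(v, i))                 # ~(-30.0, 50.0)
```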
RESULTS

Modification of R334C CFTR by MTSET⁺ Is Stable but Reversible

The effects of MTSET⁺ and MTSES⁻ modification on the conductance of R334C CFTR are documented in the companion paper (see Smith et al., 2001, in this issue) and are briefly summarized in Fig. 1. Expression of R334C CFTR in Xenopus oocytes gives rise to a cAMP-activated Cl⁻ conductance characterized by modest inward rectification, which is distinct from that seen with expression of wt CFTR, which is characterized by modest outward rectification. Brief exposure of oocytes expressing R334C CFTR to MTSET⁺ results in an approximate doubling of the conductance and a change in the shape of the I-V plot to one that is linear. In contrast, application of MTSES⁻ attenuates the conductance by ~50% and enhances the inward rectification. Recordings from excised patches presented in the companion paper (see Smith et al., 2001, in this issue) also demonstrated that MTSET⁺ modification increased the single-channel conductance of R334C CFTR. The effect of MTSET⁺ modification was not spontaneously reversible, but was readily reversed by a reducing reagent such as 2-ME (see Fig. 3). These observations indicated that MTSET⁺-modified and -unmodified channels could be distinguished by their functional characteristics.

To test the stability of MTSET⁺ labeling, we performed a group of experiments in which the whole-cell conductance was monitored for up to 5 h after a 20-s exposure of oocytes to 100 μM or 1 mM MTSET⁺. The conductance (at E_rev) was first obtained 30-40 min after the initial exposure of the oocytes to stimulatory cocktail, when the activation of the Cl⁻ conductance had attained a steady state (control). Each oocyte was then exposed to MTSET⁺ for 20 s. The MTSET⁺-containing stimulatory cocktail was then replaced with stimulatory cocktail lacking the thiol reagent, and the oocytes were continuously perfused for up to 5 h. The conductances measured at 2 min, 2 h, and 5 h after exposure of oocytes to MTSET⁺ were normalized to the steady-state conductance before exposure to MTSET⁺. The values displayed in Fig. 2 demonstrate that the MTSET⁺-induced conductance increase was identical at 2 min, 2 h, and 5 h after exposure to the reagent. The change in the shape of the I-V relation (unpublished data) was also characteristic of R334C CFTR, as described in the accompanying paper (see Smith et al., 2001, in this issue) and Fig. 1, and did not change with time.

Figure 1. (A) Oocytes were continuously perfused with a cocktail containing 10 μM isoproterenol and 1 mM IBMX (control). An ~5-min exposure to 1 mM MTSET⁺ induced an approximate doubling of the conductance and a change in the shape of the I-V plot. (B) I-V plots obtained at steady-state activation (control) and after an ~5-min exposure to 1 mM MTSES⁻ that attenuated the conductance by ~50% and enhanced inward rectification.

Figure 2. Modification of R334C CFTR by MTSET⁺ was stable for at least 5 h. The conductance (g_Cl at E_rev) was first obtained 30-40 min after the initial exposure of the oocytes to stimulatory cocktail, when the activation of the Cl⁻ conductance had attained a steady state (control). Each oocyte was exposed to 100 μM or 1 mM MTSET⁺ for 20 s. The MTSET⁺-containing stimulatory cocktail was then replaced with stimulatory cocktail lacking the thiol reagent, and the oocytes were continuously perfused for up to 5 h. The conductances measured at 2 min, 2 h, and 5 h after exposure of oocytes to MTSET⁺ were normalized to the steady-state conductance before exposure to MTSET⁺.

The Entire Membrane Pool of R334C CFTR Activated by cAMP Is Accessible to MTSET⁺ before Activation

The level of CFTR expression used in these studies was such that, before activation by stimulatory cocktail, the conductance of the oocyte membrane was similar to that seen in an oocyte that was not expressing R334C CFTR. The activated CFTR conductance was generally 50-100-fold greater than the background conductance, so that before activation the product of the number of channels and the open probability (N·P_o) for the CFTR channels was of the order of one hundredth of that seen in the active state, due to a very low value of P_o, a low value of N, or some combination of the two. Therefore, we refer to this condition as the "inactive state," despite the fact that it could represent a large population of membrane-localized channels, each of which exhibits a very low, but nonzero, probability of opening.

Fig. 3 contains plots of the conductance (at E_rev) versus time obtained from a group of experiments designed to determine if the R334C CFTR channel can be modified by externally applied, impermeant thiol reagents in the active as well as the inactive state. As seen in Fig. 3 A, after the channels were activated by stimulatory cocktail containing 10 μM isoproterenol and 1 mM IBMX, exposure of the oocyte to 1 mM 2-ME had no effect on the conductance (g_Cl) before MTSET⁺ modification.
Subsequently, a 5-min exposure to 100 μM MTSET⁺ caused about a doubling in g_Cl, and the effect was reversed by 2-ME. The efficacy of MTSET⁺ modification of R334C CFTR did not depend on the state of activation of CFTR (Fig. 3 B). Before activation, an oocyte was exposed to 100 μM MTSET⁺ for about 5 min, and then to stimulatory cocktail. Afterwards, 1 mM 2-ME decreased g_Cl to ~50% of the maximum value. A second exposure of the oocyte to MTSET⁺ and 2-ME induced the same response, suggesting that pre- and post-activation exposure to MTSET⁺ labeled the same population of R334C CFTR channels.

Figure 3. The entire membrane pool of R334C CFTR channels that were activated by cAMP was labeled with MTSET⁺ before the activation. Records of g_Cl versus time obtained from R334C CFTR-expressing oocytes that were obtained from the same frog and assayed on the same day. Oocytes were always perfused with frog Ringer's unless noted (see Materials and Methods). After a control period, they were perfused with stimulatory cocktail containing 10 μM isoproterenol and 1 mM IBMX (Isop + IBMX). (A) A record of g_Cl versus time showing that at steady-state activation, MTSET⁺ caused about a doubling of g_Cl. (B) A record of g_Cl versus time showing that the activated g_Cl at the steady state of an oocyte preexposed to MTSET⁺ was much higher than the g_Cl in the unmodified condition seen in A, and 2-ME reduced g_Cl to ~50% of the maximum g_Cl. (C) A record of g_Cl versus time showing that exposure to MTSES⁻ after prelabeling with MTSET⁺ had no effect on g_Cl before and after activation.

Figure 4. MTSET⁺ labeling did not affect the activation and inactivation process of R334C CFTR. A record of the conductance measured at the reversal potential throughout an experiment. After activation, an oocyte was exposed to 100 μM MTSET⁺, and then was inactivated without removing MTSET⁺. The oocyte was then reactivated, showing a g_Cl similar to the g_Cl before inactivation. The g_Cl was then reduced to ~50% by 2-ME.

Figure 5. The entire membrane pool of R334C CFTR channels that were activated by cAMP was labeled with a 20-s exposure to MTSET⁺ before the activation. (A) A record of the conductance measured at the reversal potential throughout an experiment. (B) Exposure to 1 mM MTSET⁺ or 1 mM MTSES⁻ had no significant effects on the background conductance before activation of CFTR (1 and 2). (C) Modification by 1 mM MTSET⁺ for 20 s before activation prevented any further modification by MTSES⁻ or MTSET⁺ during or after activation (3-5), and 2-ME reversed the effect of MTSET⁺ (6). (D) Further modification by MTSET⁺ was possible after 2-ME treatment and was also reversed by 2-ME (7 and 8). The apparent lower values of E_rev (~17 mV) resulted from a shift in the tip potential of the recording electrode.

To determine if labeling of the entire pool of R334C CFTR channels could be attributed to nonreacted MTSET⁺ that might remain after washing and, thus, be present during channel activation, the channels were prelabeled with MTSET⁺ and then activated in the presence of MTSES⁻, as illustrated in Fig. 3 C. If unlabeled channels were appearing at the surface during activation, then the relative abundance of MTSES⁻ over that of MTSET⁺ (>1,000:1) would render it much more likely that negative charges would be added to the channel, and g_Cl would consequently be reduced.
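To make the competition argument explicit, assuming (for illustration; the source does not state this) comparable reaction rate constants for the two reagents, the probability that a given cysteine reacts with residual MTSET⁺ rather than MTSES⁻ scales with the concentration ratio:

$$P_{\mathrm{MTSET^+}} \approx \frac{[\mathrm{MTSET^+}]}{[\mathrm{MTSET^+}] + [\mathrm{MTSES^-}]} < \frac{1}{1 + 1000} \approx 0.1\%$$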
However, the result was identical to that obtained in the absence of MTSES⁻, which is consistent with the notion that all of the channels in the activatable pool were labeled during the exposure to MTSET⁺ in the inactive state.

To determine if labeling with MTSET⁺ altered the process of R334C CFTR activation, we first labeled the channels with 100 μM MTSET⁺ after activation, then inactivated the labeled channels, and then reactivated them at a later time (Fig. 4). Fig. 4 is an example of four similar experiments. It can be seen that the labeled channels were completely inactivated by ~40 min after the removal of stimulatory cocktail. Reactivation of R334C CFTR increased g_Cl to a level similar to that seen after the first modification, and 2-ME decreased g_Cl by ~50%. The effects were reproduced by the second exposure to MTSET⁺ and 2-ME, indicating that MTSET⁺ did not interfere with the activation or inactivation of R334C CFTR.

If the channels underwent rapid recycling, a 5-min exposure of oocytes to MTSET⁺ might be long enough to label channels that were in a submembranous pool but surfaced during this time, so we performed another group of experiments, similar to that described in Fig. 3 C, in which the MTSET⁺ exposure time was reduced to 20 s. Fig. 5 contains the result of one of eight similar experiments conducted from day 4 to day 6 after RNA injection, in which g_Cl was measured throughout a single experiment. Also shown are I-V plots corresponding to specific points of interest in Fig. 5 (B-D). The oocyte was exposed to 1 mM MTSET⁺ for 20 s in the inactive state, and was immediately perfused with a second thiol reagent, MTSES⁻ (1 mM), for 5 min. Neither reagent affected the background conductance (Fig. 5 B, 1 and 2). In the continuous presence of MTSES⁻, the oocyte was then exposed to stimulatory cocktail. The shape of the I-V plot obtained at steady-state activation in the presence of MTSES⁻ was characteristic of MTSET⁺-modified R334C CFTR (Fig. 5 C, 3), indicating that prelabeling with MTSET⁺ for 20 s prevented modification by MTSES⁻ before and during activation. Furthermore, subsequent exposure to MTSET⁺ had no marked effect on the conductance or the shape of the I-V plot (Fig. 5 C, 5). Exposure of the oocyte to the reducing reagent (2-ME) decreased the conductance to about half the stimulated value and changed the shape of the I-V plot to the inward rectification typical of unmodified R334C (Fig. 5 C, 6), indicating that the positively charged TEA group that was added in the inactive state was readily removed. After 2-ME treatment, a second exposure of the oocyte to MTSET⁺ produced results similar to those described above (Fig. 5 D, 7). The results indicate that the modification of R334C CFTR by MTSET⁺ was complete, regardless of whether exposure to the reagent took place in the active or inactive state, confirming that the entire pool of CFTR channels was accessible to MTSET⁺ before activation.

The Time Course of Addition of R334C CFTR Channels to the Plasma Membrane after cRNA Injection

After cRNA injection, new CFTR channels must be synthesized and inserted in the oocyte plasma membrane. To study the time course of the insertion of new channels, we recorded the level of whole-cell conductance in single oocytes for five consecutive days after cRNA injection.

Figure 6. Time-dependent increase in the conductance of R334C CFTR in the first 5 d after cRNA injection.
Oocyte conductance in both inactive and cAMP-activated states was measured on days 1, 2, 3, 4, and 5 (24, 48, 72, 96, and 120 h) after cRNA injection, and the values were normalized to the conductance on day 2.

Each day the channels were maximally activated using stimulatory cocktail and were subsequently inactivated within two hours by perfusing with frog Ringer's solution. The oocytes were then returned to the incubator overnight and subjected to the same experimental manipulation the next day. Not all of the oocytes survived the entire experimental period. Those that survived for at least two consecutive days were used in the analysis. Thus, the conductance was normalized to the conductance measured on day 2 (48 h after RNA injection). Fig. 6 is a summary of the conductance of R334C CFTR in its inactive and active states during the first 5 d after cRNA injection. It can be seen that the rate of addition of new channels was most rapid between 24 and 48 h, and the total conductance leveled off after day 3. A small cAMP-induced conductance (2.82 ± 0.27 μS, N = 4), greater than the background conductance (0.72 ± 0.30 μS), was observed 5-7 h after RNA injection.
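The half time of this rise can be summarized by fitting a simple saturating exponential. The sketch below (Python, with made-up normalized conductances standing in for the Fig. 6 data, so the numbers are illustrative rather than the measured values) recovers a half time on the order of a day:

```python
# Minimal sketch (illustrative values, not the measured data): fit the growth
# of whole-cell conductance after cRNA injection to a saturating exponential.
import numpy as np
from scipy.optimize import curve_fit

def rise(t_h, g_max, tau_h):
    return g_max * (1.0 - np.exp(-t_h / tau_h))

t = np.array([24.0, 48.0, 72.0, 96.0, 120.0])    # hours after injection
g = np.array([0.60, 1.00, 1.20, 1.28, 1.30])     # conductance, day 2 = 1.0
(g_max, tau_h), _ = curve_fit(rise, t, g, p0=(1.3, 30.0))
print(f"half time ~ {np.log(2) * tau_h:.0f} h")  # on the order of 24-48 h
```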
The Addition of New Channels via the Biosynthetic Pathway Can Be Monitored by Labeling Plasma Membrane Channels with MTSET⁺

If indeed the addition of channels is most rapid between 24 and 48 h after cRNA injection, it should be possible to monitor the addition of new channels to the membrane by labeling the channels on the surface with MTSET⁺ and assaying the labeled fraction of the surface pool as a function of time. Shown in Fig. 7 A is one of four similar experiments performed on day 1 (24 h after cRNA injection). Exposure of an oocyte to 100 μM MTSET⁺ for 5 min after activation roughly doubled the conductance, as expected from previous experiments. After removal of stimulatory cocktail, the conductance returned to its background level within 2 h. Exposure of the oocyte to stimulating cocktail 4.5 h later induced a steady-state conductance ~40% higher than that measured 4.5 h earlier. Subsequent exposure of the oocyte to 2-ME reduced the conductance. The decrease in the conductance (23.13 μS) was essentially the same as the original MTSET⁺-induced conductance increase (21.17 μS), as expected if the number of labeled channels remained constant during this period and the entire increase in the total conductance was due to the addition of new (unlabeled) channels to the plasma membrane via the biosynthetic pathway. A second exposure to MTSET⁺ doubled the conductance as expected, indicating that both populations (old and new channels) were modifiable by the thiol reagent. However, when the same experiment was conducted on day 5 (Fig. 7 B, showing one of six similar experiments), MTSET⁺ had no additional effect when applied after the second activation, indicating that no new channels were added during the 4.5-h period. In some oocytes, such as the one shown, we observed run-down of the conductance with time in experiments lasting several hours.

BFA Reduced the Fractional Conductance Contributed by Newly Added Channels

If the increase in conductance seen over a 4.5-h period on day 1 reflects the addition of new channels to the membrane, then blocking the trafficking of the new channels should prevent the increase in the conductance. Brefeldin A (BFA) has been shown to inhibit protein secretion at an early step in the secretory pathway by inhibiting membrane traffic from the ER to the Golgi and enhancing the movement of Golgi membrane into the ER, the result being retention of proteins in the ER compartment (Klausner et al., 1992). Shown in Fig. 8 are two experiments done on day 1 and day 5, respectively, in which oocytes were exposed to BFA before activation and continuously exposed to BFA throughout the entire experiment. On day 1, BFA prevented the increase in conductance seen after 4.5 h in the absence of the drug, and there was no evidence of the appearance of unlabeled channels. Although the fractional decrease in the conductance induced by 2-ME (Fig. 8 A, time point 2) appeared to be somewhat smaller than the fractional increase in conductance induced by the first MTSET⁺ exposure (Fig. 8 A, time point 1), this was due to an increase in background conductance over the period. On day 5, no increase in conductance was seen in the presence of BFA (Fig. 8 B), as expected from the result obtained in the absence of the drug.

Figure 8. BFA prevented the addition of new channels on day 1, and had no effect on the conductance on day 5. Records of the conductance measured at the reversal potential throughout experiments. Oocytes were exposed to BFA throughout the entire experimental period. No increase in the conductance with time was observed after 4.5 h on either day 1 (A) or day 5 (B).

This sort of long-term experiment (>8 h) was often difficult to perform because the membrane of some oocytes became leaky after a few hours of perfusion, but the results obtained from oocytes that survived (two from day 1 and three from days 5-6) indicated that BFA prevented the addition of new channels on day 1, and that it had no marked effect on the function of channels resident in the plasma membrane. The two oocytes shown in Fig. 7 that were never exposed to BFA appeared to have a slower rate of deactivation than the two oocytes exposed to BFA shown in Fig. 8, but a comparison of 15 control and 5 BFA-treated oocytes disclosed that the time required for the activated conductance to return to the background level varied from 50 to 200 min in both groups. Because oocytes often did not survive prolonged periods of recording in the experimental chamber, we characterized the effects of BFA on CFTR expression in a separate group of oocytes by adding BFA to the incubating solution (MBSH) beginning a few hours after cRNA injection and assaying the cAMP-activated Cl⁻ conductance after either 24 or 48 h of continuous exposure to BFA (Fig. 9). The oocytes used in this group of experiments were obtained from the same frog and were injected with R334C CFTR RNA at the same time. In the absence of BFA treatment, the cAMP-elevated Cl⁻ conductance was well above the background conductance (g_bkg) on day 1 and continued to increase on day 2. However, in oocytes treated with BFA, the cAMP-activated conductance was greatly reduced, as expected if BFA attenuated the delivery of CFTR to the plasma membrane.

DISCUSSION

In CFTR-expressing Oocytes cAMP Increases Cl⁻ Conductance by Increasing the Open Probability of Channels

The results presented here are consistent with the notion that the entire activatable pool of R334C CFTR in the Xenopus oocyte is accessible to MTSET⁺ during a 20-s exposure to a perfusion solution containing the reagent.
The simplest interpretation of this result is that the entire activatable pool of CFTR channels resides in the plasma membrane, and that the process of activation is due solely to an increase in the P_o of membrane-resident channels. This interpretation is consistent with previous functional studies of CFTR channels in which single-channel records were obtained from detached patches exposed to the catalytic subunits of PKA and ATP on the cytoplasmic side (Tabcharani et al., 1991; Baukrowitz et al., 1994; Gadsby et al., 1994; Hwang et al., 1994; Zeltwanger et al., 1999). Although fewer in number, there are also several studies in which cell-attached recording provided evidence of increases in P_o associated with activation of CFTR (Fischer and Machen, 1994). The hypothesis that CFTR activation involves an increase in P_o is also supported by studies showing that mutations in the R domain or the nucleotide binding folds alter the relation between P_o and stimulating conditions (Carson et al., 1995; Wilkinson et al., 1996, 1997).

However, there are two alternative interpretations of the MTSET⁺ labeling results that lead to quite different conclusions about the mechanism of activation of CFTR-mediated Cl⁻ conductance. First, if the thiol reagent used in these studies (MTSET⁺) can enter the oocyte, it is possible that exposing the cell to the reagent could lead to the labeling of CFTR proteins located in submembranous vesicles that might subsequently appear on the surface after activation. Second, if there is a pool of CFTR protein that is subject to rapid recycling between the plasma membrane and a submembranous pool of vesicles then, in principle, the entire pool could be labeled despite the fact that the label was only present in the extracellular bath for a short period of time.

Figure 9. Long-term BFA treatment prevented the addition of new channels. Some oocytes were kept in MBSH solution as usual (control). Some oocytes were transferred to MBSH solution containing 5 μM BFA a few hours after cRNA injection and remained in the same solution thereafter (BFA-treated). The background conductance (g_bkg) and the cAMP-elevated Cl⁻ conductance (g_Cl) were assayed at 24 and 48 h after cRNA injection.

MTSET⁺ Is Impermeant

The studies of Holmgren et al. (1996) and Yang et al. (1996) suggested that MTSET⁺ is not likely to cross the plasma membrane and label R334C CFTR protein that might exist in subplasma membrane vesicles. Holmgren et al. (1996) tested the membrane permeability to MTS reagents using three different membrane systems: liposomes, HEK293 cells, and Xenopus oocytes. They reported that MTSET⁺ and MTSES⁻ were not permeant in liposomes at pH 7.4, whereas MTSEA⁺ (aminoethyl methanethiosulfonate) was. In HEK293 cells transfected with the Shaker K⁺ channel, they tested the accessibility of a cysteine residue (391C), located in the intracellular loop between the transmembrane segments S4 and S5 of the Shaker K⁺ channel construct, to cis- and trans-application of MTS reagents in excised (inside-out and outside-out) patches. The authors showed that the 391C mutant could only be modified by MTSET⁺ when applied from the intracellular side, whereas MTSEA⁺ modified the channel when applied from either side. In Xenopus oocytes expressing the 391C Shaker K⁺ channel, recording of macroscopic currents indicated that extracellular MTSET⁺ did not modify the channel, whereas 2 mM extracellular application of MTSEA⁺ produced the typical response.
We have confirmed this observation in oocytes (unpublished data). The results of a study of the voltage-gated sodium channel by Yang et al. (1996) were also consistent with the notion that MTSET⁺ cannot reach cysteines by crossing the plasma membrane. The S4 segment of domain 4 of the voltage-gated sodium channel is thought to move outwardly upon depolarization. The authors showed that two basic residues in S4, referred to as R1 and R3, when substituted with cysteine, were accessible to extracellular MTSET⁺ (which altered the inactivation kinetics) only when the cell was depolarized. Under the same conditions, however, R3C was inaccessible to MTSET⁺ applied on the cytoplasmic side, but it became accessible immediately after repolarization. This result indicated that MTSET⁺ did not cross the membrane and modify reactive cysteines.

Recycling due to Constitutive Endocytosis and Exocytosis Is Not Likely to Be Rapid Enough to Account for the Labeling of the Entire Pool of CFTR Channels in 20 s

In three mammalian cell lines, CFTR was reported to undergo rapid endocytosis when incubated at 37°C (Prince et al., 1994; Howard et al., 1996; Lukacs et al., 1997). These authors used similar approaches in which they first labeled the cell-surface CFTR at 4°C, and then warmed the cells to 37°C for various times to allow for endocytosis. The rate of endocytosis was determined by measuring the time-dependent, fractional internalization of CFTR or the amount of CFTR that remained on the surface. Prince et al. (1994) reported that in T84 cells, 50% of the surface CFTR was internalized in ~1 min. They also reported that after ~7.5 min of incubation at 37°C, the internalized CFTR started to return to the surface. Lukacs et al. (1997) reported that in CHO cells, ~15% of biotinylated cell-surface CFTR was internalized in 3 min. Howard et al. (1996) reported that in HeLa cells expressing a construct bearing an epitope tag in the fourth extracellular loop (M2-901 CFTR), most of the M2 antibody-bound CFTR was internalized within 2 min. If CFTR endocytosis in Xenopus oocytes proceeds at the highest rate reported for mammalian cells, then, in 20 s, the amount of recycled CFTR would be only ~17% of the membrane channel pool.
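That estimate follows from the fastest reported rate (~50% of surface CFTR internalized per minute, in T84 cells) if, over this short interval, internalization is assumed to scale linearly with time:

$$f_{20\,\mathrm{s}} \approx 0.50\ \mathrm{min}^{-1} \times \frac{20\,\mathrm{s}}{60\,\mathrm{s/min}} \approx 0.17$$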
In the case of the human low density lipoprotein receptor, however, not only can the protein be synthesized, glycosylated, and transported to the cell surface in Xenopus oocytes, but it also undergoes rapid internalization similar to that observed in mammalian cells (Peacock et al., 1988). The observations of Peacock et al. (1988) suggest that the signals for glycosylation and endocytosis of the low density lipoprotein receptor are similar in Xenopus oocytes and mammalian cells. Although the results reported here provide no evidence for a stimulation-dependent increase in the number of CFTR channels in the plasma membrane of oocytes, we cannot exclude a small contribution to activation via this mechanism. In any single experiment, we assume that we can measure the conductance with an accuracy of at least 1 µS. If we assume the unitary conductance of the CFTR channel expressed in oocytes to be ∼5 pS and P_o to be 0.5, then a conductance of 100 µS represents ∼40 million channels/oocyte, and 1 µS would represent ∼400,000 channels (see the worked relation below). It must also be emphasized that all of the results reported here were obtained using a CFTR mutant (R334C), so it is possible that this mutation suppresses a stimulation-dependent trafficking mechanism that is more prominent in the wild-type and M2-901 CFTR (see Mechanism of CFTR Activation). However, like that of other membrane proteins that undergo clathrin-mediated endocytosis, the internalization signal contained in the cytoplasmic tail of wt CFTR (which was retained in R334C CFTR) was found to be sufficient for promoting endocytosis (Prince et al., 1999). It also seems unlikely that labeling of R334C CFTR with thiol reagents would cause the apparent low rate of recycling, given that biotinylated CFTR is efficiently endocytosed in mammalian cells (Prince et al., 1994; Lukacs et al., 1997).

Blocking the Insertion of Channels Did Not Affect Activation

After labeling with MTSET+, the appearance of unlabeled channels on the plasma membrane was observed during the first few days after the injection of R334C cRNA, when the insertion of new channels via the biosynthetic pathway was expected. We inferred that the unlabeled channels were likely to be newly synthesized channels that were inserted into the membrane via a vesicle trafficking mechanism, because their appearance on the membrane was prevented by BFA treatment. On the other hand, blocking channel insertion with BFA did not alter activation or inactivation of surface CFTR. The results presented here indicate that the insertion of new CFTR channels into the plasma membrane via the biosynthetic pathway occurred in the absence of an agonist-dependent increase in cAMP. The endogenous cAMP level in Xenopus oocytes is expected to be much lower than that elicited by the stimulatory cocktail. For example, Smith et al. (1987) reported that exposure of oocytes with intact follicular enclosure to forskolin or isoproterenol increased the intracellular cAMP concentration by 15-fold. Therefore, if the delivery of CFTR channels to the plasma membrane via the biosynthetic pathway is cAMP-dependent, the cAMP level required is much less than that required for channel activation.

Mechanism of CFTR Activation

The results of several recent studies using either oocytes (Takahashi et al., 1996; Peters et al., 1999) or mammalian cells (Howard et al., 1996, 2000) have been interpreted as indicating that cAMP-dependent activation of CFTR Cl− conductance is due, in large part, to an increase in the number of CFTR channels (N) in the plasma membrane.
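Both candidate mechanisms enter through the same macroscopic conductance relation, which is also the basis of the channel-count estimate given above; as a worked sketch under the stated assumptions (unitary conductance γ ≈ 5 pS, P_o ≈ 0.5):

\[
g_{\mathrm{Cl}} = N \gamma P_o
\quad\Longrightarrow\quad
N = \frac{g_{\mathrm{Cl}}}{\gamma P_o}
= \frac{100\times10^{-6}\ \mathrm{S}}{\left(5\times10^{-12}\ \mathrm{S}\right)(0.5)}
= 4\times10^{7}\ \mathrm{channels},
\]

so the stated 1-µS measurement resolution corresponds to ∼4 × 10^5 channels, roughly 1% of the activatable pool in a strongly expressing oocyte.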
Although mechanisms of activation involving increases in N or increases in P_o are not mutually exclusive, the results presented here are consistent with the hypothesis that activation of CFTR conductance occurs largely, if not solely, by an increase in the P_o of CFTR channels. The most direct comparisons of the present results are with those of Takahashi et al. (1996) and Peters et al. (1999), who reported that the activation of either wt CFTR or M2-901 CFTR expressed in Xenopus oocytes was associated with an increase in membrane capacitance, C_m. The increase in C_m was correlated with an increase in Cl− conductance (g_Cl) and also with an increase in membrane area determined via electron microscopy and digital morphometry. However, Weber et al. (1999) used a frequency domain assay for C_m to study the effects of cAMP on g_Cl and C_m in oocytes expressing wild-type CFTR and obtained different results. They observed a correlation between increases in g_Cl and C_m, but found that in oocytes exposed to inhibitors of PKA (KT5720 or H8), cAMP increased g_Cl without affecting C_m. A similar dissociation of changes in g_Cl and C_m was seen in oocytes injected with the Ca2+ chelator BAPTA. These results strongly suggest that, although increases in oocyte cAMP levels may lead to an increase in C_m, there is no obligatory relation between cAMP-induced increases in g_Cl and the increases in C_m. Hug et al. (1997) used a two-frequency, lock-in amplifier method to assay C_m in CHO cells expressing wt CFTR and found no change in C_m despite a sevenfold increase in g_Cl. Furthermore, Chen et al. (2001) recently demonstrated that large increases in membrane conductance can produce measurement errors that lead to apparent changes in C_m when none actually occur, suggesting that any correlation between increases in conductance and capacitance must be evaluated carefully. These investigators proposed that neither cAMP nor Ca2+ induced the delivery of CFTR to the plasma membrane of Calu-3 cells. Peters et al. (1999) reported the results of a more direct assay of surface localization of M2-901 CFTR expressed in oocytes. They observed an increase in fluorescence intensity in oocytes treated with a cocktail of forskolin and IBMX, and interpreted the result as a confirmation of exocytotic delivery of CFTR protein to the plasma membrane induced by cAMP. Their data indicated that a near 10-fold increase in CFTR conductance was paralleled by a near sixfold increase in fluorescence intensity, as if the majority of cAMP-activated channels were newly inserted. This amount of channel insertion should be readily detectable by covalent labeling, but it was not evident in the studies reported here. There are several potential problems associated with immunostaining. First, the protocol required overnight incubation of oocytes with the antibody at 4°C, so that it was not possible to determine whether there was any temporal correlation between changes in g_Cl and staining intensity. It is also possible that staining by the M2 antibody may greatly underestimate the level of protein in unstimulated oocytes. In fact, Howard et al. (2000) recently reported that immunoprecipitation of M2-901 CFTR by the M2 antibody was much less efficient than that by a COOH terminus antibody. Thus, the ratio of actual protein expression in the stimulated and unstimulated conditions could be dramatically smaller than that indicated.
It is also possible that M2 antibody recognition of plasma membrane-localized M2-901 CFTR is in some way enhanced if the protein is in the active conformation. There are well-documented examples of conformation-dependent antibody binding to membrane proteins (Vassilev et al., 1988, 1989; Aoki, 1992; Anthony and Azmitia, 1997). Howard et al. (2000) recently suggested that the mode of CFTR activation in cells might be dependent on the level of protein expression, such that at high expression levels an increase in P_o predominates because the exocytotic pathway is "saturated." This hypothesis cannot explain the discrepancy in the oocyte results, however, inasmuch as the level of expression used in the covalent labeling studies was comparable to that used previously for determination of capacitance or antibody staining. A survey of the literature reveals that the question of cAMP-induced delivery of CFTR to the plasma membrane of mammalian cells is controversial and that varying results have been reported, even for the same cell type. For example, in T84 cells, Prince et al. (1993) and Denning et al. (1992), using surface biotinylation and/or antibody staining, reported no change in the amount of CFTR in the apical membrane after cAMP stimulation. Tousson et al. (1996), on the other hand, reported increases in fluorescence intensity toward the apical membranes of T84 cells when they examined changes in antibody staining patterns in cross sections of the cells. The interpretation of the latter studies is likely to be complicated by the difficulty of determining that channels marked by immunostaining or fluorescence actually reside within the plasma membrane. In MDCK cells, Moyer et al. (1998) used surface biotinylation and a green fluorescent protein-CFTR expression vector (green fluorescent protein linked to the NH2 terminus of CFTR) to label CFTR and observed no cAMP-stimulated translocation of CFTR from an intracellular pool to the cell surface. In contrast, Howard et al. (2000) reported in the same cell type that when epitope-tagged, virally expressed CFTR (M2-901 CFTR) was expressed at low levels, forskolin increased immunostaining of the apical membrane. In HeLa cells, Denning et al. (1992) found no change in the amount of CFTR in the apical membrane after cAMP stimulation, whereas Howard et al. (1996) reported that a 10-min treatment with 10 µM forskolin caused about a twofold increase in surface fluorescence intensity in HeLa cells expressing M2-901 CFTR. The covalent labeling technique used in the present study offers a relatively straightforward method for assaying time- and stimulation-dependent delivery of channels to the plasma membrane. It has the advantage of monitoring changes in real time, and it is a direct assay of functional channels that contain a reactive thiol, the modification of which is readily detectable. Snyder (2000) recently used a similar method to reveal cAMP-mediated translocation of the epithelial Na+ channel. In mammalian cells, the role of cAMP-induced plasma membrane delivery of CFTR in the activation of Cl− conductance remains to be tested directly. However, in oocytes, the covalent labeling studies lead us to conclude that most, if not all, of the cAMP-induced increase in g_Cl was the result of an increase in the P_o of membrane-localized CFTR channels.
2019-08-18T00:34:26.339Z
2001-10-01T00:00:00.000
{ "year": 2004, "sha1": "da841dfe1cc9ca6b778c80b8fbed37fe439e74ef", "oa_license": "CCBYNCSA", "oa_url": "http://jgp.rupress.org/content/118/4/407.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ea8383302843ec04ab01877094f03231489e86d7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }