235767262 | pes2o/s2orc | v3-fos-license | Imaging characteristics of the eyes of cinereous vulture (Aegypius monachus): morphology and comparative biometric measurement
The aim of this study is to describe the radiographic, ultrasonographic, and computed tomographic appearance of the normal cinereous vulture's eye and to determine normal biometric values of the intraocular structures. Twenty-six eyes of thirteen healthy cinereous vultures were examined. Under general anesthesia with isoflurane, ultrasonography (US), computed tomography (CT), and skull radiography were performed. Differences between both eyes, as well as between US and CT measurements, were investigated, and correlations of measurements between both eyes as well as between CT and US measurements of the various ocular structures were calculated. Most of the paired data did not show any significant differences between both eyes or between the CT and US measurements, while there were significant differences (P<0.05) between CT and US measurements of the depth of both the vitreous and anterior chambers and the axial length of the lens in the right eye. There was also a significant difference (P<0.05) between both eyes in the depth of the vitreous measured by CT. All measurements showed strong correlations between both eyes and between US and CT. In conclusion, ocular imaging techniques provided useful biometric and morphologic data, showing good correlation between CT and US in the cinereous vulture's eye. Especially when ophthalmoscopic examination is not possible due to an opaque anterior segment, imaging techniques can be essential for diagnosing and managing diseases of the eye.
In avian medicine, computed tomography (CT) has been used to diagnose cerebral or skeletal diseases such as meningo-encephalocele, cranial malformation, hemorrhage, or otitis [9]. Ultrasonography (US) is not a routine practice in avian ophthalmology, although it has been well established in small animals; few examples are available to show the possibilities of ultrasonography in avian ophthalmology [4].
There are few studies related to the anatomy and ultrasonography of raptors' eyes. Recently, US has become a routine procedure in veterinary ophthalmology. It provides biometric measurements of the eye and of different ocular structures, including the pecten oculi (LP), lens, anterior chamber, and vitreous chamber. It also allows assessment of eyes with opacity of the transparent media. Because of the anatomical variation among birds, it is impossible to use the same ultrasonographic and biometric measurements for every species [4].
The objectives of this study were to describe the radiographic, ultrasonographic, and computed tomographic appearance of the normal cinereous vulture's eye, and to determine normal biometric values of the intraocular structures and axial globe length as a reference for further treatment.
Animals
Thirteen healthy cinereous vultures (mean body weight, 9.21 ± 0.84 kg; range, 8.0 to 10.8 kg) were examined. These birds were obtained from the Gyeongnam Wildlife Rescue Center at the College of Veterinary Medicine, Gyeongsang National University, South Korea. Twenty-six eyes of all birds were examined in this study. All animals were young adults, estimated to be between 1 and 5 years of age, but sex and exact age were unknown. They were previously free-living birds that could not be released because of severe injuries unrelated to the eyes. They had been provided proper treatment and care by veterinarians to resolve their health issues before being included in this study. This study was approved by the Institutional Animal Care and Use Committee (IACUC) of the College of Veterinary Medicine, Gyeongsang National University, South Korea, with the approval number GNU-140307-E0017.
Restraint and anesthesia of raptors
All raptors were restrained manually and examined in sternal recumbency under general anesthesia with isoflurane. Mask induction was performed using 5% isoflurane (Ifran®, Hana Pharm, Co., Ltd., Korea) in 100% oxygen (3 l/min). After induction, the trachea was intubated with an uncuffed endotracheal tube, the size of which was chosen according to the size of the bird to avoid leakage of the anesthetic agent. After intubation, anesthesia was maintained by spontaneous ventilation with 2 to 2.5% isoflurane in 100% oxygen (3 l/min). Under anesthesia, medical imaging was performed in the sequence of ultrasonography, computed tomography, and skull radiography.
Ultrasonographic examinations and biometry
Ultrasonography was performed with a B-mode ocular unit (Xario, SSA 660A, Toshiba Inc., Japan) with an 8 MHz convex transducer. The eyes were topically anesthetized with 0.5% proparacaine hydrochloride (Alcaine®, Alcon, Puurs, Belgium) and scanned bilaterally in the dorsal plane (oculus dexter (OD), right eye; oculus sinister (OS), left eye). The transducer was positioned directly on the cornea, with ultrasound transmission gel (Fany Sonic®, Taiheung Medical Co., Korea) for contact; a standoff pad was not used (Fig. 1). Additionally, the anterior eye chamber was scanned in the transverse plane with the transducer placed at the temporal area. With routine sonography, biometric measurements of the axial length of the lens (WL), length of the pecten oculi (LP), depth of the anterior chamber (AC), axial globe length (LB), and depth of the vitreous chamber (VC) were performed. The images of the transverse scan at the 6 o'clock position were used to determine LP. All measurements were taken in the dorsal plane at the level of the maximal lens diameter, with the ocular surfaces aligned symmetrically along the eye's central optical axis. Measurements were stored on the computer and documented for the results.
Computed Tomographic examinations and biometric measurement
All CT examinations were performed in sternal recumbency under general anesthesia with isoflurane, using a two-channel multi-detector row CT (MDCT) scanner (Somatom Emotion, Siemens Medical System, Erlangen, Germany) (Fig. 2). The orbital area was scanned in the axial plane with 1 mm slice thickness, 110 kVp, and 200 mAs, and evaluated with a soft tissue window (window width 200, window level 40). The whole CT examination took about 5 to 10 min, and the data were stored on the computer's hard drive and analyzed after the examination. Biometric measurements of the anterior chamber, the vitreous chamber, the axial length of the lens, and the axial globe length of both eyes were performed. Biometry was performed on an appropriately selected cross-sectional image reconstructed from the original axial CT scan with a clinically used PC-based multiplanar reformatting (MPR) program (LUCION®, Infinitt Technology, Seoul, Korea) (Fig. 3). The CT images were compared with the sonographic appearance.
Radiographic examination
Radiographs of the head, including the eyes, were taken in right lateral and dorsoventral projections under general anesthesia with isoflurane; oblique views were added when necessary.
Ocular volumetric measurement in CT
Images from the two-channel multi-detector row CT (MDCT) scanner (Somatom Emotion, Siemens Medical System) were converted to dorsal and sagittal planes with the clinically used PC-based multiplanar reformatting (MPR) program (LUCION®, Infinitt Technology). The imaging parameters were 110 kVp, 200 mAs, 1 mm section thickness, and continuous slices. The anterior border of the globe was defined as the first CT slice with a visible maxillary sinus, and the posterior border as the apex of the orbit. The area of the globe in each axial CT slice was measured with the measuring tool of the aforementioned LUCION® software. Given the slice thickness of the CT scan, volumes could then be calculated automatically.
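The volume computation reduces to summing the traced cross-sectional areas times the slice thickness. A minimal Python sketch of this principle follows; the function name and area values are hypothetical, and the LUCION software performs the equivalent computation internally.

```python
def ocular_volume(slice_areas_mm2, slice_thickness_mm=1.0):
    """Approximate globe volume (mm^3) by summing slice area x slice thickness."""
    return sum(area * slice_thickness_mm for area in slice_areas_mm2)

# Hypothetical traced areas (mm^2), anterior to posterior border of the globe:
areas = [12.0, 85.3, 210.7, 310.2, 355.9, 348.1, 280.4, 160.8, 40.2]
print(f"Estimated ocular volume: {ocular_volume(areas):.1f} mm^3")
```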
Statistical analyses
The IBM SPSS Statistics 21® software program (IBM Corp., Armonk, NY, USA) was used for statistical analysis of the data. The Kolmogorov-Smirnov test was applied to assess the Gaussian distribution of the quantitative variables. Obtained values were expressed as mean ± SD. Correlations of measurements between the OS and OD eyes, as well as correlations between the CT and US measurements of the various ocular structures, were calculated using Pearson's correlation coefficient. Differences between right and left eyes, as well as differences between US and CT measurements of LB, WL, LP, AC, and VC, were investigated with the paired t-test for parametric values and the Wilcoxon signed-rank test for nonparametric values. P-values <0.05 were considered statistically significant.
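As an illustration only, this analysis workflow could be reproduced along the following lines in Python with SciPy; the measurement values are placeholders rather than data from this study, and the SPSS procedures used here may differ in detail (e.g., the exact form of the normality test).

```python
import numpy as np
from scipy import stats

left = np.array([28.4, 28.9, 28.1, 28.7, 29.0, 28.3, 28.6])   # placeholder values
right = np.array([28.6, 28.8, 28.2, 28.9, 29.1, 28.4, 28.5])

# Normality check on the paired differences (Kolmogorov-Smirnov against a
# standard normal after standardizing).
diff = left - right
_, p_norm = stats.kstest((diff - diff.mean()) / diff.std(ddof=1), "norm")

# Paired comparison: t-test if parametric, Wilcoxon signed-rank otherwise.
if p_norm >= 0.05:
    stat, p = stats.ttest_rel(left, right)
else:
    stat, p = stats.wilcoxon(left, right)

r, p_corr = stats.pearsonr(left, right)   # correlation between both eyes
print(f"paired p = {p:.3f}, Pearson r = {r:.2f} (p = {p_corr:.3f})")
```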
Ultrasonographic appearance of eye
On ultrasonographic examination, all 26 eyes and their structures had a similar appearance. The corneal echo was curvilinear and highly reflective, while the anterior chamber of the eye was anechoic. The lens appeared as two hyperechoic curved lines, representing the anterior and posterior echogenic surfaces of the lens. The ossicles of the scleral ring were hyperechoic and appeared on the medial and lateral walls of the eye. These ossicles caused distal shadowing (Fig. 4), which hindered measurement of the transverse diameter of the bulbus. The pecten was a moderately echogenic structure extending from the optic nerve into the anechoic vitreous chamber (Fig. 5). The posterior eye wall, including the choroid, retina, and sclera, produced a slightly curvilinear but highly reflective echo. The posterior wall of the globe could not be clearly differentiated from the retrobulbar tissue. The optic nerve appeared as a hypoechoic structure, very close to the pecten insertion, within the hyperechoic retrobulbar tissue.
Computed tomographic appearance of eye
The cornea and periocular tissues appeared hyperattenuating. The anterior eye chamber and the vitreous chamber appeared homogeneous with soft tissue opacity. On CT, it was impossible to differentiate the cornea from the anterior eye chamber in the cinereous vulture. The lens consisted of a hyperattenuating capsule and a hyperattenuating nucleus (Fig. 6); a slightly less hyperattenuating peripheral zone was interpreted as the lens cortex. Two slim hyperattenuating profiles framing the bulbus represented the scleral ring. No imaging artifacts caused by these bony structures were detected. In contrast to US, LP could not be measured on CT because the pecten could not be differentiated. In US and, more clearly, in CT, the bulbus appeared globular in shape, and the axial lens length was about one fourth of the entire axial bulbus depth.
Biometric measurements of eye
The US and CT measurements can be found in Table 1, which shows the comparison results between both eyes and between the two modalities. Most of the paired data did not show any significant difference between both eyes or between the CT and US measurements. However, there were significant differences (P<0.05) between the CT and US measurements of VC, AC, and WL in the right eye, among the WL, VC, LB, LP, and AC dimensions. In addition, there was a significant difference (P<0.05) between both eyes in the CT measurement of VC. Tables 2 and 3 show the correlations between the left and right eyes and between the measurements taken in the dorsal plane in CT and US. The depth and width of the anterior chamber, vitreous chamber, and lens, as well as the axial lengths of the bulbus and pecten, showed strong correlations between both eyes and between the US and CT measurements. The ocular volumetric measurement values are shown in Table 4.
Radiographic appearance of head and eyes
Right lateral (Fig. 7) and ventrodorsal (Fig. 8) skull radiographs showed that the cranium of the cinereous vulture contains many connections to the sinuses. The osseous scleral ring was readily visible on radiographs, but the interorbital septum between both eyes was hardly visible.
DISCUSSION
In veterinary medicine, the first biometric study, published in 1982, evaluated canine eyes by A-mode US [22]. Since then, ocular biometry has become an object of interest in studies of several species [25]. It has been described in several species, e.g., cat, dog, goat, rabbit, cattle, horse, guinea pig, capybara, rhesus monkey, ferret, one-humped camel, striped owl, and elephant, as well as in humans. So far, only a few studies have addressed the different anatomic configurations of shape and size among avian eyes [4,7,25].
Being an endangered species, cinereous vultures (Aegypius monachus) were examined in this study. Especially in winter, they are often presented to wildlife rescue centers with emaciation, poisoning, skeletal fractures, and poor nutritional condition [10,18]. The most common ocular diseases found in cinereous vultures are retinal degeneration, cataract, corneal lesions, glaucoma, traumatic uveitis, retinal dysplasia, and ciliary body malformations [1,3,7]. Because of these debilities, affected birds suffer from weakened eyesight or even blindness, which can sometimes lead to death. Raptors are more difficult and dangerous to capture and restrain than other animals because they become highly stressed during examination. In a previous study of striped owls, ocular medical imaging was assessed without the need for chemical sedation [25], whereas in another study, ocular ultrasound in a colony of screech owls was performed under general anesthesia with isoflurane [7]. B-mode ultrasonography is an inexpensive, precise, and easy method for eye measurements, and has been used in various studies of ocular biometry [4,7,8,16,20,21,25,26]. In certain studies, there was a strong correlation between B-mode US and CT ocular measurements. If the ocular surfaces are not aligned symmetrically along the eye's central optic axis, ultrasound measurements may be inaccurate [2,6]. In this study, the corneal contact technique was used, which provides sufficient visualization of the entire globe along with the cornea, although a previous study found that the immersion technique gives superior anatomic definition of the posterior pole of the eye [5].
The ocular morphology of the cinereous vulture (Aegypius monachus) was very similar to that of other raptors. As described in a previous study, the usefulness of US for evaluating the optic nerve, fibrous tunics, and orbit is limited by artifacts and distal shadowing from the scleral ossicles [5]. Similar observations were made in other studies as well as during the US examinations in this study [4,7,25]. This limitation can be resolved by using CT along with US.
In the present study, CT allowed proper visualization of the lens, anterior eye chamber, posterior eye wall, cornea, scleral ring, optic nerve, retrobulbar space, and whole skull, but not of the pecten, in agreement with previous studies [4,17]. This shows that US is a useful tool with the ability to detect pecten anomalies, even when the anterior segment of the eye is opacified.
Other studies of various avian species revealed that dorsal plane images obtained from CT presented a satisfactory overview of both eyes at the same time [4,25]. In the sagittal plane, on the other hand, the bulbi were distorted and it was barely possible to find the central axis. In this study, the orbital area was scanned in the axial plane with 1 mm slice thickness, and biometry was performed on the MPR slice in which the nucleus lentis appeared at its maximal size along the optical axis. Considering the anatomical differences in head and eye position between the cinereous vulture (Aegypius monachus) and other birds, this approach showed promising results.
The mean axial LB in US was 28.57 mm for the left eye and 28.61 mm for the right eye. These values are higher than those reported for smaller raptors such as the striped owl, screech owl, and tawny owl, with 23.76 mm for the left eye and 24.25 mm for the right eye, or 24.70 ± 0.82 mm, respectively [4,7,25]. In other species, authors found no significant difference (at P<0.05) between the right and left eyes in axial length or in the dimensions of other intraocular structures in CT and US [4,7,25]. Most of the paired biometric data did not show significant differences between US and CT. In the current study, most biometric results showed strong correlations between the left and right eyes in CT and US, but some paired values in CT showed significant differences (P<0.05). The selection of the cross-sectional image reconstructed from the original axial CT scan with the MPR program presumably affected the measured values.
The pecten, of variable shape and extending from the optic nerve into the vitreous chamber, is a highly pigmented, nonsensory structure. The length of the pecten in US ranged from 11.47 mm to 13.50 mm in the cinereous vulture (Aegypius monachus), which is much longer than the values found in the striped owl (4.49 mm to 6.73 mm) and the tawny owl (4.40 mm to 6.60 mm) [4,25]. In US, the posterior wall seemed to be underestimated because of the hyperechoic retrobulbar space and its weak differentiation from the hyperechoic presentation of the wall. Likewise, it was not possible to differentiate the posterior wall efficiently in common kestrels and many barn owls. The wall was distinguished more easily in CT owing to adjustable window leveling; the retrobulbar fat appeared hypoattenuating compared with the posterior wall.
In this study, all of the examined birds were normal, although ocular diseases are common findings in raptors; for instance, microphthalmia was the most common disease among 16 raptors in a previous study [1]. Reference information and measurements of both eyes for each species are highly valuable for diagnosing affected eyes.
It can be concluded that the various ocular medical imaging techniques provide useful biometric and morphologic data for these raptors. Especially when ophthalmoscopic examination is not possible because of opacity of the anterior segment, CT or US is essential for a definitive diagnosis in veterinary ophthalmology. The results showed good correlation between CT and US. US is a safe procedure that is easy to implement in raptor eyes. Even though this species shares ultrasonographic features with other species, it is essential to establish a correct record of eye examinations and biometric values for the cinereous vulture (Aegypius monachus).
Although CT requires anesthesia, it provides specific measurements of the bony scleral ring along with proper visualization of the size and shape of intraocular and extraocular structures. It may be helpful in cases of bone involvement and in unclear conditions in the cinereous vulture (Aegypius monachus). As the advantages and shortcomings of US and CT complement each other, it would be best to perform ocular examinations with both modalities. However, our study shows that the two diagnostic methods correlate well in their results. Hence, an examiner highly trained in both methods could achieve an accurate ocular examination and biometric measurements with the use of just one of them.
CONFLICT OF INTEREST. The authors declare no conflicts of interest, and there was no financial support from manufacturers of the products used in this study.
ACKNOWLEDGMENTS. This study was supported by the BK21 PLUS Program for Creative Veterinary Science Research and the Research Institute for Veterinary Science (RIVS) of Seoul National University, Korea. | 2021-07-09T06:16:58.614Z | 2021-07-08T00:00:00.000 | {
"year": 2021,
"sha1": "1b2819ab0594b5d8e3ba98a92aaf1400c123412e",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jvms/advpub/0/advpub_21-0119/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53a2018b3f720c8b11240be62406b613015588d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16748451 | pes2o/s2orc | v3-fos-license | Efficient Similarity Search In Sequence Databases
We propose an indexing method for time sequences for processing similarity queries. We use the Discrete Fourier Transform (DFT) to map time sequences to the frequency domain, the crucial observation being that, for most sequences of practical interest, only the first few frequencies are strong. Another important observation is Parseval's theorem, which specifies that the Fourier transform preserves the Euclidean distance in the time or frequency domain. Having thus mapped sequences to a lower-dimensionality space by using only the first few Fourier coefficients, we use R*-trees to index the sequences and efficiently answer similarity queries. We provide experimental results which show that our method is superior to search based on sequential scanning. Our experiments show that a few coefficients (1-3) are adequate to provide good performance. The performance gain of our method increases with the number and length of sequences. (*) On sabbatical from the Dept. of Computer Science, University of Maryland, College Park. This research was partially funded by the Systems Research Center (SRC) at the University of Maryland, and by the National Science Foundation under Grant IRI-8958546 (PYI), with matching funds from EMPRESS Software Inc. and Thinking Machines Inc.
Introduction
Sequences constitute a large portion of data stored in computers. There have been several efforts to model time-sequenced data, to design languages to query such data, and to develop access structures to efficiently process such queries (see [25] for a bibliography). Most of the work, however, has focused on "exact" queries. New emerging applications, particularly database mining applications [2], require that databases be enhanced with the capability to process "similarity" queries. The following are some examples of similarity queries over sequence databases:
• Identify companies with similar patterns of growth.
• Determine products with similar selling patterns.
• Discover stocks with similar movement in stock prices.
• Find if a musical score is similar to one of the copyrighted scores.
Similarity queries can be classified into two categories: a. Whole Matching. The sequences to be compared have the same length n.
b. Subsequence Matching. The query sequence is smaller; we look for a subsequence in the large sequence that best matches the query sequence. We concentrate on whole matching, and present an indexing technique that can be used to efficiently process such queries. Within the whole matching case, we consider the following problems: a1. Range Query. Given a query sequence, find sequences that are similar within distance ε. a2. All-Pairs Query (or 'spatial join').
Given N sequences, find the pairs of sequences that are within ε of each other. The parameter ε is a distance parameter that controls when two sequences should be considered similar. It could be either user-defined, or determined automatically (e.g., ε = 10% of the 'energy' of the query sequence; see Eq. 3 for the definition of 'energy').
Approximate matching has been attracting increasing interest lately. Motro described a user interface for vague queries [18]. Shasha and Wang [24] proposed an indexing method that uses the triangular inequality and some precomputed distances to prune the search. However, the space overhead of the method seems quadratic in the number of objects, which may make it prohibitive for large databases. Aurenhammer [5] surveyed recent research on Voronoi diagrams, along with their use for nearest neighbor queries. Although Voronoi diagrams work well for approximate matches in 2-dimensional spaces, they need intricate transformations to work for a 3-d space, and they do not work at all for higher dimensionalities. Jagadish [15] suggested using a few minimum bounding rectangles to extract features from shapes and subsequently managing the resulting vectors using a spatial access method, like k-d-B-trees, grid files, etc.
For numerical sequences, we propose extracting k features from every sequence, mapping it to k-dimensional space, and then using a multidimensional index to store and search these points. The multidimensional indexing methods currently in use are R*-trees [6] and the rest of the R-tree and k-d-B-tree family [12,14,16]; linear quadtrees [22]; and grid files [19]. There are two subtle problems with this approach that must be addressed:
• Completeness of feature extraction: How to extract features, and how to guarantee that we do not miss any qualifying object (time sequence, in our case). To guarantee no "false dismissal", objects should be mapped to points in k-dimensional space such that the Euclidean distance in the k-dimensional space is less than or equal to the real distance between the two objects.
• Dimensionality "curse": Most multidimensional indexing methods scale exponentially for high dimensionalities, eventually reducing to sequential scanning. For linear quadtrees, the effort is proportional to the hyper-surface of the query region [13]; the hyper-surface grows exponentially with the dimensionality. Grid files face similar problems, since they require a directory that grows exponentially with the dimensionality. The R-tree based methods seem to be most robust for higher dimensions, provided that the fanout of the R-tree nodes remains > 2. Experiments [21] indicate that R*-trees work well for up to 20 dimensions. The feature extraction method should therefore be such that a few features are sufficient to differentiate between objects.
We propose to use the Discrete Fourier Transform [20] for feature extraction. Given a sequence, we transform it from the time domain to the frequency domain. We then index only on the first few frequencies, dropping all other frequencies. This approach addresses the two problems cited above as follows:
• Completeness of feature extraction: Parseval's theorem [20], discussed in Section 2, guarantees that the distance between two sequences in the frequency domain is the same as the distance between them in the time domain.
• Dimensionality curse: As we discuss in subsection 3.3, a large family of interesting sequences exhibit strong amplitudes for the first few frequencies. Using the first few frequencies then avoids the dimensionality problem, while still introducing few false hits. The false hits are removed in a post-processing step.
The organization of the rest of the paper is as follows. Section 2 gives some background material on the Discrete Fourier Transform, and introduces Parseval's theorem that provides the basis for the indexing technique we propose. A resume of our indexing technique is given in Section 3. We also justify our choice of similarity measure and the selection of the DFT for feature extraction in this section. Section 4 contains performance experiments that empirically show the effectiveness of our technique. We conclude with a summary in Section 5.
The signal $\vec{x}$ can be recovered by the inverse transform:

$$x_t = \frac{1}{\sqrt{n}} \sum_{f=0}^{n-1} X_f \exp(j 2\pi f t / n), \qquad t = 0, 1, \ldots, n-1 \qquad (2)$$

$X_f$ is a complex number (with the exception of $X_0$, which is real, if the signal $\vec{x}$ is real). There are some minor discrepancies among books: some define $X_f = \frac{1}{n} \sum_{t=0}^{n-1} \ldots$ or $X_f = \sum_{t=0}^{n-1} \ldots$. We have followed the definition in (Eq. 1), for it simplifies the upcoming Parseval's theorem (Eq. 4).
$$[a\,x_t] \Longleftrightarrow [a\,X_f] \qquad (8)$$

Also, a shift in the time domain changes only the phase of the Fourier coefficients, but not the amplitude:

$$[x_{t-t_0}] \Longleftrightarrow [X_f \exp(-j 2\pi f t_0 / n)] \qquad (9)$$

Given the above, Parseval's theorem gives

$$\|\vec{x} - \vec{y}\|^2 = \|\vec{X} - \vec{Y}\|^2 \qquad (10)$$

The latter implies that the Euclidean distance between two signals $\vec{x}$ and $\vec{y}$ in the time domain is the same as their Euclidean distance in the frequency domain.
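As a small numerical sanity check (ours, not part of the paper's experiments), the following Python snippet verifies Eq. (10) under the orthonormal DFT convention of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x, y = rng.normal(size=n), rng.normal(size=n)

X = np.fft.fft(x) / np.sqrt(n)     # orthonormal DFT, as in Eq. (1)
Y = np.fft.fft(y) / np.sqrt(n)

d_time = np.linalg.norm(x - y)     # Euclidean distance in the time domain
d_freq = np.linalg.norm(X - Y)     # Euclidean distance in the frequency domain
assert np.isclose(d_time, d_freq)  # equal up to floating-point error
print(d_time, d_freq)
```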
We believe that for a large number of time sequences of practical interest, there will be a few frequencies with high amplitudes. Thus, if we index only on the first few frequencies, we shall have few false hits. This is a key observation for our proposed method.
Proposed Technique
We propose using the square root of the sum of squared differences as the distance function between two sequences.
Specifically, the distance $D(\vec{x}, \vec{y})$ between two sequences $\vec{x}$ and $\vec{y}$ is the square root of the energy of the difference:

$$D(\vec{x}, \vec{y}) = \left( \sum_{t=0}^{n-1} (x_t - y_t)^2 \right)^{1/2} \qquad (11)$$

If this distance is below a user-defined threshold ε, we say that the two sequences are similar.
The importance of Parseval's theorem (Eq. 4) is that it allows us to translate the query from the time domain to the frequency domain. Coupled with the conjecture that few Fourier coefficients are enough, it allows us to build an effective index with a low dimensionality.
The following is a resume of our proposed technique:
1. Obtain the coefficients of the Discrete Fourier Transform of every sequence in the database.
2. Build a multidimensional index using the first f_c Fourier coefficients, where f_c stands for 'cut-off frequency'. Thus, each sequence becomes a point in a 2f_c-dimensional space (recall that the Fourier coefficients are complex numbers). We discuss in subsection 3.3 why f_c can be taken to be small (< 5). As discussed earlier, we recommend the R*-tree as the indexing structure, since it has been shown to work well for at least up to 20 dimensions [21]. This index will be called 'F-index' henceforth.
3. For a range query, obtain the first f_c Fourier coefficients of the query sequence. Use the F-index to retrieve the set of matching sequences that are at most distance ε away from the query sequence.
4. For an all-pairs query, we do a spatial join using the F-index. The result of the join will be a superset of the answer set.
5. The actual answer set is obtained in a post-processing step in which the actual distance between two sequences is computed in the time domain and only those within distance ε are accepted.
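As an illustration, a minimal Python sketch of steps 1-5 follows, with a brute-force scan standing in for the R*-tree (the lower-bounding filter logic is the same either way); the helper names and the random-walk test data are ours, not part of the original implementation.

```python
import numpy as np

def features(seq, fc=2):
    """First fc complex DFT coefficients (orthonormal convention), as 2*fc reals."""
    X = np.fft.fft(seq) / np.sqrt(len(seq))
    return np.concatenate([X[:fc].real, X[:fc].imag])

def range_query(db, q, eps, fc=2):
    qf = features(q, fc)
    # Step 3: filter in feature space; by the lemma below, this returns a
    # guaranteed superset of the answer set (no false dismissals).
    candidates = [s for s in db if np.linalg.norm(features(s, fc) - qf) <= eps]
    # Step 5: post-processing with the true distance in the time domain.
    return [s for s in candidates if np.linalg.norm(s - q) <= eps]

rng = np.random.default_rng(1)
n = 128
db = [np.cumsum(rng.uniform(-500, 500, n)) for _ in range(100)]  # random walks
q = db[0] + 0.05 * rng.uniform(-500, 500, n)                     # distorted copy
print(len(range_query(db, q, eps=np.sqrt(1000 * n))))
```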
The 'completeness' of this method is based on the following lemma: keeping any first f_c coefficients, the distance in feature space underestimates the true distance, so searching the F-index with the same tolerance ε introduces no false dismissals. We only give the proof for range queries; the proof for 'all-pairs' queries is very similar. Suppose we want all sequences $\vec{x}$ that are similar to a query sequence $\vec{q}$, within distance ε, i.e.:

$$D(\vec{x}, \vec{q}) \le \epsilon \qquad (12)$$

i.e., by the definition in (Eq. 11),

$$\sum_{t=0}^{n-1} (x_t - q_t)^2 \le \epsilon^2 \qquad (13)$$

which, by Parseval's theorem (Eq. 4), is equivalent to

$$\sum_{f=0}^{n-1} |X_f - Q_f|^2 \le \epsilon^2 \qquad (14)$$

Keeping only the first $f_c < n$ coefficients, we have

$$\sum_{f=0}^{f_c-1} |X_f - Q_f|^2 \le \sum_{f=0}^{n-1} |X_f - Q_f|^2 \qquad (15)$$

Thus, equation (14) implies the following condition:

$$\sum_{f=0}^{f_c-1} |X_f - Q_f|^2 \le \epsilon^2 \qquad (16)$$

In other terms, the condition of (Eq. 16) will retrieve all $\vec{X}$ that are in the answer, plus some false hits. Thus, our index acts as a filter that returns a superset of the answer set.
Choice of Similarity Measure
The similarity measure is clearly application-dependent, and several similarity measures have been proposed. In fact, the Euclidean distance is the optimal distance measure for estimation [11], if signals are corrupted by Gaussian, additive noise. Thus, if $\vec{q}$ is our query and $\vec{x}$ is a corrupted version of it in the database, a searching method using the Euclidean distance should produce good results.
A valuable feature of the Euclidean distance is that it is preserved under orthonormal transforms. Other distance functions, like the $L_p$ norms

$$L_p(\vec{x}, \vec{y}) = \left( \sum_t |x_t - y_t|^p \right)^{1/p} \qquad (17)$$

do not have this property, unless $p = 2$ (because $L_2$ is the Euclidean distance).
Using DFT
Having decided on the Euclidean distance as the distance measure, we would like a transform that (a) preserves the distance, (b) is easy to compute, and (c) concentrates the energy of the signal in few coefficients. The distance-preservation requirement is met by any orthonormal transform [10], the DFT being one of them. Orthonormal transforms form two classes: (1) the data-dependent ones, like the Karhunen-Loeve (K-L) transform, which need all the data signals to determine the transformation matrix, and (2) the data-independent ones, like the DFT, Discrete Cosine (DCT), Haar, or wavelet transforms, where the transformation matrix is determined a priori.
The data-dependent transforms can be fine-tuned to the specific data set, and therefore they can achieve better performance, concentrating the energy into fewer features in the feature vector. Their drawback is that, if the data set evolves over time, a recomputation of the transformation matrix may be required to avoid performance degradation, requiring expensive data reorganization. We, therefore, favor data-independent transforms.
Among them, we have chosen the DFT because it is the most well known, its code is readily available, and it does a good job of concentrating the energy in the first few coefficients, as we shall see next. In addition, the DFT has the attractive property that the amplitude of the Fourier coefficients is invariant under shifts (Eq. 9). Thus, using Fourier transforms for feature extraction has the potential that our technique can be extended to finding similar sequences ignoring shifts.
Note that our approach can be applied with any orthonormal transform. In fact, our response time will improve with the ability of the transform to concentrate the energy: the fewer the coefficients that contain most of the energy, the faster our response time. Thus, the performance results presented next are just pessimistic bounds; better transforms will achieve even better response times.
Using Few Fourier Coefficients for Indexing
Using a small value for the number of Fourier coefficients retained, f_c, does not affect the correctness: the F-index is a filter that returns a superset of the answer set. However, our proposed technique will not be very effective if the choice of a small f_c results in a large number of false hits. The worst-case signal for our method is white noise, where each value $x_t$ is completely independent of its neighbors $x_{t-1}$, $x_{t+1}$. The energy spectrum of white noise follows $O(f^0)$ [23], that is, it has the same energy in every frequency. This is bad for the F-index, because it implies that all the frequencies are equally important. However, we have strong reasons to believe that real signals have a skewed energy spectrum. For example, random walks (also known as brown noise or brownian walks) exhibit an energy spectrum of $O(f^{-2})$ [23], and therefore an amplitude spectrum of $O(f^{-1})$. Stock movements and exchange rates have been successfully modeled as random walks (e.g., [8,17]).
Using the data set available through ftp from sfi.santafe.edu, we show in [1] that the Fourier transform of the movement of the exchange rate between the Swiss franc and the US dollar closely follows the same 1/f behavior as a random walk.
Our mathematical argument for keeping the first few Fourier coefficients agrees with the intuitive argument of the Dow Jones theory for stock price movement (see, for example, [9]). This theory tries to detect primary and secondary trends in the stock market movement, and ignores minor trends. Primary trends are defined as changes that are larger than 20%, typically lasting more than a year; secondary trends show 1/3-2/3 relative change over primary trends, with a typical duration of a few months; minor trends last roughly a week. From the above definitions, we conclude that primary and secondary trends correspond to strong, low-frequency signals, while minor trends correspond to weak, high-frequency signals. Thus, the primary and secondary trends are exactly the ones that our method will automatically choose for indexing.
In addition to stock movements and exchange rates, it is believed that several families of real signals are not white noise. For example, 2-d signals, like photographs, are far from white noise, exhibiting a few strong coefficients in the lower spatial frequencies. The JPEG image compression standard [26] exploits exactly this phenomenon, effectively ignoring the high-frequency components of the Discrete Cosine Transform, which is closely related to the Fourier transform. If the image consisted of white noise, no compression would be possible at all. Birkhoff's theory [23] claims that 'interesting' signals, such as musical scores and other works of art, consist of pink noise, whose energy spectrum follows $O(f^{-1})$. The argument of the theory is that white noise with $O(f^0)$ energy spectrum is completely unpredictable, while brown noise with $O(f^{-2})$ energy spectrum is too predictable and therefore boring. The energy spectrum of pink noise lies in between. Signals with pink noise also have their energy concentrated in the first few frequencies (but not as few as in the random walk). In addition to the above, there is another group of signals, called black noise [23]. Their energy spectrum follows $O(f^{-b})$, b > 2, which is even more skewed than the spectrum of brown noise. Such signals successfully model, for example, the water level of rivers as it varies over time [17].
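This skewness is easy to observe empirically. The sketch below (synthetic data, orthonormal DFT convention) generates a random walk and reports the fraction of its non-DC spectral energy carried by the first three frequencies; on typical realizations this is well above one half, consistent with the $O(f^{-1})$ amplitude spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.uniform(-500, 500, 1024))      # a random walk (brown noise)
X = np.fft.fft(x - x.mean()) / np.sqrt(len(x))   # orthonormal DFT, DC removed

amp = np.abs(X[1 : len(x) // 2])                 # one-sided amplitude spectrum
energy = np.cumsum(amp**2) / np.sum(amp**2)
print("fraction of energy in first 3 frequencies:", round(float(energy[2]), 3))
```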
Performance Experiments
To determine the effectiveness of our proposed method (the F-index method), we compared it to a sequential scanning method. We used the R*-tree for the index. For range queries, the sequential scanning method computes the distance between the query sequence and each data sequence. In our effort to do the best possible implementation for sequential scanning, we stop the test as soon as the square of the distance exceeds ε², and we declare the two sequences to be dissimilar. Thus, a data sequence is fully scanned only if it is similar to the query sequence. For 'all-pairs' queries, each sequence in the database is tested against every other sequence, for a total of N(N-1)/2 tests.
We investigated the following questions in these experiments:
• How to choose the number of Fourier coefficients to be retained (cut-off frequency f_c) in the F-index method. A larger f_c reduces the false hits but at the same time increases the dimensionality of the R*-tree, and hence the search time.
• How does the search time grow as a function of the number of sequences in the database?
• How does the length n of the sequences affect the performance?
Experimental setup
We generated synthetic sequences for the experiments. Each sequence $\vec{x} = [x_t]$ was a random walk:

$$x_t = x_{t-1} + z_t \qquad (18)$$

where $z_t$ (t = 1, 2, ...) are independent, identically distributed (IID) random variables. For implementation convenience, each $z_t$ variable is uniformly distributed in the range (-500, 500). The probability distribution of each $z_t$ is immaterial; the results would be the same had we chosen a Gaussian distribution, or a fair, random coin. For each set S of N sequences, queries were generated by creating a distorted copy $[\tilde{x}_t]$ of each sequence $[x_t]$ in S. This was accomplished by adding a small amount of noise to every $x_t$, i.e.,

$$\tilde{x}_t = x_t + p\,w_t \qquad (19)$$

where p = 0.05 and $w_t$ (t = 1, 2, ...) are IID random variables, each following a uniform distribution in the range (-500, 500).
Let Q be the set of distorted sequences, which we shall use as queries.
For range queries, we search S for sequences within distance ε of every distorted sequence in Q. For all-pairs queries, we concatenate S and Q, and ask for all sequence pairs within distance ε. The execution time for the F-index method includes both the search time in the R*-tree and the post-processing time. We repeated each experiment 10 times by generating 10 sequence sets with different seeds, and averaged the execution times from these repetitions. Table 1 summarizes the parameters of the experiments. As the number of Fourier coefficients ('cut-off frequency' f_c) increases, the dimensionality of the R*-tree increases. Recall that each Fourier coefficient, being a complex number, increases the dimensionality of the R*-tree by 2. The increase in dimensionality results in better index selectivity, which gives fewer false hits. This reduction in false hits is reflected in the post-processing time, which decreases with the cut-off frequency. However, the time to search the R*-tree increases with the dimensionality, because the fanout is smaller, and the tree is taller. Figures 3 and 4 are in complete agreement with the above intuitive arguments.
Given the trade-off between the tree-search time and the post-processing time, it is natural to expect that there is an 'optimal' f_c. Indeed, the total execution time of our method shows such a minimum, as illustrated in Figures 1 and 2. Notice that this minimum is rather flat, and, more importantly, it occurs for small values of the cut-off frequency f_c. This experiment confirms our early conjecture that we can effectively use a small number of Fourier coefficients for indexing sequences.
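The selectivity side of this trade-off can also be checked numerically: the truncated feature-space distance grows monotonically with f_c, so the candidate set returned by the filter can only shrink as coefficients are added. The following self-contained sketch (illustrative synthetic data, ours) counts candidates and false hits per f_c; on random walks the filter is already tight for f_c = 1-3, consistent with the flat minimum.

```python
import numpy as np

def features(seq, fc):
    X = np.fft.fft(seq) / np.sqrt(len(seq))
    return np.concatenate([X[:fc].real, X[:fc].imag])

rng = np.random.default_rng(3)
n, N = 128, 200
db = [np.cumsum(rng.uniform(-500, 500, n)) for _ in range(N)]
q = db[0] + 0.05 * rng.uniform(-500, 500, n)   # a distorted copy as the query
eps = np.sqrt(1000 * n)

true_hits = {i for i, s in enumerate(db) if np.linalg.norm(s - q) <= eps}
for fc in (1, 2, 3, 5):
    qf = features(q, fc)
    cand = {i for i, s in enumerate(db)
            if np.linalg.norm(features(s, fc) - qf) <= eps}
    assert true_hits <= cand                   # no false dismissals (Eq. 16)
    print(fc, "candidates:", len(cand), "false hits:", len(cand - true_hits))
```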
For the rest of the experiments, we kept f_c = 2 Fourier coefficients for indexing, resulting in a 4-dimensional R*-tree.
Varying the number of sequences in the database
The next experiment compares the F-index method with the sequential scanning method for an increasing number of sequences in the database. Figures 5 and 6 show the execution time per query for range and all-pairs queries respectively, for different values of the number of sequences |S|. Clearly, the F-index method outperforms sequential scanning. As the number of sequences increases, the gain of the F-index method increases, making this method even more attractive for large databases.
Varying the length of sequences
First, we show the results for range queries. We varied the length of sequences, keeping the number of sequences in S fixed at 400. The distance parameter ε was set to $(1000\,n)^{1/2}$ (where n is the length of a sequence). Figure 7 shows the execution time per query for range queries for different sequence lengths. The gain of the F-index method increases with n. Figure 8 shows the results of the experiments for all-pairs queries. The trends are similar to the ones for range queries.
Discussion
The major conclusions from our experiments are:
• The minimum in the execution time for both range and 'all-pairs' queries is achieved for a small number of Fourier coefficients (f_c = 1-3). Moreover, the minimum is rather flat, which implies that a sub-optimal choice for f_c will give a search time that is close to the minimum.
• Increasing the number of sequences in the database results in higher gains for our method.
• Increasing the length of the sequences n also results in higher gains for our method.
Thus, the experiments show that the proposed F -index method achieves increasingly better performance, as the volume of the data increases.
Finally, we should mention that we also examined whether a 'naive' feature extraction method would work as well. For example, consider a method that keeps the first few values of each time sequence, and indexes on them. We carried out an experiment in which we indexed on the first 10 values of each time sequence. The performance of this method was very poor compared to the F-index method; there were many false hits, resulting in a large post-processing time. Judging that further details are of little interest, we omit the experimental results.
Summary
We proposed a method to index time sequences for similarity searching. The major highlights of this method are:
• The use of an orthonormal transform, and specifically the Discrete Fourier Transform, to extract features from a sequence. The attractive property of the DFT is that the Euclidean distance in the time domain is preserved in the frequency domain, thanks to Parseval's theorem. Thus, the DFT fulfills the "completeness of feature extraction" criterion. In addition, the DFT is fast to compute (O(n log n)).
• The recognition that a large family of sequences have only a few (f_c) strong Fourier coefficients. For example, random walks, stock price movements, and exchange rates exhibit an amplitude spectrum of O(1/f). Ignoring the weak coefficients, we introduce a few false hits, but no false dismissals. The importance of this observation is that it avoids the "dimensionality curse" at the expense of a modest post-processing cost. Keeping the first f_c coefficients, each sequence becomes a point in a 2f_c-dimensional space (recall that the Fourier coefficients are complex numbers).
• The use of spatial access methods, and specifically R*-trees, to index those points. We believe that R*-trees are more robust than their competitors, for medium dimensionalities.
Extensive empirical evaluation demonstrated the effectiveness of the proposed method. We generated random walks, which model stock price movements well. The conclusions from our experiments are the following: (a) the execution time of our method shows a rather flat minimum for a small cut-off frequency (f_c = 1-3); (b) compared to sequential scanning, our method achieves better gains with increasing number of sequences and increasing length. Thus, our method will become more and more attractive as the volume of the database increases to Gigabytes and Terabytes.
Although we have made certain choices (the Euclidean distance between sequences in the time domain for the similarity measure, the DFT for feature extraction, and the R*-tree for maintaining indexes), our technique can be trivially adapted for:
• any similarity measure that can be expressed as the Euclidean distance between feature vectors in some feature space;
• any distance-preserving (e.g., orthonormal) transform (the more the energy is concentrated in few coefficients, the faster our response time);
• any multi-dimensional index that performs well for the number of features used for indexing.
Future work could examine the following issues:
• Examination of other orthonormal transformations, in addition to the Discrete Fourier Transform.
• Extensions of our approach to 2-d and higher-dimensionality signals (e.g., images), in addition to the 1-d signals (time sequences) that we have examined.
The work reported in this paper has been done in the context of the Quest project [2] at the IBM Almaden Research Center. In Quest, we are exploring the various aspects of the database mining problem. Besides the problem of queries over large sequences, some other problems that we have looked into include the enhancement of the database capability with classification queries [3] and with "what goes together" kinds of association queries [4]. The eventual goal is to build an experimental system that can be used for mining rules embedded in massive databases. We believe that database mining is an important application area, combining commercial interest with intriguing theoretical questions. | 2014-10-01T00:00:00.000Z | 1993-10-13T00:00:00.000 | {
"year": 1993,
"sha1": "d67feafc224cfb6c97655066c151723214c74567",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Efficient_Similarity_Search_In_Sequence_Databases/6605123/1/files/12095561.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f253c48f2bcb9f20f851ca6ecaeb54a19cf13829",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
246063599 | pes2o/s2orc | v3-fos-license | Generative Models for Periodicity Detection in Noisy Signals
We present the Gaussian Mixture Periodicity Detection Algorithm (GMPDA), a novel method for detecting periodicity in the binary time series of event onsets. The GMPDA addresses the periodicity detection problem by inferring parameters of a generative model. We introduce two models, the Clock Model and the Random Walk Model, which describe distinct periodic phenomena and provide a comprehensive generative framework. The GMPDA demonstrates robust performance in test cases involving single and multiple periodicities, as well as varying noise levels. Additionally, we evaluate the GMPDA on real-world data from recorded leg movements during sleep, where it successfully identifies expected periodicities despite high noise levels. The primary contributions of this paper include the development of two new models for generating periodic event behavior and the GMPDA, which exhibits high accuracy in detecting multiple periodicities even in noisy environments.
INTRODUCTION
From heartbeats to commutes, global climatic oscillations to Facebook log-ons, periodicity, the phenomenon that events happen at regular intervals, is omnipresent. Detecting periodicity in time series is often referred to as "the periodicity detection problem." In the case of event time series, binary time series which indicate only the occurrence of some event, the periodicity detection problem has been approached using algorithms such as the Fast Fourier Transform (FFT) and auto-correlation, and has typically been formulated in the context of a single, stationary periodicity [1], [2], [3], [4], [5]. Several issues, however, have not been sufficiently addressed in the literature, such as (i) the development of generative models which appropriately describe "noise" in periodic behavior (in addition to false positives and negatives) as variance in interval length, and the challenges of (ii) multiple overlapping periods, and (iii) non-stationary periodic signals.
Existing periodicity detection algorithms are often based on the FFT or the Auto-Correlation Function (ACF) and focus on single period detection. FFT maps a time series to the frequency domain, and one would typically use the inverse of the frequency with the strongest power as the predicted period. FFT is sensitive to sparse data [6] and to noise [7], and even in the absence of noise, it suffers from "spectral leakage" for low frequencies/large periods [8]. Other approaches include the Lomb-Scargle periodogram [7], [9], a least-squares method for fitting sinusoids that is designed to deal with noise and unevenly sampled data, but it shares the same problems as the FFT. In real applications, however, the hierarchy implied by the FFT may not be appropriate to describe the signal, especially when the periodic signals are random walks with Markov properties and the signal is non-stationary.
ACF-based methods estimate the similarity between sub-sequences of event intervals extracted with a set of lags; periods are then selected as the lags which maximize the ACF. ACF-based methods have been employed for multiple periodicity detection in character series such as texts, for instance in [10], [11]. Auto-correlation detects a large number of candidate "periods" (especially integer multiples), many of which hardly differ from each other, thereby necessitating a self-selected significance threshold for selecting "true periodicities." ACF also performs poorly on smaller data sets. In addition, these methods are typically not designed for finding multiple periodicities in event time series.
Outside of the FFT/ACF framework, E-periodicity [12] is a method for single period detection based on the modulus operation, with a primary focus on periodicity detection in unevenly/under-sampled time series. E-periodicity finds the interval around which the modulus operator of the event time-stamps is minimal. Essentially, the algorithm segments the time series into all possible periodicities within some a-priori specified range. It then overlays the segments and selects the true periodicity as the periodicity that "covers" the most events.
Other methods that have been developed for single period detection include partial periodic patterns and a chi-squared test [13], the max sub-pattern tree [14], and projection [15]. Like FFT- and ACF-based methods, these methods struggle in the presence of low-frequency periodicities and low sampling rates [16]. They are only designed to recognize a single periodic pattern in stationary signals.
Little work has been done in the area of multiple period detection in time-series event data. Most multiple-periodicity methods use a hierarchical extraction method where the frequency with the highest power (in the case of FFT) or the most probable periodicity (in the case of ACF) is selected and removed iteratively. FFT is a natural choice to disentangle multiple elements of a complex function and has been used by [17] in the Lomb-Scargle framework to detect multiple periods with a hierarchical extraction method, but it is not designed for event data. Another approach for multiple period detection uses ACF to identify periods, and then subtracts them from the original signal using a comb filter [18].
An alternative approach in [19] focuses on computing a set of possible periodicities using intervals between events and selecting the set of periods as intervals above some threshold. However, this threshold is not well defined, and there is no method for dealing with noise. It also struggles with smaller data [19].
The authors in [20] present a generative model for discrete signals with a Gaussian probability density function (PDF) of the period and a Poisson process for describing the false positive noise events. The presented approach is adaptive to noise and a changing periodicity but cannot detect multiple, simultaneously overlapping periods.
A different approach in the research field of "periodic pattern-finding" finds "time slots" when events of a particular periodicity occur [16], i.e., locations, spaced periodically on the time series, where an event may happen. This approach is suited for data where certain events are expected to happen at certain sub-sequences of time (for instance, a student logging onto a computer every Wednesday and Thursday between 14:00 and 17:00). The algorithm aims to detect multiple-interlaced periodicities and relies on a scoring function and a heuristic algorithm to maximize the objective function to solve the NP-hard problem. The method does not account for variance in the event location. It is particularly optimized for anomaly detection tasks (when a periodic behavior is broken) and time series with a low sampling rate.
To address some of the challenges of multiple periodicity detection for noisy event time series, we propose the Gaussian Mixture Periodicity Detection Algorithm (GMPDA). The algorithm is based on a novel generative model scheme for periodic event time series, which captures variability in interval length through Gaussian distributed noise. We compared the GMPDA to other algorithms on a large set of test cases and found superior accuracy, sensitivity, and computational performance, with outperforming results in most cases.
The rest of the paper is organized as follows. In Section 2 we introduce the generative models and discuss their inference. Section 3 presents the GMPDA algorithm. The performance of the GMPDA framework is tested in Section 4. An application of GMPDA to real data can be found in Section 5. Section 6 concludes the paper.
GENERATIVE MODELS
Consider a uni-variate event time series $(X_t)_{t=1,\ldots,N_T}$ where $x_t = 1$ if an event starts at time $t$ and $x_t = 0$ otherwise. In this work, we ignore the case of un-sampled/missing data.
Then, the information in $X_t$ can be compressed to the set of non-zero/positive time stamps $S := \{s_i \mid x_{s_i} = 1\}_{i=1,\ldots,N_S}$. If the positive time stamps occur (at least partially) at regular intervals, the time series exhibits periodic behavior, and the regular intervals correspond to periodicities or periods. We formulate the periodicity detection problem as the search for the set of periodicities that explain the intervals between the time stamps in S.
We are particularly interested in the set of prime periodicities, that is, the minimum integer frequencies that describe the intervals. For instance, for a timestamp set S = {12, 23, 34, 45, 56, 67}, many intervals could be explained by a periodicity of 22 or 33, but 11 would be the prime, which explains the most data; 22 and 33 are integer multiples of this prime period. In the following, the set of underlying prime periods in $X_t$ is denoted by $\mu^* = \{\mu^*_p,\ p = 1, \ldots, P\}$. In addition, we assume that for most real applications, the interval between two consecutive time stamps associated with a periodicity $\mu^*_p$ in S will most probably vary from $\mu^*_p$, with a variance denoted by $\sigma^{*2}_p$. If the time series $X_t$ is generated by a single, stationary periodicity $\mu^*_1$, we can compute $\mu^*_1$, and thus the prime periodicity $\mu^*$, directly from the data as:

$$\mu^*_1 = \frac{N_T}{N_S} = \frac{1}{N_S - 1} \sum_{i=1}^{N_S - 1} (s_{i+1} - s_i) \qquad (1)$$

Please note, the first equality in (1) is the ratio between the length of the time series and the total number of events. The second equality in (1) describes the "average interval" between two adjacent time stamps and holds for a time series with a single, stationary periodicity without noise. The associated variance $\sigma^{*2}$ can be estimated as the square of the standard deviation. However, estimation of $\mu^*$ and $\sigma^{*2}$ according to equation (1) will not be sufficient when (i) the time series $X_t$ is generated by multiple, overlapping periodicities, (ii) the time series $X_t$ is noisy, i.e., there are false positives, (iii) there are missing values (false negatives), or (iv) there are different patterns of periodic behavior over time, i.e., non-stationarity.
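As a minimal illustration of (Eq. 1) on the example timestamp set above (a sketch, assuming the single, stationary, noise-free case):

```python
import numpy as np

s = np.array([12, 23, 34, 45, 56, 67])   # example timestamp set S
intervals = np.diff(s)                   # forward inter-event intervals
mu_hat = intervals.mean()                # second equality in Eq. (1)
sigma2_hat = intervals.std(ddof=1) ** 2  # squared standard deviation
print(mu_hat, sigma2_hat)                # 11.0 0.0
```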
The generative model that we present here accounts for challenges (i) and (ii): specifically, we formulate a generative model of the positive time stamps S with multiple periodicities, an explicit term that incorporates noise, and a loss that enables inference of the model parameters.
Let us assume that the set of positive time-stamps S can be generated by a function f, as:

$$S = f(\mu^*, \sigma^*, \beta, \alpha, M) \qquad (2)$$

where:
• $\mu^*$ is the set of P prime periodicities in the time series,
• $\sigma^*$ is the set of P variances of the periodic intervals,
• β is the rate of false positive noise in a Bernoulli sense,
• $\alpha_p$ is the starting point of periodicity p,
• M is the generative model scheme.
The generative model scheme $M$ is characterized by the priors for the distribution of the intervals, its mean values $\mu^*$, and the variances $\sigma^{*2}$. We follow the generative approach in equation (2) and assume that a single event $s_i$ is generated according to one periodicity (except in the case of overlaps) or false positive noise $\beta$. Then, the set $S$ is the union of subsets $S_p$ of positive time stamps $s_i$ associated with periodicity $\mu_p$, or random noise $\beta$:

$$S = S_{\mu^*_1} \cup S_{\mu^*_2} \cup \dots \cup S_{\mu^*_P} \cup S_\beta. \qquad (3)$$

Further, without loss of generality, we parameterize the distribution of the intervals by the Gaussian distribution; any other distribution, for instance a member of the exponential family, would also be appropriate. In Section 2.1 and Section 2.2 we formulate two different model schemes, denoted in the following as the Clock Model ($M = C$) and the Random Walk Model ($M = RW$).
Clock Model
The "Clock Model" describes periodic behavior governed by a fixed period $\mu^*_p$ with Gaussian noise, which does not incorporate information from previous positive time stamps when computing the occurrence of the next event, i.e., for $p = 1, \dots, P$ the events in $S_p$ are generated by:

$$s^p_i = \alpha_p + i\,\mu^*_p + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0, \sigma^{*2}_p). \qquad (4)$$

The number of events associated with uniformly distributed false positive noise is given as $\beta\,|S_{\mu^*}|$. Note that for the Clock Model, the location of any event only depends on its position in the time series and Gaussian noise around some regular location, but does not depend on previous time steps. Accordingly, one can predict with equal accuracy any time step $s^p_{i+m}$ for $m > 0$. This formulation is a generalization of generative models in much of the previous work on periodicity detection, e.g., [16] and [12]. In their formulation, one needs to find a time slot $s_i$ as a pair of a period ($l$) and an offset $i$, denoted by $[l : i]$. This formulation is equivalent to finding a period $\mu^*_p$ and a starting point $\alpha_p$ with $\sigma^* = 0$, which might be a limitation in real applications, as it does not allow for potential variability in the realization of event locations in the time series. In order to account for this, Gaussian noise $\sigma^*$ is added (the formulation with $\sigma^* = 0$ is a special case of the Clock Model).
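To make the Clock Model concrete, the following is a minimal simulation sketch (not the reference implementation from the GMPDA repository): event times are placed at integer multiples of µ* with Gaussian jitter, and uniformly distributed false positives are mixed in at rate β.

```python
import numpy as np

def generate_clock_events(mu, sigma, beta, n_events, alpha=0, seed=0):
    """Sketch of the Clock Model: s_i = alpha + i*mu + N(0, sigma^2),
    plus beta * n_events uniformly distributed false positives."""
    rng = np.random.default_rng(seed)
    grid = alpha + mu * np.arange(1, n_events + 1)        # noise-free event grid
    periodic = grid + rng.normal(0.0, sigma, size=n_events)
    n_t = int(grid[-1])                                   # length of the series
    noise = rng.uniform(0, n_t, size=int(beta * n_events))
    s = np.unique(np.round(np.concatenate([periodic, noise])).astype(int))
    return s[s > 0], n_t
```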
However, the notion of an external pacemaker is a realistic expectation only for some systems, which motivates the development of the Random Walk Model.
Random Walk Model
The Random Walk Model exhibits the Markov property, i.e., it describes a system where the next event's temporal location depends on the current event's temporal location and Gaussian noise. For $p = 1, \dots, P$ the events in $S_p$ are generated by:

$$s^p_{i+1} = s^p_i + \mu^*_p + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0, \sigma^{*2}_p), \quad s^p_0 = \alpha_p. \qquad (5)$$

Again, the number of events associated with uniformly distributed false positive noise is given as $\beta\,|S_{\mu^*}|$ in the interval $[0, N_T]$. As the noise is Gaussian (and thus identically distributed), the series of event time stamps in $S_{\mu_p}$, for $p = 1, \dots, P$, describes a random walk. Thus, the formulation in equation (5) is hereafter referred to as the Random Walk Model (RWM).
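For contrast, a Random Walk Model generator differs from the Clock Model sketch above only in that the jitter accumulates: each event is placed relative to the previous one. A minimal sketch under the same assumptions:

```python
import numpy as np

def generate_random_walk_events(mu, sigma, beta, n_events, alpha=0, seed=0):
    """Sketch of the Random Walk Model: s_{i+1} = s_i + mu + N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    steps = mu + rng.normal(0.0, sigma, size=n_events)
    periodic = alpha + np.cumsum(steps)                   # noise accumulates
    n_t = int(periodic[-1])
    noise = rng.uniform(0, n_t, size=int(beta * n_events))
    s = np.unique(np.round(np.concatenate([periodic, noise])).astype(int))
    return s[s > 0], n_t
```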
The RWM has the property that, relative to some event $s_i$, the variances add up for each subsequent time step. Therefore, the variance of the distribution of the expected location of an event further in the future increases linearly with the distance from the current event. This is a realistic expectation for many real-life systems, which have no pacemaker and are therefore predictable with decreasing accuracy as the number of steps increases, i.e., given $s_i$, we can predict $s_{i+1}$ more accurately than $s_{i+10}$.
Inference
Given an event time series $X_t$, a straightforward approach to extract the possible periodicities is to study the empirical histogram of all pairwise, forward-order inter-event intervals. For every event, we consider not only the interval to the next event (onset to onset) but also the intervals to all subsequent events.
The possible range of the intervals is defined by $(0, N_T)$. On the other hand, we can estimate a histogram of the expected forward-order inter-event intervals with respect to one of the generative models defined by equations (4) and (5). This histogram is obtained by (i) estimating analytically the expected number of intervals for each $\mu^*_p \in \mu^*$, (ii) incorporating all the intervals between any pair of events associated with any two different prime periodicities $\mu^*_p$ and $\mu^*_q$, and (iii) incorporating all the intervals due to noise (in case the source of the noise is known this can be done analytically, otherwise an estimate is required). The comparison of the empirical histogram and the parametric expectation will define the loss function used to identify the optimal underlying periodicities.
In the following, we define for every $\mu \in (0, N_T)$ the function of interval counts $D^p(\mu)$ by:

$$D^p(\mu) = \sum_{i} \sum_{m \ge 1} \mathbb{1}\left\{ s^p_{i+m} - s^p_i = \mu \right\}. \qquad (6)$$

The evaluation of $D(\mu)$ for a given $X_t$ results in a histogram of all pairwise inter-event intervals. The generative models provide a statistical model for the intervals. Thus, we can estimate the expected number of intervals for $\mu \in (0, N_T)$ in reference to a fixed periodicity $\mu^*_p$ and variance $\sigma^*_p$ as:

$$E[D^p(\mu)] = E\left[\sum_i \sum_{m \ge 1} \mathbb{1}\{s^p_{i+m} - s^p_i = \mu\}\right] = \sum_i \sum_{m \ge 1} E\left[\mathbb{1}\{s^p_{i+m} - s^p_i = \mu\}\right] \qquad (7)$$

$$= \sum_i \sum_{m \ge 1} P\left(s^p_{i+m} - s^p_i = \mu\right). \qquad (8)$$

The equality in equation (7) is due to the linearity of expectation, and the equality in equation (8) is due to the fact that for a random variable $A$, $E[\mathbb{1}\{A = a\}] = P(A = a)$. The distribution of all $m$-th order inter-event intervals depends on the specific generative model and can be written as

$$s^p_{i+m} - s^p_i \sim \mathcal{N}\left(m\,\mu^*_p,\ 2\,\sigma^{*2}_p\right) \qquad (9)$$

for the Clock Model, and as

$$s^p_{i+m} - s^p_i \sim \mathcal{N}\left(m\,\mu^*_p,\ m\,\sigma^{*2}_p\right) \qquad (10)$$

for the Random Walk Model. For the latter, the variance grows linearly with the number of steps between events.
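Computing the interval-count histogram is a matter of counting all pairwise forward differences. A minimal sketch (quadratic in the number of events, which is acceptable for the event counts considered in this paper):

```python
import numpy as np

def interval_counts(s, n_t):
    """Histogram D(mu) of all pairwise forward inter-event intervals s_j - s_i, j > i."""
    s = np.sort(np.asarray(s))
    diffs = s[None, :] - s[:, None]        # all pairwise differences s_j - s_i
    diffs = diffs[diffs > 0]               # keep forward-order intervals only
    return np.bincount(diffs, minlength=n_t + 1)
```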
Further, we assume that the starting point is zero, i.e., $\alpha_p = 0$. For a time series of length $N_T$, equation (8) can be rewritten into a more explicit form by writing the indicator function as a definite quantity. Assuming no missing values on $(0, N_T)$, for the generative models we should observe $N_T / \mu^*_p$ first-order intervals ($m = 1$) in the time series, distributed according to the Gaussian probability density function (PDF) parametrized by $\sigma^*$ and $\mu^*_p$. For the second-order intervals ($m = 2$) the scaling factor would be $N_T / \mu^*_p - 1$, and so on. Thus, for a single periodicity $\mu^*_p$ the expected value of $D(\mu)$ can be written for the Clock Model as:

$$E[D^p(\mu)] = \sum_{m \ge 1} \left( \frac{N_T}{\mu^*_p} - (m - 1) \right) \varphi\!\left(\mu;\ m\,\mu^*_p,\ 2\,\sigma^{*2}_p\right), \qquad (11)$$

and for the Random Walk Model as

$$E[D^p(\mu)] = \sum_{m \ge 1} \left( \frac{N_T}{\mu^*_p} - (m - 1) \right) \varphi\!\left(\mu;\ m\,\mu^*_p,\ m\,\sigma^{*2}_p\right), \qquad (12)$$

where $\varphi(\,\cdot\,; \mu, \sigma^2)$ denotes the Gaussian PDF. Equations (11) and (12) are therefore the expected values of the function $D(\mu)$, counting all-order intervals that might be observed for a single periodicity $\mu^*_p$, for the Clock Model and the Random Walk Model, respectively.
In the case of multiple, overlapping periods, and/or false positive noise, the set of positive time stamps $S$ consists of multiple sets: $S_{\mu^*_1} \cup S_{\mu^*_2} \cup \dots \cup S_{\mu^*_P} \cup S_\beta$. A priori, the affiliation of events to periodicities is unknown, and therefore we slightly adapt our definition of $D(\mu)$ in equation (6) by removing the superscript $p$:

$$D(\mu) = \sum_{i} \sum_{m \ge 1} \mathbb{1}\left\{ s_{i+m} - s_i = \mu \right\}, \quad s_i, s_{i+m} \in S. \qquad (13)$$

Thus, the operator $D(\mu)$ now counts not only the intervals between events from the same periodicity set, but also between events in different sets and/or between noise events. We call the latter two "interaction intervals" and denote their contribution to $D(\mu)$ by:

$$\zeta(\mu) = \sum_{i} \sum_{m \ge 1} \mathbb{1}\left\{ s_{i+m} - s_i = \mu,\ s_i \text{ and } s_{i+m} \text{ not in the same set } S_{\mu^*_p} \right\}, \qquad (14)$$

with three possible scenarios (or their combination): (i) intervals between events from different periodicity sets, i.e., $s_i \in S_{\mu^*_p}$ and $s_{i+m} \in S_{\mu^*_q}$, (ii) intervals between events from any periodicity set and noise, i.e., $s_i \in S_{\mu^*_p}$ and $s_{i+m} \in S_\beta$, (iii) intervals between events due to noise, i.e., $s_i \in S_\beta$ and $s_{i+m} \in S_\beta$.
The estimates in equations (11) and (12) do not include these interaction intervals. In the next step we discuss how to estimate $\zeta(\mu)$ and account for the three cases explicitly. The distribution of the interaction intervals for all three cases can be obtained in closed form by applying the convolution formula, which provides the distribution of the sum/difference of two independent discrete or continuous random variables [21]. For the Clock and the Random Walk Models we obtain the following. For case (i), the interaction intervals between two periods, denoted as $\zeta_{pq}(\mu)$, are again Gaussian distributed with mean $\mu_{pq} = \mu_p - \mu_q$ and variance $\sigma^2_{pq} = \sigma^2_p + \sigma^2_q$, where $\mu_p > \mu_q$ without loss of generality. For case (ii), the interaction intervals, denoted as $\zeta_{p\beta}(\mu)$, follow a Gaussian-like distribution, adjusted for the corresponding uniform support. For case (iii), the forward interaction intervals, denoted as $\zeta_\beta(\mu)$, follow the right side of a triangular distribution. In this context, we can write down $\zeta(\mu)$ as

$$\zeta(\mu) = \sum_{p < q} \zeta_{pq}(\mu) + \sum_{p} \zeta_{p\beta}(\mu) + \zeta_\beta(\mu). \qquad (15)$$

Please note that in real applications we know neither the amount of noise in the data nor which events are associated with which periodicities; therefore we do not have an exact formulation of (15) and need an approximation of $\zeta_\beta(\mu)$. For this, we will assume that the events that contribute to the interaction intervals are uniformly distributed. We will see in Section A.1 that this assumption is not too restrictive.
Proposition 1. For uniformly distributed events on $[1, N_T]$, the expected number of forward interaction intervals of length $\mu$ is

$$E[\zeta(\mu)] = z \left( 1 - \frac{\mu}{N_T} \right), \qquad (16)$$

with a constant $z$, for every $\mu \in [1, N_T]$.
Proof. Consider two noise events $s^\beta_i, s^\beta_j$, each with a uniform probability mass function $P_S$ on the support $[1, \dots, N_T]$. The difference between the events, $s^\beta_j - s^\beta_i = \mu \in [-N_T, N_T]$, is a random variable, and its probability mass function can be derived using the convolution formula for distributions [21]:

$$P\left(s^\beta_j - s^\beta_i = \mu\right) = \sum_{k} P_S(k + \mu)\, P_S(k). \qquad (17)$$

Next, as $P_S$ is defined on $[1, \dots, N_T]$, the probability of values outside this support is equal to zero; taking this into account, we get:

$$P\left(s^\beta_j - s^\beta_i = \mu\right) = \sum_{k=1}^{N_T - \mu} \frac{1}{N_T^2}. \qquad (18)$$

Finally, we get the probability mass function as a decaying function of the difference:

$$P\left(s^\beta_j - s^\beta_i = \mu\right) = \frac{N_T - \mu}{N_T^2} = \frac{1}{N_T}\left(1 - \frac{\mu}{N_T}\right). \qquad (19)$$

In case we were focusing on $|s^\beta_j - s^\beta_i| = \mu$, the right-hand side of equation (19) would have to be multiplied by 2, due to symmetry. In the next step we want to estimate the expectation of $\zeta(\mu)$, that is:

$$E[\zeta(\mu)] = E\left[\sum_{i < j} \mathbb{1}\left\{s^\beta_j - s^\beta_i = \mu\right\}\right] \qquad (20)$$

$$= \sum_{i < j} E\left[\mathbb{1}\left\{s^\beta_j - s^\beta_i = \mu\right\}\right] \qquad (21)$$

$$= \sum_{i < j} P\left(s^\beta_j - s^\beta_i = \mu\right). \qquad (22)$$

The equality in equation (21) is due to the linearity of expectation; the equality in equation (22) is due to the fact that for a random variable $A$ the following holds:

$$E[\mathbb{1}\{A = a\}] = P(A = a). \qquad (23)$$

Inserting equation (19) into equation (22) results in

$$E[\zeta(\mu)] = |\{(i, j) : i < j\}| \cdot \frac{1}{N_T}\left(1 - \frac{\mu}{N_T}\right). \qquad (24)$$

The number of all pairwise, forward-order differences for the noise events, with $n = N_T\,\beta = |S_\beta|$, is given as:

$$|\{(i, j) : i < j\}| = \binom{n}{2} = \frac{n^2 - n}{2}, \qquad (25)$$

thus we obtain:

$$E[\zeta(\mu)] = \frac{n^2 - n}{2\,N_T}\left(1 - \frac{\mu}{N_T}\right). \qquad (26)$$

By setting $z = \frac{2n^2 - (n^2 + n)}{2N_T} = \frac{n^2 - n}{2N_T}$ we obtain equation (16). $\square$
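Proposition 1 is easy to verify numerically: drawing events uniformly and counting forward differences should reproduce the decaying linear shape z(1 − µ/N_T). A small check, reusing the interval_counts helper sketched above:

```python
import numpy as np

n_t, n = 10_000, 200
rng = np.random.default_rng(1)
s = np.sort(rng.choice(np.arange(1, n_t + 1), size=n, replace=False))
d = interval_counts(s, n_t)                # empirical zeta(mu) for pure noise
z = (n**2 - n) / (2 * n_t)                 # constant from Proposition 1
expected = z * (1 - np.arange(1, n_t + 1) / n_t)
# average over coarse bins to suppress per-bin sampling noise before comparing
emp = d[1:].reshape(100, -1).mean(axis=1)
ref = expected.reshape(100, -1).mean(axis=1)
print(np.abs(emp - ref).max())             # should be small relative to z
```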
In real applications the constant $z$ cannot be computed, as the amount of false positives $\beta$ is unknown a priori. An approximation for $z$, denoted $\hat{z}$, will be inferred from the data in Section A.1. For now, the expected number of interaction intervals is approximated via:

$$E[\zeta(\mu)] \approx \hat{z}\left(1 - \frac{\mu}{N_T}\right). \qquad (27)$$

In the case of multiple periodicities, due to the linearity of expectation, the expected number of intervals over multiple periods is the sum of the expectations of $D(\mu)$ for each periodicity $\mu^*_p$ present in the data, plus the expected number of interaction intervals approximated by (27). Hereinafter, the first addend on the right-hand side is denoted as the deterministic parametric function $G_M(\mu; \hat{\mu}, \hat{\sigma})$, which reads for the Clock Model:

$$G_C(\mu; \hat{\mu}, \hat{\sigma}) = \sum_{p=1}^{P} \sum_{m \ge 1} \left( \frac{N_T}{\hat{\mu}_p} - (m - 1) \right) \varphi\!\left(\mu;\ m\,\hat{\mu}_p,\ 2\,\hat{\sigma}^2_p\right), \qquad (28)$$

and for the Random Walk Model:

$$G_{RW}(\mu; \hat{\mu}, \hat{\sigma}) = \sum_{p=1}^{P} \sum_{m \ge 1} \left( \frac{N_T}{\hat{\mu}_p} - (m - 1) \right) \varphi\!\left(\mu;\ m\,\hat{\mu}_p,\ m\,\hat{\sigma}^2_p\right). \qquad (29)$$

Once we obtain estimates $\hat{\mu}_p$ and $\hat{\sigma}_p$ for the true periodicities $\mu^*_p$ and variances $\sigma^*_p$, and given a prior on the generating function (in our case, Random Walk or Clock), we can write a loss function for our estimates as the difference between the empirical $D(\mu)$ and the parametric $G_M(\mu; \hat{\mu}, \hat{\sigma})$ functions. The loss function can be either the absolute error or a quadratic loss; since we have deterministic expectations, we focus on the absolute error:

$$L_M(\hat{\mu}, \hat{\sigma}) = \sum_{\mu} \left| D(\mu) - \hat{z}\left(1 - \frac{\mu}{N_T}\right) - G_M(\mu; \hat{\mu}, \hat{\sigma}) \right| \qquad (32)$$

(in the implementation, the loss is normalized so that it describes the proportion of unexplained intervals; cf. Appendix A.4). Finally, for either the Clock Model or the Random Walk Model, the aim is to find the set of periodicities and variances that minimizes the corresponding loss. A straightforward approach would consider all possible combinations of acceptable periodicities and variances, where the optimal combination minimizes the loss. However, such an approach is computationally not feasible, and therefore the following section outlines the Gaussian Mixture Periodicity Detection Algorithm (GMPDA).
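Before turning to the algorithm, the loss computation can be illustrated directly. The following is a minimal sketch for the Random Walk Model, assuming the reconstructed forms of (27), (29), and (32) above; the repository implementation may differ in normalization, cut-offs, and the empty-section correction of Appendix A.4.

```python
import numpy as np
from scipy.stats import norm

def g_rw(mu_hats, sigma_hats, n_t, loss_length):
    """Parametric expectation G_RW(mu) for the Random Walk Model, cf. eq. (29)."""
    mu_axis = np.arange(1, loss_length + 1)
    g = np.zeros(loss_length)
    for mu_p, sig_p in zip(mu_hats, sigma_hats):
        for m in range(1, int(loss_length / mu_p) + 2):
            weight = n_t / mu_p - (m - 1)  # expected number of m-th order intervals
            if weight <= 0:
                break
            g += weight * norm.pdf(mu_axis, loc=m * mu_p, scale=np.sqrt(m) * sig_p)
    return g

def gmpda_like_loss(d, mu_hats, sigma_hats, z_hat, n_t, loss_length):
    """Absolute-error loss, cf. eq. (32); normalized to be O(1) as in Section 5."""
    mu_axis = np.arange(1, loss_length + 1)
    zeta_hat = z_hat * (1 - mu_axis / n_t)   # interaction-interval approximation
    resid = d[1:loss_length + 1] - zeta_hat - g_rw(mu_hats, sigma_hats, n_t, loss_length)
    return np.abs(resid).sum() / max(d[1:loss_length + 1].sum(), 1)
```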
GMPDA ALGORITHM
Given an event time series $X_t \in \mathbb{R}^{1 \times N_T}$, the aim is (i) to extract an estimate $\hat{\mu}$ of the true generating periodicities $\mu^*$, (ii) to infer $\sigma^*$, and (iii) to test the fit of the chosen generative model $M$. GMPDA provides a method to learn the parameters of the generative function of $X_t$ in an accurate and computationally efficient manner by minimizing the loss $L$ defined in equation (32). The GMPDA algorithm is open-source and available at https://github.com/nnaisense/gmpda. GMPDA is based on comparing $D(\mu)$, the empirical distribution of the intervals observed in the time series $X_t$, with parametrized estimates of its generative function $G_M(\hat{\mu}, \hat{\sigma})$ plus the contribution coming from the interaction intervals, using the loss function (32). The main steps of the GMPDA algorithm for the estimation of the optimal parameters $\hat{\mu}, \hat{\sigma}$ are outlined in Algorithm 1:

Algorithm 1: GMPDA
1. Compute $D(\mu)$ from the event time series $X_t$.
2. Subtract $\hat{\zeta}(\mu)$, estimated w.r.t. equation (33).
3. Identify candidate periods, deploying integral convolution.
4. Initialize and optimize the variance for the candidate periods.
5. Find the optimal combination of periodicities, which minimizes the loss defined in (32).
6. Update loss and variance w.r.t. the optimal periodicities.

In the first step, GMPDA computes $D(\mu)$ w.r.t. equation (34) and subtracts the approximated contribution from the interaction events. The approximation of the length of the interaction intervals is either limited by the minimal expected periodicity or by a user-defined parameter, denoted as noise_range. The estimation of the approximation is outlined in Appendix A.1. Further, for the estimation of $D(\mu)$ and the loss, the range for $\mu$ is limited by the parameter loss_length, mainly due to the flattening of the Gaussian distribution with increasing variance for the Random Walk Model. A detailed discussion can be found in Appendix A.2.
In the second step, GMPDA estimates a set of candidate periodicities using a heuristic approach, since computing $G_M(\hat{\mu}_p, \hat{\sigma})$ for all possible $\hat{\mu}_p$ is computationally expensive. The heuristic approach searches iteratively for periodicities $\hat{\mu}$ by performing, in each iteration, "integral convolutions" on $D(\mu)$. The convolution smoothes the function for extracting periods that explain the time series. The maximal number of candidates is controlled by the parameter max_candidates, and the maximal number of iterations by the parameter max_iterations. The heuristic approach is described in detail in Appendix A.2.
In the third step, GMPDA performs least-squares curve-fitting to improve the initial guess for $\hat{\sigma}$; this step is optional and can be controlled by the parameter curve_fit. The curve-fitting procedure is described in Appendix A.3.
In the final step, GMPDA computes the function $G_M(\hat{\mu}_p, \hat{\sigma})$ for all combinations of candidate periodicities and corresponding variances and selects the set of "prime periodicities" $\hat{\mu}^*$ as the one that minimizes the loss, as defined and explained in Appendix A.4.
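This combinatorial search can be expressed directly with itertools. A schematic version, assuming the gmpda_like_loss sketch from Section 2 (sigma_for is a hypothetical helper mapping a period to its variance guess, e.g., np.log):

```python
from itertools import combinations

def best_period_set(candidates, sigma_for, d, z_hat, n_t, loss_length, max_combi=3):
    """Evaluate all candidate subsets up to size max_combi; keep the loss minimizer."""
    best_loss, best_set = float("inf"), ()
    for k in range(1, max_combi + 1):
        for subset in combinations(candidates, k):
            sigmas = [sigma_for(mu) for mu in subset]
            l = gmpda_like_loss(d, list(subset), sigmas, z_hat, n_t, loss_length)
            if l < best_loss:
                best_loss, best_set = l, subset
    return best_set, best_loss
```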
PERFORMANCE EVALUATION ON TEST CASES
This section describes the evaluation of GMPDA's capacity to detect periodicities $\mu^*$ and variances $\sigma^*$ on synthetic time series generated according to the Clock and Random Walk Models. The performance of GMPDA in detecting the periodicities $\mu^*$ was compared to that of other periodicity detection algorithms, including FFT, Autocorrelation with FFT, Histogram with FFT, and Eperiodicity. The specific algorithms are described below.
GMPDA: We used the baseline Algorithm 1 with $\hat{\sigma}$ set equal to $\sigma^*$ (i.e., $\sigma^* = \log(\mu)$) and no non-linear curve-fitting. Please note, in real applications sigma is unknown; if no non-linear curve-fitting is deployed, we suggest running the algorithm multiple times for a range of possible $\hat{\sigma}$, and the optimal $\hat{\sigma}$ can then be chosen with respect to the lowest loss.
FFT: This is a power spectral density estimation approach [1]. In the case of a single periodicity, the frequency with the highest spectral power is selected as the prime periodicity. In the case of multiple periodicities, the $|\mu^*|$ frequencies with the highest spectral power are selected as the true periodicities.
Autocorrelation with FFT: The Autocorrelation Function (ACF) estimates how similar a sequence is to a lagged copy of itself for different lags, and then uses the lag that maximizes the ACF as the predicted period [5]. Since all integer multiples of the true periods will have the same function value, an FFT is applied to the ACF, and the $|\mu^*|$ frequencies with the highest spectral power are selected as the true periodicities.
Histogram with FFT: An FFT is applied to the histogram of all forward differences in the time series, $D(\mu)$ [22]. In the case of multiple periodicities, the $|\mu^*|$ frequencies with the highest spectral power are selected as the true periodicities.
Eperiodicity: We implemented the method presented in [12], which computes a "discrepancy score" for each possible periodicity, i.e., the number of intervals between events which are equal to the candidate periodicity. To detect multiple periods, we select the top $|\mu^*|$ candidate periods from the discrepancy function.
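As a point of reference, the FFT baseline reduces to picking the dominant frequencies of the binary event series. A minimal sketch of how such a baseline can be implemented (details such as detrending refinements and peak separation are omitted here):

```python
import numpy as np

def fft_periods(s, n_t, k=1):
    """Pick the k periods with highest spectral power from the 0/1 event series."""
    x = np.zeros(n_t)
    x[np.asarray(s) - 1] = 1.0               # binary event indicator
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(n_t)             # cycles per time step
    order = np.argsort(power[1:])[::-1] + 1  # skip the zero-frequency bin
    return [1.0 / freqs[i] for i in order[:k]]
```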
There are several conceptual differences and similarities between GMPDA and the alternative algorithms. GMPDA is, like all the methods listed above, based on computing frequencies/periodicities from the observed intervals between positive observations in the time series. For data that follow a Clock Model, variance in intervals can be handled in the regression framework for the ACF and with spectral methods for the FFT. However, for very small or large variation in intervals, parametrized by $\sigma^*$ in the Random Walk Model, these methods may struggle, particularly for multiple periodicities, because of the linear variance increase. The Eperiodicity/Histogram methods will likely suffer a performance decrease for variable intervals, since they have no specific capacity to deal with variance in intervals. This will be particularly true for time series following the Random Walk Model.
GMPDA is designed for multiple-periodicity detection, and its loss function is explicitly oriented toward finding all the periodicities present in the data: Once the set of candidate periodicities is identified, GMPDA checks all possible combinations of periodicities and selects the one with the smallest loss.
In this context, ACF and FFT are both accepted methods for hierarchical frequency detection, but they do not provide a "stopping criterion" for deciding how many periodicities are significant in the considered time series. It is therefore possible to over- or underestimate their number. In our test cases, we always selected the top $|\mu^*|$ frequencies as the true periodicities and thereby likely overestimated the accuracy of these methods, since there would be no way to know this number without a priori knowledge of the generative mechanism. Also, Eperiodicity/Histogram is not explicitly modeled as a method for multiple periodicity detection and has no capacity for dealing with noise in intervals.
In addition, we also want to discuss, conceptually, the main difference between GMPDA and the classical Gaussian Mixture/Hidden Markov approaches. All three methods aim to fit the shape of a distribution, which is empirically described by the corresponding histogram of the data. However, the main difference of GMPDA is the generative models that account for the peaks in the histogram at prime periods and their integer multiples. In this context, GMPDA combines these peaks to get a better estimate. Classical mixture/HMM models do not exploit this information: if the number of mixture components $K$ is large, they will try to fit all peaks individually, and if $K$ is small they will average over them, and thus obtain biased results.
In the following, we compare the performance of the above-described algorithms for a large set of generated test cases.
Test cases
The performance of the GMPDA algorithm was evaluated on a wide range of test cases for the Clock Model and the Random Walk Model. The test cases cover systematic variations of the following model parameters: periodicity µ, variance σ, noise β, and the number of events n. These generative model parameters influence the histogram of inter-event intervals, the input data for all applied algorithms.
To better understand how these model parameters influence the histogram, we show two illustrative test cases with different model parameters. Figure 1 shows a well-posed test case with two underlying periodicities and no noise, while Figure 2 displays an ill-posed test case where the signal-to-noise ratio is 1:2. In the latter case, the identification of the underlying periodicities requires advanced analysis of the histogram.
The following analyses examined an extensive range of test cases in order to study the limitations of the presented GMPDA algorithm and the alternatives described in Section 4.
Configurations
The considered ranges for the model parameters (periodicity µ, variance σ, noise β, and number of events n) were varied systematically. Please note, test cases with σ = log(µ) represent scenarios where no σ optimization is required, as σ = log(µ) is the default initialization in GMPDA. Small values for n and large values for σ and β were chosen to investigate the limits of the periodicity detection algorithms. For every combination of σ*, β, and n we generated 100 event time series with randomly drawn µ* ∈ [10, 350]. For test cases with multiple periodicities, we enforced the difference between the involved periodicities to be bigger than log(µ); otherwise, the generative curves become indistinguishable too quickly and multiple periodicity detection becomes too ill-posed.
The combination of the above settings resulted in 28800 test cases for each generative model. All algorithms were applied to identify the underlying periodicities for every generated test case.
Thus, for a fixed configuration of the parameters σ, β, and n, the performance of the algorithms is measured by accuracy, defined as the proportion of correctly identified periodicities averaged across the 100 generated test cases, taking values between zero and one.
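In code, the accuracy for one configuration can be computed by matching each true period to the nearest estimate within a tolerance. A sketch, assuming the ±σ tolerance around the true value used for the sensitivity analysis in Appendix B.4:

```python
def accuracy(true_mus, est_mus, sigma):
    """Fraction of true periodicities matched by some estimate within +/- sigma."""
    hits = sum(any(abs(t - e) <= sigma for e in est_mus) for t in true_mus)
    return hits / len(true_mus)
```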
In the following, we first present the results for |µ * | = 1 and identify valid ranges for n, β and σ. Second, within the valid range we compare the performance of GMPDA to that of the other algorithms for |µ * | = 1, 2, 3.
GMPDA Performance
In this section we focus on the performance of GMPDA with respect to $|\mu^*| = 1$, in order to select realistic limits for σ, β, and the number of events. Figures 3 and 4 display the performance of GMPDA for fixed β = 0 and |µ| = 1, for different values of σ and different numbers of events n, without and with curve-fitting, respectively. The confidence intervals (CI) in all the following figures (if present) are estimated as $\bar{x} \pm 1.96\,\mathrm{SEM}$, where $\bar{x}$ is the mean and SEM is the standard error of the mean. The results in Figures 3 and 4 show, as expected, that accuracy decreased with increasing σ and a decreasing number of events. Stated differently, with increasing variance, more events were required for an accurate detection. The figures can also be used to compare the performance of GMPDA with and without curve-fitting. GMPDA without curve-fitting performed worse except in the case of σ = log(µ). The explanation for this behavior is as follows: in the algorithm, the default initialization value of σ is log(µ), and therefore for this configuration GMPDA without curve-fitting worked with a known sigma. In all other cases, GMPDA with curve-fitting provided better results.
Next, to compare the effect of noise, we restricted our evaluation from here on to GMPDA with curve-fitting, due to its better performance. Please note, the comparison between results with and without curve-fitting can be found in Appendix B.1. Further, we focus on the case of $|\mu^*| = 1$ and known σ, which can be viewed as an ideal scenario, as only µ needs to be estimated. For this ideal case, we compared the effect of varying noise levels across a varying number of events on detection accuracy. Figure 5 shows the performance with respect to increasing amounts of noise in the time series, for the case with |µ| = 1 and σ = log(µ), separately for the Random Walk and Clock Models. For the Random Walk Model (Figure 5, panel a), performance was acceptable for signals with n ≥ 300 and noise up to β = 4; for n ≤ 300, performance dropped below 0.75 already for β ≥ 2. In comparison, the Clock Model was substantially more sensitive to noise (Figure 5, panel b), with acceptable results only for β ≤ 1.
In summary, in cases where the actual variance is unknown, GMPDA with curve-fitting outperformed GMPDA without curve-fitting. GMPDA was not suited for cases with fewer than 50 events, and its performance increased with an increasing number of events. GMPDA could also handle moderate to high amounts of noise; we show in the next section how this compares to other periodicity detection algorithms.
Comparison with alternative periodicity detection algorithms.
Next, we compared the GMPDA (with curve-fitting) algorithm to the other periodicity detection algorithms regarding their performance under varying noise and variance. With increases in noise and variance, the histograms of the inter-event intervals analyzed by all algorithms become less informative, i.e., the peaks that indicate periodicities become less identifiable. Therefore, we investigated the sensitivity to noise and to the different variances used for generating the periodicities. We first investigated the effect of varying levels of variance σ for cases where no noise was present, i.e., β = 0. The results for all algorithms and n = 100 are shown in Figure 6. The results for different values of n, averaged over all levels of β, can be found in Appendix B.3. For the Random Walk Model, GMPDA was very accurate up to σ = µ/8. Interestingly, all other algorithms performed worse when the variance was very small (σ = 1 and σ = log(µ)), a case where GMPDA excelled. FFT and AutoCor converged to the accuracy bound given by GMPDA for σ > 1, while the accuracy of EPeriodicity and Hist peaked at 0.8. For all methods, the performance dropped for σ ≥ µ/8. This behavior is distinctive for the Random Walk Model, where the variance increases with every step; the generative distributions therefore start to overlap, which happens faster when the variance is larger. Performance was generally lower for the Clock Model, which was also more sensitive to increases in the variance. GMPDA was sufficiently accurate only for σ = 1 and σ = log(µ), with a distinct drop in performance with increased variance. For the other algorithms, except the histogram methods, performance initially increased with increasing variance up to σ = µ/8 and strongly declined afterward. Next, we evaluated the performance of all methods with respect to increasing levels of noise, with the results shown in Figure 7. For these analyses, the variance was fixed to σ = log(µ) and the number of events to n = 100. The plots for all values of n can be found in Appendix B.2. For the Random Walk Model, GMPDA was insensitive to noise up to β = 1, with performance decreasing linearly after that. The performance of FFT and AutoCor mirrored that of GMPDA with slightly lower levels of accuracy. Of note was EPeriodicity's performance, which increased up to β = 1 and declined after that, while Hist was very sensitive to all levels of noise and performed worse than all other algorithms.
For the Clock Model, GMPDA behaved similarly, while the performance of the other methods was more sensitive to noise, and accuracy was generally lower than for the Random Walk Model.

(Figure 6 caption: Accuracy is plotted for different levels of variance σ for cases with one period (|µ| = 1), no noise (β = 0), and number of events n = 100.)

(Figure 7 caption: Accuracy is plotted against increasing levels of noise β for cases with one period (|µ| = 1), known variance, i.e., σ = log(µ), and number of events n = 100.)
The presence of moderate noise (i.e., with β ∈ [0.1, 0.7]) did not affect performance, except for EPeriodicity, where performance increased for noise levels up to β = 2.
Further, the maximal noise level that the algorithms could handle was not higher than β = 2, i.e., a signal-to-noise ratio of 1:2, one periodic event to two noise events.
Concluding the comparison, we averaged performance over all acceptable values of noise and variance, i.e., σ ∈ {1, log(µ), µ/16, µ/8} and β ≤ 2. The results are shown in Figure 8. Overall, the detection of a single periodicity was increasingly accurate with an increasing number of events, for all methods and for both the Random Walk and Clock Models (see Figure 8). For both models, periodicity detection with the Hist algorithm had very low accuracy, with a maximal performance of less than 0.4.
For the Random Walk Model, GMPDA outperformed the alternative approaches, with accuracy converging to one as the number of events increased; even for n = 30, its performance was larger than 0.75. FFT/AutoCor achieved similar performance when the number of events was larger than 300. In contrast, EPeriodicity's performance for the Random Walk Model was relatively poor, with a maximum of 0.6 for n = 500. For the Clock Model, GMPDA outperformed the alternatives when the number of events was smaller than 300. For numbers of events larger than 300, the performance of all approaches except Hist became equally good.
Performance w.r.t. |µ * | > 1
This section compares the performance of GMPDA (with and without curve-fitting) to that of the alternative methods for multiple periodicity detection, focusing on the set of sensible simulation parameters identified in Section 4.2.2, i.e., n = 50, 100, 300, 500, σ ∈ {1, log(µ), µ/16, µ/8}, and β ≤ 1, resulting in 8000 test cases for each setting of |µ| = 2 and |µ| = 3 and for every generative model. For comparison, the performance was summarized over n, µ, σ, and β and is visualized as a histogram, where the x-axis displays the number of correctly detected periodicities and the y-axis the number of test cases. Figures 9 and 11 show the results for the Random Walk Model for |µ| = 2 and |µ| = 3, respectively. Figures 10 and 12 show the results for the Clock Model for |µ| = 2 and |µ| = 3, respectively. For the case with two periodicities, |µ| = 2, GMPDA outperformed the alternative methods, both with and without curve-fitting. GMPDA without curve-fitting performed slightly better, suggesting that the currently deployed sigma optimization might require further development. The detection of three periodicities, |µ| = 3, was challenging for all methods, as shown in Figure 11. One possible explanation is that with more periodicities there are more interaction intervals, i.e., intervals between periodic events from different periodicities. Furthermore, at least for the Random Walk Model, the histogram becomes less and less identifiable, as σ grows with every subsequent step, which flattens the distributions responsible for the events; this effect is amplified when more than one periodicity is present. We conclude that GMPDA in its current version is not well suited for detecting more than two periodicities.
Our analysis showed that the computational performance had a strong dependence on the maximal number of allowed periodicities, max_periods. The CPU time for both models (averaged over the number of executions, the number of events n, and loss_length) is shown in Figure 13. All other parameters had a comparatively minor influence on the performance (data not shown here). In additional experiments not shown here, we also investigated the influence of the noise β on the computational performance of the algorithm. The results indicated that although, on average, the CPU time increased slightly with increasing noise β, the influence was minimal compared to that of the maximal number of allowed periodicities, max_periods. Finally, the maximal number of candidate periods, max_candidates, also affects the CPU time: a lower max_candidates resulted in faster execution but decreased algorithm accuracy.
Summary
We have evaluated the performance of the GMPDA algorithm on a large set of test cases, covering different configurations of the Random Walk and Clock Models. Our main findings indicate that, first, for time series following the Random Walk Model, GMPDA outperformed the alternative algorithms. Second, for time series following the Clock Model, GMPDA outperformed the alternative methods in cases with a low variance of the inter-event intervals. All algorithms struggled to identify more than two periodicities. In addition, we analyzed the sensitivity to critical simulation parameters across the different algorithms and found that both sigma and the number of events emerged as the strongest determinants of periodicity detection accuracy. The details of the analysis can be found in Appendix B.4.
REAL APPLICATION
Finally, we also applied the GMPDA algorithm to real data, specifically to recordings of leg movements during sleep from the publicly available MrOS data set [23], [24], [25], [26], [27].
From 2905 available sleep recordings of community-dwelling men 67 years or older (median age 76 years), we considered all recordings with at least 4 hours of sleep, a minimum number of scored events (10 leg movements, 10 arousals), and adequate signal quality based on various parameters in the MrOS database. This resulted in 2650 recordings satisfying our inclusion criteria, from which we randomly selected 100 recordings for this real application case. We chose to look at leg movements during sleep because it is known that in a relatively large proportion of the population (up to 23 percent [28]), these leg movements tend to occur in a periodic pattern, the so-called periodic leg movements during sleep (PLMS) [29], with a typical inter-movement interval of around 20 to 40 seconds [30]. We therefore expected to find some amount of periodicity in this data set, which, it could be argued, makes this analysis a real-life positive control. We applied GMPDA to raw data and to preprocessed data. In the preprocessing step, the time series of leg movements of every subject was segmented into sleeping bouts according to the following criteria: each bout (i) contained only sleep interrupted by not more than 2 minutes of wake, (ii) lasted at least 5 minutes, and (iii) contained at least four leg movements. This resulted in 579 sleep bouts from the 100 recordings, and GMPDA was applied independently to each bout. The number of events was less than 100 for 85% of the bouts, and for those, the average bout length was 2572 seconds.
GMPDA Configurations
The following GMPDA parameters were fixed for both data sets: L_min = 5, L_max = 200, max_iterations = 5, max_candidates = 15, loss_length = 400, max_periods = 5, noise_range = 5, loss_change_tol = 0.1. We chose the tolerance value for a decrease in the loss to be 0.1 (i.e., additional periodicities are only considered if their inclusion results in a change of loss greater than this tolerance value). This value is substantially higher than in the simulated examples (0.01) because, in this first real-life application, we aimed to generate robust results given the expected noise in the data. In this sense, the results presented here and the periodicities identified can be seen as "low-hanging fruits". Moreover, the detection of additional periodicities would be expected with different GMPDA parameters. For the MrOS data set, we assumed a Random Walk Model, which we applied both with and without curve-fitting of the variance parameter $\hat{\sigma}$. Consistently across all single records, the curve-fitting approach identified periodicities with a lower loss, so we describe only the curve-fitting results in the following. The GMPDA loss with and without curve-fitting is compared in Appendix B.5, Figures 24 and 25.
Reference loss
The GMPDA algorithm will always identify the periodicity with minimal loss. However, even if minimal, this loss might still be substantial. In a real-life application, where it can be assumed that some of the time series do not contain periodic events, there is a need to identify loss values that do not support the existence of periodicities in the data. We chose to address this issue by constructing a reference loss, which we derive from the minimal GMPDA loss returned for time series that contain only random noise.
For the MrOS data set, the length of the included bouts and the number of events ranged from 300 to 24000 seconds and from 5 to 430 events, respectively. In order to obtain an overall reference loss, we constructed 100 noisy bouts with uniformly distributed events for all combinations of the number of events [10, 30, 50, 100, 200, 400] and the length of the bout [500, 1000, 2000, 4000, 8000, 16000]. Applying GMPDA to each combination, we obtained an empirical distribution of loss values for cases where the events were generated randomly and did not exhibit any clear periodic pattern. The global MrOS reference loss was set to the 0.01 quantile of this distribution, corresponding to a value of 0.74468, rounded to 0.75 in the following.
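The construction of the global reference loss can be sketched as follows. Here gmpda_min_loss is a hypothetical stand-in for the minimal loss GMPDA returns on one event series; the repository's actual entry point is not specified here.

```python
import numpy as np

def reference_loss(event_counts, bout_lengths, gmpda_min_loss, reps=100, q=0.01, seed=0):
    """q-quantile of minimal GMPDA losses on purely random (uniform) event series."""
    rng = np.random.default_rng(seed)
    losses = []
    for n in event_counts:
        for n_t in bout_lengths:
            for _ in range(reps):
                s = np.sort(rng.choice(np.arange(1, n_t + 1), size=n, replace=False))
                losses.append(gmpda_min_loss(s, n_t))  # hypothetical GMPDA call
    return np.quantile(losses, q)

# e.g. reference_loss([10, 30, 50, 100, 200, 400],
#                     [500, 1000, 2000, 4000, 8000, 16000], gmpda_min_loss)
```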
In addition, we also estimated a local reference loss for every single bout in the MrOS data set, by generating 100 time series with the bout-specific length and number of events and taking the 0.01 quantile of the resulting loss distribution. A significant periodicity was identified when the GMPDA loss for a bout was lower than the local reference loss. However, the significant periodicities obtained with the local reference loss did not differ significantly from those obtained with the global one, and for simplicity we focus on the results obtained for a global reference loss of 0.75.
Results
The distribution of the GMPDA model loss for all time series is shown in Figure 14 and Figure 15 for the whole-night recordings and the single sleep bouts, respectively. The figures suggest that the GMPDA loss did not systematically change with the length of the time series. However, the loss tended to decrease with the number of events in the time series; more specifically, as already seen in the simulation experiments, for time series with a low number of events the resulting loss was not distinguishable from the loss found for non-periodic time series. Please note that Figure 15, which shows the distribution of the loss over the number of events in the MrOS data set, could also be used to suggest a minimum number of events needed for the GMPDA algorithm to detect a significant periodicity in this data set. For the records selected here, no significant periodicity was detected for any bout with less than 30 events (see the reference number of events in the figure); other records from the same data set or new data might suggest a different minimum. The main peak of the detected significant periodicities is around 20 seconds; another, rather unexpected, minor peak is at 15 seconds. Significant periodicities ranged from 10 to 33 seconds (except for two bouts with periodicities of 49 and 192 seconds). Periodicities around 20, i.e., µ ∈ {17, 18, 19, 20}, were present in 95 bouts (out of 183) from 77 subjects (out of 100). Periodicities around 15, i.e., µ ∈ {12, 13, 14}, were present in 30 bouts from 18 subjects.
Although the minimal periodicity and noise range were set to five, µ = 12 was the smallest periodicity identified by the algorithm for significant bouts.
CONCLUSION
In this paper, we developed the Gaussian Mixture Periodicity Detection Algorithm (GMPDA) to address the overlapping periodicity detection problem for noisy data. The GMPDA algorithm is based on a new generative model scheme that accounts explicitly for a Clock Model and a Random Walk Model. The Clock Model describes periodic behavior in systems in which variances do not change over time because a pacemaker governs the events in the system. Examples for a Clock Model are scheduled or seasonal behavior, like traffic patterns guided by working hours or migration patterns governed by seasons. In contrast, in Random Walk Models, the variances increase over time, making distant temporal predictions difficult or impossible. The Random Walk Model describes biological behavior, like footsteps or gene expression, where events only depend on the interval to the last event.
The main entry point for GMPDA is the empirical histogram of all forward-order inter-event intervals, i.e., for every event we consider not only the interval to the next event (onset to onset) but also the intervals to all subsequent events. This histogram contains information about the underlying prime periodicities, but also about the interaction intervals between events associated with different periodicities and about false positive noise. We approximate the overall noise by an explicit formulation under the assumption that the noise is uniformly distributed. The approximation accounts for all interaction intervals, whose length is limited by a user-defined parameter in the GMPDA algorithm. After its subtraction, the GMPDA algorithm hierarchically extracts the multiple overlapping periodicities by minimizing the loss, which is defined as the absolute difference between the parametrized histogram obtained by the generative scheme and the empirical histogram.
GMPDA is implemented in a computationally efficient way and is available open-source at https://github.com/nnaisense/gmpda. We have demonstrated its performance on a set of test cases, which included up to three overlapping periodicities with different values of the involved Gaussian noise and different numbers of events. For the Random Walk Model, we can conclude that GMPDA outperformed the FFT- and autocorrelation-based approaches and the EPeriodicity algorithm in identifying true prime periodicities. For the Clock Model, GMPDA outperformed the other algorithms in cases with a low variance of the intervals.
GMPDA performed well in the presence of noise for a signal-to-noise ratio of 1:1, and it performed adequately up to a ratio of 1:2, given an appropriate number of events. This appropriate number of observed events depends on the signal-to-noise ratio; in any case, the time series needs to include more than 30 actual periodic events for GMPDA to identify any periodicity.
Finally, we applied GMPDA to extract significant periods in real data, focusing on leg movements during sleep. The main results were (i) that GMPDA was able to identify the expected periodicities around 20 seconds, (ii) that we introduced a procedure to identify a data-set-dependent reference loss (of 0.75) that can be used to distinguish significant from spurious periodicities, and (iii) that our results suggest a minimal number of events (30) is required by GMPDA to perform periodicity detection successfully in biomedical data.
The general nature of the generative framework and the formulation of GMPDA allow for alternative statistical parametrizations of the event data. An extension, for example, would be to model events as a Poisson process, which for multiple periodic generative functions could similarly be modeled in terms of a sum of scaled probability density functions. Further, GMPDA can be extended towards periodicity extraction in non-stationary event time series. One approach could be to assume that the time series under investigation can be divided into locally stationary segments. Then, deploying a bottom-up segmentation strategy, we could estimate in an alternating manner the optimal switching points between the segments and the underlying prime periodicities for each stationary segment. Another approach could be to incorporate the Monte Carlo based particle approach for adaptive periodicity detection presented in [20]. This would allow GMPDA to adapt to non-stationary changes and remains future work.
APPENDIX A GMPDA ALGORITHM
In this section, the steps involved in the GMPDA algorithm are outlined in detail.
A.1 Approximation of |ζ(µ)|
In equation (16) we estimated the number of interaction intervals as a decreasing linear function $E[\zeta(\mu)] = z(1 - \mu/N_T)$. An approximation of $z$ is required, as the amount of noise and the number of true periodicities are unknown a priori. Here, we propose to estimate $\hat{z}$ from the observed counts $D(\mu)$ at small lags $\mu \le z_{min}$. This approximation follows the idea that, first, $z$ should be close to the maximal value of $E[\zeta(\mu)]$, and that, second, all non-zero contributions to $D(\mu)$ for $\mu < \arg\min_\mu \mu^*$ are due to the interaction intervals, asymptotically for an increasing number of events and/or an increasing number of involved prime periodicities. For the GMPDA algorithm we set by default $z_{min} = L_{min} - 1$. Thus, in the algorithm, the length of the interaction intervals is limited either by the minimal expected periodicity or can be adjusted by the user. We verified this approximation empirically on a set of 50000 test cases, with and without noise, and with two a priori known, randomly chosen prime periodicities $\mu^*_{1}, \mu^*_{2} \in [10, 60]$: the approximation $\hat{z}(1 - \mu/N_T)$ provided on average a good linear fit, since the distribution of the mean errors between the approximation and the observed interaction-interval counts was centered at zero and was approximately normal (data not shown).
However, it must be stressed that the assumption of a uniform distribution of the interaction intervals between the periodic and the noise events may be unrealistic in real-life data sets, and there is currently no alternative available for estimating $\zeta$ from the observed data. This remains an area of improvement for the GMPDA algorithm.
A.2 Candidate Period Identification
The proposed algorithm hierarchically extracts a set of candidate periodicities which can explain $D(\mu)$, using an integral convolution approach. The method works by iteratively selecting periodicities which explain many of the observed intervals, and then subtracting the integer-multiple intervals which can be explained by these periodicities.
The algorithm takes as input an initial guess $\hat{\sigma}$, a range in which to search for periodicities $\{L_{min}, L_{max}\}$, a number of hierarchical periodicity extraction steps max_iterations, and a maximal number of periodicities to extract at each hierarchical iteration, max_candidates.
Recall that D(µ) counts the number of times a given interval µ appears between any two events in the time series.
One tempting method would be to select $\arg\max_\mu D(\mu)$ as the first prime period $\hat{\mu}_1$. However, for small or noisy real-world data, the argmax may not be the prime period. We introduce the notion of an integral convolution, in which we integrate around a fixed $\mu$ to capture how much of the observed intervals in $D(\mu)$ are explainable by that particular "mean" periodicity; this also acts to smooth $D(\mu)$. We therefore define a function $\tau(\hat{\mu}, \hat{\sigma}, D(\mu))$ which acts as a symmetric convolution kernel across $D(\mu)$, centered at the candidate mean $\hat{\mu}$ and its integer multiples. This function provides a point-wise estimate of the explained data for a given $\mu$ in $D(\mu)$. Because we do not want to calculate the full loss function (32) at this stage, due to computational expense, we approximate the loss with the function $\tau(\hat{\mu}, \hat{\sigma}, D(\mu))$, summing the counts of $D(\mu)$ in a window around each integer multiple $m\hat{\mu}$, with the window width governed by $\hat{\sigma}$ for the Clock Model (36), and by the linearly growing variance $m\hat{\sigma}^2$ for the Random Walk Model (37). Functions (36) and (37) approximate, for each candidate prime period, how much of the data can be explained by this periodicity within some confidence interval about $\mu_p$. If a periodicity is present and persistent throughout the time series, integer multiples of the periodicity $\hat{\mu}_p$ will also frequently appear in (36) and (37); we can use this information to select the periodicity which explains the most data.
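A minimal sketch of such a windowed score and of the candidate extraction built on it is given below. The window multiplier and the per-period normalization by µ are illustrative assumptions, not the repository's exact choices.

```python
import numpy as np

def tau(mu_hat, sigma_hat, d, model="rw", width=2.0):
    """Windowed explained-interval score around integer multiples of mu_hat."""
    total, m = 0.0, 1
    while m * mu_hat < len(d):
        # window: fixed width for the Clock Model, growing with sqrt(m) for the RWM
        half = width * sigma_hat * (np.sqrt(m) if model == "rw" else 1.0)
        lo, hi = max(1, int(m * mu_hat - half)), min(len(d) - 1, int(m * mu_hat + half))
        total += d[lo:hi + 1].sum()
        m += 1
    return total

def candidates(d, l_min, l_max, sigma_for, k=15, model="rw"):
    """Score every admissible period, keep the k best (GMPDA iterates this,
    subtracting explained intervals between rounds)."""
    scores = {mu: tau(mu, sigma_for(mu), d, model) / mu for mu in range(l_min, l_max)}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```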
The GMPDA algorithm performs reasonably well at identifying the true periods $\mu^*$ as $\hat{\mu}$ without the use of the loss function or adjustments to $\hat{\sigma}$. But without a measure of the relative goodness of these estimates, we have no stopping criterion for finding multiple periodicities. Instead, we repeat this procedure for max_iterations iterations.
Once we have initialized (hierarchically) a set of candidate prime periods $\hat{\mu}_{init}$ using this "fast" method, we compute a better estimate of the variance and of the loss using the methods elaborated in the following sections.
A.3 Non-linear least squares fitting for $\hat{\sigma}$
We can improve our guess of the variance $\hat{\sigma}$ by formulating a non-linear least-squares curve-fitting optimization problem, in which the parameters are those of Gaussian PDFs. That is, we consider $D(\mu)$, which can be modeled as a sum of Gaussian PDFs, together with a set of candidate means $\hat{\mu}$ for those Gaussian PDFs. For a fixed set of $\mu_p$ we initialize guesses for $\hat{\sigma}_p$, for $p = 1, \dots, P$, and deploy the Trust Region Reflective algorithm to obtain an update of the guesses for $\hat{\sigma}_p$. This is implemented with curve_fit() from SciPy's optimization package.
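The variance refinement can be expressed with SciPy directly. A minimal sketch for a single candidate period under the Random Walk parametrization of eq. (29); the repository version fits all candidates jointly.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_sigma(d, mu_hat, n_t, loss_length, sigma0):
    """Refine sigma by least-squares fitting the mixture shape of eq. (29) to D(mu)."""
    mu_axis = np.arange(1, loss_length + 1)

    def model(x, sigma):
        g = np.zeros_like(x, dtype=float)
        for m in range(1, int(loss_length / mu_hat) + 2):
            weight = max(n_t / mu_hat - (m - 1), 0.0)
            g += weight * norm.pdf(x, m * mu_hat, np.sqrt(m) * sigma)
        return g

    # Trust Region Reflective ("trf") with a positivity bound on sigma
    popt, _ = curve_fit(model, mu_axis, d[1:loss_length + 1].astype(float),
                        p0=[sigma0], method="trf", bounds=(1e-3, np.inf))
    return popt[0]
```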
A.4 Selecting true parameters: Loss Function
The parameter estimates $\hat{\mu}_{init}$, $M$, $\hat{\sigma}$ are assessed with respect to $D(\mu)$, the observed intervals between events in the time series, using a loss function. This loss function describes the proportion of intervals in the data which can be explained by (i) the parametrized "generative function" $G_M(\hat{\mu}, \hat{\sigma})$ implied by the estimates, which is asymptotically the same as the expectation of $D(\mu)$ if $\hat{\mu} = \mu^*$ and $\hat{\sigma} = \sigma^*$, and (ii) the noise approximation.
Computing the loss function (32) is expensive because, in the case of the Random Walk Model, $G_{RW}$ requires computation at increasingly large intervals, since the variance terms grow linearly and thus the area covered with some density by a single Gaussian distribution grows at the same rate. Thus, we only want to compute $G_M$ for a few very probable periodicities (the set $\hat{\mu}_{init}$ computed by the fast algorithm), using the optimal variance guesses $\hat{\sigma}$, and only across a limited range of intervals specified by loss_length, chosen a priori.
We also adjust the scaling factor of the generative function to account for sections of the time series which may not have any events (for instance, missing values, or large intervals of the time series with no observations). This concerns the scaling factor $c_p$, for $p = 1, \dots, P$, of $G_M(\hat{\mu}, \hat{\sigma})$ for the Clock Model and the Random Walk Model in equations (28) and (29), respectively. The adjusted scaling factor is:

$$c_p = \frac{N_T(\hat{\mu}_p)}{\hat{\mu}_p}, \qquad (38)$$

where $N_T(\hat{\mu}_p)$ is the sum of the intervals which are smaller than $\hat{\mu}_p + 2\hat{\sigma}_p$. This correction ensures that we only count "possible appearances" on sections of the time series which actually have events. Without this scaling factor, missing values would bias our results towards higher frequencies, and the scaling factor would be far too large for lower frequencies which may appear in the time series, but with intervals of no events. Our final loss function is therefore constructed using $D(\mu)$ computed from the real data, $E[\zeta(\mu)]$, $G_M(\hat{\mu}_{init}, \hat{\sigma})$, and one additional parameter, loss_length. This parameter manages the high variance at high integer multiples and decreases the computational complexity. In the Random Walk Model, for high integer multiples of a periodicity, the implied Gaussian distributions of intervals begin to have large tails, and the distributions' density at the mean decreases. Meanwhile, for the Clock Model, estimating many integer multiples is not actually necessary to compute the true periods. Therefore, the loss we compute in the algorithm is:

$$L_M(\hat{\mu}, \hat{\sigma}) = \sum_{\mu=1}^{loss\_length} \left| D(\mu) - \hat{z}\left(1 - \frac{\mu}{N_T}\right) - G_M(\mu; \hat{\mu}, \hat{\sigma}) \right|. \qquad (39)$$

Within the algorithm, we compute $G_M$ for all combinations of the set $\hat{\mu}_{init}$ up to order max_combi, and select our true set of periodicities as the one which minimizes (39). Please note, the number of true periodicities is not known a priori; the optimal value of max_combi is the one that minimizes the loss. However, in real applications we might encounter weak peaks in $D(\mu)$ around very large $\mu$, due to noise or the influence of large/slow interaction intervals. Adding these to the set of prime periodicities will decrease the loss, but will not contribute to the identification of intrinsic periodicities. To account for this, GMPDA provides the possibility to control the magnitude of the loss decrease by a parameter loss_change_tol, with the loss typically being of magnitude 1 and lower, see Section 5. That is, setting this tolerance parameter to a very low number, e.g., loss_change_tol = 0.001, will result in including more periodicities (some of which might be due to noise), while a larger number, e.g., loss_change_tol = 0.1, will be more conservative.
APPENDIX B PERFORMANCE
B.1 Performance for |µ| = 1
Here the performance of GMPDA with respect to noise, with and without curve-fitting, is presented. For both models, the case n = 10 does not perform sufficiently well, indicating that the number of events must be well above ten.
B.2 Comparison to alternative Methods wrt. Noise β
Here the performance of GMPDA and of the alternative methods is compared with respect to increasing noise. In the following figures, the accuracy of all the involved methods is plotted for |µ| = 1 and different numbers of events n, averaged over all considered values of the variance σ.
B.3 Comparison to alternative Methods wrt. Variance σ
Here the performance of GMPDA and of the alternative methods is compared with respect to increasing variance. In the following figures, the accuracy of all the involved methods is plotted for |µ| = 1 and different numbers of events n, averaged over all considered values of the noise β.
B.4 Sensitivity Analysis
We summarized differences in sensitivity to critical simulation parameters across the different algorithms in Table 1. Based on the simulation results obtained in Section 4, we used generalized linear mixed models with the number of periods |µ| = 1, 2, 3 nested within trials (n = 38400), with the response being the accurate detection of a single periodicity (coded as 1 if the estimate is within an interval of ±σ around the true value) and the independent factors being the number of events, the number of periods, the noise level, and sigma. Mixed logistic models were computed separately for the Clock Model and the Random Walk Model and for each algorithm. Table 1 lists the ANOVA type II sums of squares (SoS), i.e., the SoS of each main effect after the introduction of all other main effects. While the SoS are not directly comparable between models, their relative contributions are, and they suggest that for the Random Walk Model the number of events had a major effect in all algorithms except the FFT histogram algorithm. The number of periods had a small to moderate effect, except for E-Periodicity, where it did not play a role. Both E-Periodicity and GMPDA with curve-fitting were very sensitive to the noise level, and across algorithms, variations in sigma had one of the strongest effects on accuracy, again with the exception of the E-Periodicity algorithm.
The results for the Clock Model were largely similar, with some notable exceptions. Compared to the Random Walk Model, the E-Periodicity algorithm was considerably less sensitive to variations in noise but more sensitive to variations in variance. Overall, the three algorithms E-Periodicity, FFT, and FFT autocorrelation showed a similar pattern, with the number of events having the strongest influence, sigma being the second strongest, and noise and the number of periods having only relatively minor effects. For the two GMPDA algorithms, the strongest effect was seen for sigma.
Across all models and algorithms, both sigma and the number of events emerged as the strongest determinants of periodicity detection accuracy.
B.5 Real Application: Loss
This section shows the GMPDA loss obtained with and without curve-fitting for the MrOS data set. Figure 24 shows the loss for the 100 recordings, while Figure 25 shows the loss comparison for the bouts.
216508640 | pes2o/s2orc | v3-fos-license | Assessing the genetic diversity in Argopecten nucleus (Bivalvia: Pectinidae), a functional hermaphrodite species with extremely low population density and self‐fertilization: Effect of null alleles
Abstract Argopecten nucleus is a functional hermaphroditic pectinid species that exhibits self‐fertilization, whose natural populations have usually very low densities. In the present study, the genetic diversity of a wild population from Neguanje Bay, Santa Marta (Colombia), was estimated using microsatellite markers, and the effect of the presence of null alleles on this estimation was assessed. A total of 8 microsatellite markers were developed, the first described for this species, and their amplification conditions were standardized. They were used to determine the genotype of 48 wild individuals from Naguanje Bay, and 1,010 individuals derived from the offspring of 38 directed crosses. For each locus, the frequencies of the identified alleles, including null alleles, were estimated using the statistical package Micro‐Checker, and the parental genotypes were confirmed using segregation analysis. Three to 8 alleles per locus with frequencies from 0.001 to 0.632 were detected. The frequencies of null alleles ranged from 0.10 to 0.45, with Ho from 0.0 to 0.79, and He from 0.53 to 0.80. All loci were in H‐W disequilibrium. The null allele frequencies values were high, with lower estimations using segregation analysis than estimated using Micro‐Checker. The present results show high levels of population genetic diversity and indicate that null alleles were not the only cause of deviation from H‐W equilibrium in all loci, suggesting that the wild population under study presents signs of inbreeding and Wahlund effect.
reef zones, at depths from 5 to 50 m (Díaz & Puyana, 1994; Lodeiros et al., 1993). Argopecten nucleus is a species with a short life span (1-2 years), characterized by early sexual maturity (at 3 months) (Lodeiros & Freites, 2008; Velasco & Barros, 2008) and spawning activity occurring throughout the year (Lodeiros et al., 1993; Velasco, Barros, & Acosta, 2007). This species is also a simultaneous hermaphrodite with external fertilization that is able to release the gametes of one sex first, usually male, before releasing those of the other sex, with a time lapse of approximately 15 min between them.
Studies on the population genetic structure of marine species attract considerable interest, since they allow us to assess the state of conservation of natural populations (Piñero, Caballero-Mellado, Cabrera-Toledo, & Zúñiga, 2008; Smith, 1996), to adopt proper management practices in their fisheries, and to develop aquaculture strategies that reduce the risk of inbreeding depression and fitness loss (Gjedrem & Baranski, 2009), thus contributing to sustainable production over time (Petersen, Baerwald, Ibarra, & May, 2012; Wang, Fu, & Xia, 2013). To date, there is no information about the genetic structure of wild populations of A. nucleus, but the low population densities usually found for this species suggest the existence of high levels of genetic drift and inbreeding, as previously reported for other mollusks such as Haliotis iris (Smith & Conroy, 1992). This situation could be further exacerbated by the effects of spontaneous self-fertilization, which has been estimated at 12% ± 1% on average for reproduction in hatchery conditions. In Argopecten irradians irradians, it has been estimated that self-fertilization can reduce the genetic diversity of a population by 10%-40% in just one generation (Zheng, Zhang, Guo, & Liu, 2008). However, although similar or even higher self-fertilization rates have been observed in A. purpuratus (Winkler & Estévez, 2003), no significant deviations from the Hardy-Weinberg (H-W) equilibrium for allozyme loci have been observed in wild populations, and when present, deviations only affect some loci, but not all of them (von Brand & Kijima, 1990; Galleguillos & Troncoso, 1991; Moraga et al., 2001).
The use of selectively neutral genetic markers makes it possible to estimate genetic diversity and obtain information about the genetic structure of populations, which can be used to infer the occurrence of phenomena such as genetic drift and inbreeding (Blouin, 2003; Qin, Liu, Zhang, Zhang, & Guo, 2007; Wu, Feng, Ma, Pan, & Liu, 2015). Furthermore, such markers are valuable for understanding the factors regulating population dynamics (Bert et al., 2014). Among these markers, microsatellites (SSRs, Simple Sequence Repeats) are particularly convenient because they usually display Mendelian inheritance and codominance. SSRs are short DNA sequences of 2-6 base pairs that can repeat in tandem a variable number of times in the genome and are frequently flanked by single-copy sequences, which makes it possible to design specific primers and to amplify them using the polymerase chain reaction (PCR; Liu & Cordes, 2004; Tagu & Moussard, 2003). They exhibit high polymorphism and allele numbers per locus, and since they are noncoding sequences, their variation is usually not affected by natural selection (Liu & Cordes, 2004). These markers have been very useful for paternity analysis in bivalves (Taris, Baron, Sharbel, Sauvage, & Boudry, 2005), for the assessment of diversity and population genetic structure (Wang et al., 2013), and for linkage studies and the building of genetic maps (Petersen et al., 2012; Qin et al., 2007).
Although most microsatellite loci show random recombination, deviations from Mendelian proportions have been reported in experimental crosses for different species, associated with the presence of null alleles (Baranski, Rourke, Loughnan, Austin, & Robinson, 2006; De Meeûs, 2018; Hedgecock, Li, Hubert, Bucklin, & Ribes, 2004; Li, Park, Kobayashi, & Kijima, 2003; Zhan et al., 2007). Null alleles appear when a mutation affects one of the primer binding sites in the flanking region of the microsatellite and prevents its amplification, so heterozygotes for these alleles are erroneously genotyped as homozygotes (Lewin, 2004).
In order to correct the bias that the presence of null alleles can introduce into population genetic analyses, their frequency can be inferred using different theoretical models (Jones et al., 1998; van Oosterhout, Weetman, & Hutchinson, 2006; Xu et al., 2001). For such purposes, statistical packages based on maximum likelihood algorithms use the differences between the observed and the expected heterozygosity in a sample to estimate their frequencies (Brookfield, 1996; Chakraborty, de Andrade, Daiger, & Budowle, 1992; van Oosterhout, Hutchison, Wills, & Shipley, 2004). Software packages such as Micro-Checker indicate the presence of null alleles if the combined probability of a Fisher test shows a significant, general excess of homozygotes that is evenly distributed among the homozygote classes (van Oosterhout et al., 2004). The presence of null alleles has been observed in different aquatic species, such as the bivalves Donax trunculus (Rico et al., 2017), Pinna nobilis (González-Wangüemert et al., 2015), Pinctada margaritifera (Lemer, Rochel, & Planes, 2011) and Crassostrea gigas (Hedgecock et al., 2004), and the flatfish Scophthalmus maximus (Borrell et al., 2004), among others. These estimations, however, do not allow correction of the indices of genetic diversity, and the precision and confidence intervals of the estimates can be very variable depending on the real frequency of the null alleles and on other factors such as the sample size, the number of sampled generations, the level and duration of genetic bottlenecks in the population, nonrandom mating, and changes in density or migration rates (Dabrowski et al., 2014). In addition, the frequencies of null alleles can be overestimated due to the occurrence of inbreeding (Chybicki & Burczyk, 2009).
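As an illustration of the heterozygosity-based logic behind these packages, the sketch below implements the Brookfield (1996) estimator 1, r = (He − Ho) / (1 + He); the function name and example values are ours, not from the study.

```python
# Minimal sketch of the Brookfield 1 null-allele estimator: a positive
# value reflects the homozygote excess attributed to null alleles.
def brookfield1(he: float, ho: float) -> float:
    """Null allele frequency from expected (He) and observed (Ho)
    heterozygosity at one locus."""
    return (he - ho) / (1.0 + he)

# Example with heterozygosities in the range reported in this study
print(brookfield1(he=0.80, ho=0.30))  # ~0.28
```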
An alternative method to estimate the actual frequency of null alleles is segregation analysis, which allows verification of the presence of recessive alleles in the genotype of individuals with a dominant phenotype, based on family data (Elston, 1981). Under an appropriate crossing design, it can be used to verify the genotype of a putatively homozygous individual for a dominant gene, based on the genotypes of its progeny and the expected frequencies under the corresponding Mendelian proportions (Zhan et al., 2007). This kind of analysis has been used to test for the presence of null alleles in the eastern oyster (Crassostrea virginica; Reece, Ribeiro, Gaffney, Carnegie, & Allen, 2004), rainbow trout (Oncorhynchus mykiss; Holm, Loeschcke, & Bendixen, 2001), the beetle Pissodes strobi (Liewlaksaneeyanawin, Ritland, & El-Kassaby, 2002), hop (Humulus lupulus; Vašek et al., 2010) and coffee (Coffea liberica; López, Rutherford, & Moncada, 2014), among other species.
Therefore, segregation analysis allows correction of the biases generated by the presence of null alleles in microsatellite markers, thus making the estimates of genetic diversity more reliable when such alleles are present at high frequencies in a population.
At present, there are no microsatellite markers described for A. nucleus, and hence this tool is not currently available to study relevant genetic aspects of its populations. In addition, there is no information on the genetic diversity of this scallop species in natural populations. Therefore, the purpose of the present study is to identify and characterize the first set of primers for microsatellite loci in A. nucleus and use them to estimate the genetic diversity in a wild population from Neguanje Bay, Santa Marta (Colombia), as well as to assess the effect of the presence of null alleles on the estimation of genetic diversity in a species with self-fertilization and very low population densities in the wild.
| Experimental design
To evaluate the levels of genetic diversity in A. nucleus, primers for 8 SSR loci were developed. At the same time, scallops from a wild population were collected and conditioned in captivity. They were later induced to spawn under controlled conditions, performing cross-fertilization following a nested design. Observed self-fertilization rates were registered during the spawning process for each individual used as dam. The progeny of each full-sib family was cultured separately until they reached adulthood. Tissue samples were extracted from each parent and their progenies for microsatellite loci genotyping. The genotype of each parent was confirmed by segregation analysis of the SSR alleles in its progeny.
| Obtention of biological material and DNA
In order to develop and standardize the microsatellites, samples of approximately 0.1 cm³ were obtained from the adductor muscle of 10 adults of A. nucleus (43.6 ± 4.3 mm shell length), which were produced by cross-fertilization in the Laboratorio de Moluscos y Microalgas of Universidad del Magdalena and cultured in suspended systems in Taganga Bay (Santa Marta) (11°16′03″N, 74°11′24″W). Tissue samples were preserved in 99% ethanol at room temperature until their analysis. DNA was extracted using the phenol-chloroform technique (Gardes & Bruns, 1993), until obtaining 20 µl with a DNA concentration of 50 ng/µl. DNA integrity was confirmed by electrophoresis in a 1.2% agarose gel stained with SYBR Green. The amount of DNA was quantified in an Epoch instrument, and its purity was verified by the 260/280 nm absorbance ratio, accepting DNA samples with ratios of 1.8-2.0. The DNA was sent to the OMICS-Solutions company, which provided a genomic library enriched with 300 microsatellites (SSRs) and their respective flanking regions. The microsatellite reads were characterized based on the number of nucleotides of their motif, their structure, and the number of repeats. Microsatellite loci with clearly distinct amplification products and consistent amplification were selected for the study. The conditions of PCR amplification were standardized for the 10 loci, testing different annealing temperatures: an initial denaturation at 95°C for 5 min was used, followed by 30 cycles of denaturation at 94°C for 45 s and an annealing gradient (Ta) of 46 to 56°C for 30 s (Table 1). These general conditions were used for PCR in the following amplifications, with adjustments of the annealing temperature for each locus, as shown in Table 1.
| Segregation analysis
From crosses of 48 wild scallops (41.2 ± 1.8 mm shell length) collected from Neguanje Bay, Santa Marta (Lat. 11°20′03″N, Long. 74°09′24″W), a total of 1,010 individuals were obtained from 38 full-sib and 10 half-sib families. Brooders were obtained as seed from artificial collectors suspended in the bay area, which were provided by the Instituto de Investigaciones Marinas y Costeras (INVEMAR, Santa Marta). Families were built following the protocols for spawning and culture described by Velasco and Barros (2007, 2009). Crosses were performed using a nested design, according to which the sperm of one scallop used as male (sire) was used to separately fertilize the oocytes of 2-4 individuals (dams). Thus, each dam was crossed with only one male, and no scallop was simultaneously used as mother and father. The self-fertilization rate was estimated as the frequency of nonfertilized oocytes that spontaneously initiated embryonic development within 4 hr after spawning in a sample (10 ml) from each individual used as female (Winkler & Estévez, 2003).
When the progenies were 9 months old, the scallops were removed from the culture system and samples of mantle tissue (approx. 0.05 cm³) were collected (60 scallops per full-sib family). To estimate the genetic diversity of the population of Neguanje Bay, the genotype of each individual was assessed for the previously standardized microsatellite loci. The putative genotype of each parent was confirmed based on the genotypes observed in the offspring of each mating. The existence of full-sib and half-sib families allowed the genotype of each father to be confirmed in two or more crosses with different mothers, thus reducing the uncertainty in determining genotypes that can be introduced by the potential occurrence of self-fertilization. An individual was considered to be the result of self-fertilization when its genotype at 2 or more of the loci analyzed could only arise if both alleles came from the individual used as mother, and the genotype at the remaining loci did not contradict this hypothesis. For this analysis, absence of self-fertilization in the fathers was assumed. Therefore, in crosses between two putative homozygotes for different alleles whose progenies presented three genotypes, both parents were assumed to be heterozygous for a null allele, regardless of the genotypic proportions in the progeny of that cross.
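As a hedged illustration of the Mendelian check underlying such inferences, the sketch below tests observed genotype counts in one family against expected segregation proportions; the counts and ratios are hypothetical, not taken from the study data.

```python
# Sketch: chi-square goodness-of-fit test of Mendelian segregation
# in a single full-sib family (e.g., an Aa x Aa cross, 1:2:1).
from scipy.stats import chisquare

observed = [18, 27, 15]      # hypothetical genotype counts
ratios = [0.25, 0.50, 0.25]  # expected Mendelian proportions
n = sum(observed)
stat, p = chisquare(observed, f_exp=[r * n for r in ratios])
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # p > 0.05: consistent with 1:2:1
```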
| Data analysis
The size of each amplification product from the individuals used as broodstock and their progeny was determined with the software QIAGEN ScreenGel QIAxcel v1.0. The polymorphic information content (PIC) was estimated with the software CERVUS 3.0.7. For the wild brooders, the null allele frequencies were estimated using the models of van Oosterhout (Girard & Angers, 2008) and Brookfield 1 and 2 (Brookfield, 1996). The F_IS statistic and deviations from H-W equilibrium were estimated with the GenAlEx 6.4 software (Peakall & Smouse, 2009). Expected heterozygosity was computed with the sample-size correction, He = (2N/(2N − 1))(1 − Σ p_i²), where p_i is the frequency of the ith allele in the population and N is the sample size (Cockerham, 1969). For data from the segregation analysis, observed (Ho) and expected (He) heterozygosity, the frequency of null alleles, and F_IS were estimated according to Nei (1975).
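A minimal sketch of this heterozygosity computation under the sample-size-corrected formula above; the allele frequencies and N below are illustrative only.

```python
# Sample-size-corrected expected heterozygosity for one locus:
# He = (2N / (2N - 1)) * (1 - sum(p_i^2))
def expected_heterozygosity(freqs: list[float], n: int) -> float:
    return (2 * n / (2 * n - 1)) * (1 - sum(p * p for p in freqs))

print(expected_heterozygosity([0.632, 0.200, 0.168], n=48))
```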
The estimates of the different genetic parameters obtained for the wild (= parental) animals with the two methods for detection of null alleles were compared using a one-way ANOVA, followed by Tukey's post hoc test for multiple comparisons. The normality and homoscedasticity of all the mentioned response variables were previously examined with the Kolmogorov-Smirnov and Cochran's C tests, respectively. In the case of He, this parameter was analyzed using the nonparametric Kruskal-Wallis test, since the data did not comply with the parametric requirements of homoscedasticity and normality (Zar, 1999). The consistency of the frequencies of null alleles estimated with both methods was verified using Spearman correlation analysis. All statistical analyses were performed using the software Statgraphics Centurion XVI.I.
| RESULTS
From the 10 primer pairs selected, two pairs were not included in the analysis since they could not be consistently amplified by PCR, even when using different annealing temperatures and reactant concentrations. The remaining 8 primer pairs exhibited reproducibility and consistent amplification over time. The designation of each locus, the motif of each microsatellite, and the sequences of the primer pairs (forward and reverse) are detailed in Table 1, in addition to the conditions for optimal PCR amplification, the size range of the alleles at each locus, and the polymorphic information content (PIC).
Between 0% and 25% of the zygotes derived from each family exhibited embryo development in unfertilized oocyte samples, with an average of 12% ± 7%. Of 304 potential family-by-locus genotype combinations, only 255 were observed, and 214 of them were useful to infer the probable genotype of the parents. The remaining 41 genotype combinations were not informative, since both putative parents and their progenies were phenotypically homozygous, or the number of individuals recovered per family was too low to verify the Mendelian proportions (see Appendix S1). In 76% of the families, allele segregation was observed for all markers. In 8 of the 38 families analyzed (21%), 1 to 5 individuals (4%-20%) exhibited a genotype indicating the occurrence of self-fertilization, that is, the genotype at two or more loci could only arise from the fusion of gametes produced by the individual used as mother.
All loci were polymorphic in the wild population (= parental), with 4 (An010) to 9 (An007) alleles per locus (Appendix S1). The allelic frequencies ranged from 0.005 (An001) to 0.313 (An007) and were similar to the range of frequencies obtained using the software Micro-Checker 2.2.3 and from the segregation analysis, with similar standard deviations of the allele frequencies for both methods (Table 2). The frequencies of null alleles obtained from the segregation analysis ranged from 0.10 to 0.41 (Table 2), and the estimates inferred using Micro-Checker 2.2.3 were, on average, 11.5% ± 9.8% higher than those based on the segregation analysis (p < .05, Table 2). No significant correlation across loci was observed between the two methods for the estimation of null allele frequencies (Figure 1) (r < .3114; p > .05). There were no statistical differences between null allele frequencies estimated using the different models (van Oosterhout, Brookfield 1 and 2) (p > .05). The observed heterozygosity (Ho) ranged between 0 and 0.79, depending on the locus and the estimation method for allele frequencies, while the expected heterozygosity (He) ranged from 0.53 to 0.80, with higher values obtained based on the segregation analysis (p < .05, Table 3). The values of F_IS ranged between −0.01 and 1.00, with lower values estimated based on the segregation analysis (p < .01, Table 3).
| DISCUSSION
The present study reports the first 8 microsatellite markers for A. nucleus, which were designed, standardized, and used to estimate the genetic diversity in a wild population of this species. This population exhibited relatively high values of genetic diversity, although a considerable excess of homozygotes with respect to Hardy-Weinberg equilibrium was observed. The segregation analysis shows that Micro-Checker 2.2.3 tends to overestimate the null allele frequencies, generating a bias that was not consistent across loci.
The intrapopulation genetic diversity of wild organisms at neutral loci, such as microsatellites, is mainly driven by the counteracting effects of genetic drift and inbreeding, which tend to reduce it, and the effects of mutation and migration, which are able to increase it (Gjedrem & Baranski, 2009; Valero, 1998). Null alleles are particularly frequent in marine invertebrates (Hare, Karl, & Avise, 1996; Hedgecock et al., 2004), and their occurrence can result in the incorrect classification of individuals heterozygous for these alleles as homozygotes for the dominant alleles (Dakin & Avise, 2004; Lemer et al., 2011; Pompanon, 2005). Other factors that can further reduce the observed frequency of heterozygotes in a population are high levels of inbreeding (Chapuis & Estoup, 2007) and the Wahlund effect (Wahlund, 1928).
Both the statistical tools and the segregation analysis evidenced the presence of null alleles at all the examined microsatellite loci in A. nucleus, at frequencies high enough to significantly influence the estimated values of heterozygosity and inbreeding in this species. The frequency of null alleles per locus was high (39%-45%), with the exception of locus An007 (12%). Their frequencies estimated using statistical methods, however, were higher than those obtained from the segregation analysis, and no significant correlation between the results of the two methods was observed. Segregation analysis of alleles allows the parental genotype to be inferred from the parental phenotype and the genotypes observed in the offspring (Reece et al., 2004). Although this analysis is methodologically more laborious, it is a direct and reliable technique to estimate the frequency of null alleles in a population. However, if both parents are phenotypically homozygous for the same dominant allele, it is not possible to infer with certainty whether one or both parents are heterozygous for a null allele with the same genotype. Thus, the frequency of null alleles estimated using this method could be slightly underestimated. The mating of one male with several females reduces this risk by increasing the certainty about the genotypes of the males and improving the reliability of the inferences about the mothers' genotypes. However, it does not completely prevent the difficulties that arise in determining the genotypes of the fathers, as observed for the individuals (20%) whose genotype could not be confirmed for some loci with this method in the present study. In addition, if the individuals of A. nucleus used as parents were able to contribute to the offspring by self-fertilization, this technique could overestimate the frequencies of null alleles.
The estimation of the frequency of null alleles using statistical methods, such as those included in software packages like Micro-Checker v.2.2.3 (van Oosterhout et al., 2004; Shipley, 2003) and GENEPOP v.3.4 (Raymond & Rousset, 1995), is based on the assumption that the presence of null alleles produces an excess of homozygotes in the population dataset in comparison with Hardy-Weinberg expectations (van Oosterhout et al., 2004). Both inbreeding and the Wahlund effect can cause a consistent increase in homozygosity across the genome, unlike the effect of null alleles, whose influence varies among loci depending on the null allele frequencies present at them (Girard & Angers, 2008; Waples, 2015). Nonetheless, the consequences of null alleles, inbreeding and the Wahlund effect are rather complex to distinguish solely on the basis of statistical analysis strategies. Recently, different tools for statistical inference have been proposed to distinguish between the presence of null alleles and the Wahlund effect in population genetic studies (De Meeûs, 2018; Waples, 2015, 2018; Zhivotovsky, 2015), but their application requires specific sampling designs (De Meeûs, 2018; Waples, 2018) and does not consider the simultaneous occurrence of inbreeding. The estimates of null allele frequencies in the wild population of A. nucleus obtained using statistical methods were much higher than those obtained from the segregation analysis, and no significant correlation was found between them. These results are not in agreement with those reported by Oddou-Muratorio, Vendramin, Buiteveld, and Fady (2009), in which no significant differences were found between segregation analysis and statistical methods for the tree species Fagus sylvatica, suggesting that frequency estimates for null alleles with both methods could strongly depend on the studied species, probably as a consequence of its reproductive strategy and population structure. Thus, the reproductive strategy and population structure must be considered when defining the experimental design for analyzing the genetic diversity in species similar to A. nucleus.
FIGURE 1: Association between the null allele frequency estimations using segregation analysis (Se) and Micro-Checker 2.2.3 software (St), applying three different models.
The use of alternative methods to statistical analysis for estimating null allele frequencies in such populations is highly recommended, as well as increasing the number of studied loci and eliminating those that present significantly high frequencies of null alleles (Bürkli, Sieber, Seppälä, & Jokela, 2017; De Sousa, Finkeldey, & Gailing, 2005; Estoup, Jarne, & Cornuet, 2002; Oddou-Muratorio et al., 2009; Stadhouders et al., 2010); redesigning primers to avoid mutations affecting primer pairing in the flanking zones (Holm et al., 2001; Reece et al., 2004); and, whenever possible, performing a segregation analysis to infer the parental genotypes.
Another option is the use of SNP markers, since they are easier to identify, show Mendelian segregation, and exhibit few null alleles in controlled crosses compared with microsatellites (Harney et al., 2018).
Argopecten nucleus is a functional hermaphrodite that forms sparse, low-density populations in nature. To date, the only source of live wild individuals has been seed obtained from artificial collectors suspended in the sea (0-19 seed m⁻² collector) (Díaz & Puyana, 1994; Lodeiros et al., 1993; Valero et al., 2000; Velasco & Barros, 2009). Self-fertilization has also been reported in related pectinids (Concha, Figueroa, & Winkler, 2011; Toro et al., 2010; Winkler & Estévez, 2003). In addition, the occurrence of self-fertilization has been verified by molecular analysis of massive spawns in farmed A. irradians and N. subnodosus (Petersen, Ibarra, Ramirez, & May, 2008). Winkler and Estévez (2003) hypothesized that self-fertilization in A. purpuratus takes place in the nephridia during the release of gametes, since all possible precautions to avoid self-fertilization of the released oocytes had been taken in hatchery operations. The same situation seems to occur in A. nucleus.
The adults of A. nucleus exhibit a limited capability of displacement, but their larvae spend 11-15 days in the plankton. In the particular geographic area of this study, planktonic larvae are continuously exposed to diverging marine currents, such as the Caribbean current, which flows toward the west, and the Darien countercurrent, which flows northward (CIOH, 2018). In addition, low-salinity barriers can act as natural barriers to larval dispersion near the coast, since low salinities are lethal for this pectinid. As a consequence, it is possible that self-fertilization might be the rule rather than the exception for A. nucleus, due to its occurrence during gamete release and considering that the low densities of natural populations could imply a low chance of encounter between gametes released by different individuals. It has been estimated that oocytes must be fertilized within 2 hr after release, and for sperm the maximum time lapse would be 4 hr (Velasco, 2008). A singular aspect of the present results is the high frequency of null alleles at most of the loci. Kimura and Ohta (1969) inferred that the number of generations required for the fixation by chance of a new selectively neutral mutation in a finite population depends on the effective population size. As a consequence, the combination of low population densities and selfing in A. nucleus could favor the accumulation of selectively neutral mutations in a particular population, as occurs when the population size decreases (Kimura, 1979).
Therefore, it can be inferred that if a set of new independent microsatellite markers is developed for this species, the frequency of null alleles per locus will likely be similar to those observed in this study.
Assuming that A. nucleus has populations with very low density and high rates of self-fertilization, as the results of this study suggest, a remarkable feature is their high levels of polymorphism and allelic richness. Genetic evidence suggests that individuals of A. purpuratus exhibiting more inbreeding have higher mortality rates than their less inbred sibs (Toro et al., 2010; Winkler et al., 2009).
The same phenomenon could be occurring in A. nucleus, thus contributing to the preservation of genetic variability in wild populations. On the other hand, in a hermaphroditic species with high fecundity and a low chance of cross reproduction, the effective population size will tend toward one over time. This implies that within-population genetic variability can be low, while total genetic variability could remain unaffected (Falconer & Mackay, 2006), although out of H-W equilibrium due to the Wahlund effect and inbreeding. Aquaculture usually induces changes in population genetic structure and diversity due to founder effects, a low effective number (Ne) of brooders, differences in the genetic contribution of brooders to the reproductive process, and domestic selection (Hedgecock & Sly, 1990; Li, Shu, Yu, & Tian, 2007; Liu, Zeng, Du, & Rao, 2011; Praipue, Klinbunga, & Jarayabhand, 2010; Rhode et al., 2012; Verspoor, 1988, among others). As a consequence, the genetic pool of wild populations might be negatively affected when populations generated through aquaculture are used for restocking wild populations, or when gamete admixture occurs because wild and aquaculture populations share the same environment (Beaumont, 2006; Harada, Yokota, & Iizuka, 1998; Ryman, Jorde, & Laikre, 1995; Ryman & Laikre, 1991; Waples, Hindar, Karlsson, & Hard, 2016). However, the present results in A. nucleus seem to represent a paradox in this sense: even though aquaculture populations and artificial reproduction can reduce inbreeding and increase Ne in comparison with wild populations, both factors can cause genetic loss in wild populations subject to supportive breeding or exposed to genetic introgression from cultured populations. To minimize the potential genetic impact of hatchery-produced scallops on wild populations, there are different alternatives, including the systematic use of wild brooders (Yokota, Harada, & Iizuka, 2003), the use of fully genealogized brooders to ensure low inbreeding during a controlled reproduction process (Evans, Bartlett, Sweijd, Cook, & Elliott, 2004), the use of genetic markers to avoid inbred crosses (Liu et al., 2011), and the culture of triploids to prevent reproduction and genetic introgression in wild populations (Piferrer et al., 2009). However, this last method is not completely safe if the triploidization is not complete or if the triploids are not completely sterile (Winkler, Concha, & Concha, 2019).
In summary, the first 8 microsatellites designed and standardized for A. nucleus were used to estimate the genetic diversity of a wild population of this species.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
AUTHOR CONTRIBUTIONS
The authors' contributions to this study were as follows: J. Barros
ETHICAL APPROVAL
All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.
DATA AVAILABILITY STATEMENT
Supporting information of microsatellite genotypes in Appendix S1: Dryad: https://doi.org/10.5061/dryad.h9w0vt4f0. | 2020-04-09T09:17:20.000Z | 2020-04-02T00:00:00.000 | {
"year": 2020,
"sha1": "a69db8403744794f47ec954bdcf143ff17bf991b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.6080",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f5d0742af204d098bb58fe38ad264a663cdd923",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256218141 | pes2o/s2orc | v3-fos-license | Projected Water Levels and Identified Future Floods: A Comparative Analysis for Mahaweli River, Sri Lanka
The Rainfall-Runoff (R-R) relationship is essential to the hydrological cycle. Sophisticated hydrological models can accurately investigate R-R relationships; however, they require large amounts of data. Therefore, machine learning and soft computing techniques have attracted attention in environments with limited hydrological, meteorological, and geological data. The accuracy of such models depends on various factors, including the quality of inputs and outputs and the algorithms used. However, identifying a perfect algorithm is still challenging. This study develops a fuzzy logic-based algorithm called Cascaded-ANFIS to accurately predict runoff based on rainfall. The model was compared against six regression algorithms, including Long Short-Term Memory, Gated Recurrent Unit, and Recurrent Neural Networks, which were selected due to their outstanding performance in similar studies. The models were tested on the Mahaweli River, the longest in Sri Lanka. The results show that the Cascaded-ANFIS-based model outperforms the other algorithms. The correlation coefficients of the predictions were 0.9330, 0.9120, 0.9133, 0.8915, 0.6811, 0.6811, and 0.6734 for the Cascaded-ANFIS, LSTM, GRU, RNN, Linear, Ridge, and Lasso regression models, respectively. Hence, this study concludes that the proposed algorithm is 21% more accurate than the second-best LSTM algorithm. In addition, Shared Socio-economic Pathways (SSP2-4.5 and SSP5-8.5 scenarios) were used to generate future rainfalls, forecast near-future and mid-future water levels, and identify potential flood events. The future forecasting results indicate a decrease in flood events and magnitudes under both the SSP2-4.5 and SSP5-8.5 scenarios. Furthermore, the SSP5-8.5 scenario indicates drought conditions from May to August each year. The results of this study can effectively be used to manage and control water resources and mitigate flood damage.
I. INTRODUCTION
Natural disasters often occur due to recent climate changes. Several studies have focused on climate change and the detection of its effects, with remote sensing methods widely used in these methodologies [71].
Floods are among the most frequently observed natural disasters and are one of the direct outcomes of the rainfall-runoff (R-R) process [77]. Due to their severity and frequent occurrence, flood prediction has received significant attention in R-R modelling [1]. Even though floods are natural disasters, their severity has been impacted by anthropogenic activities. Flow hydrographs are drastically changed, with higher peaks reached more quickly, due to ongoing urbanization [2], [3], [4]. Flash floods are common in urbanized areas [5], [6]. Hence, urbanization is one of the most influential factors in today's floods.
Therefore, accurate modelling of rainfall-runoff relationships for catchments is in high demand, and each catchment has to be modelled to find its own R-R relationship. Commercial and non-commercial hydrological computer packages are available to simulate the R-R relationships of catchments. However, these packages require various data, including digital elevation models, soil data, meteorological data, and discharge data [17]. The accuracy of catchment models varies greatly with the quality of the catchment data [39]. Only some catchments are gauged to provide meteorological and discharge data and other catchment characteristics on a temporal and spatial basis. Thus, catchment models often fail to achieve the accuracy required to model runoff and then predict floods.
In the event of limited data, soft computing [38], [40] and machine learning techniques [19], [20], [21], [22] are helpful for modelling R-R processes. R-R processes can be modelled using only the known rainfall and measured discharges and, importantly, without any catchment characteristics. Hence, numerous methodologies based on soft computing and machine learning have been developed using various algorithms and study cases. One of these data-driven methods is the artificial neural network (ANN), which has been used in various fields, including hydrology and water resources. It has gained popularity because it can address, model, and forecast stochastic and nonlinear situations in a system [41], [42], [43], [44], [45], [46], [47]. ANNs do not replace conceptual watershed modelling, since they cannot describe the catchment's internal structure or handle distributed data on its physical properties. Nevertheless, they have gained acceptance as a practical substitute for conceptual models in forecasting because of numerous benefits, such as the ability to produce simple and accurate models [48] and their computation speed [49]. Additionally, this study demonstrates their strength and ability to mimic hydrological events. As a result, ANN models are suggested for rainfall-runoff modelling due to their straightforward designs and accuracy, enabling the issues of managing water resources to be addressed.
To create ANN models, most studies have used feedforward and backpropagation (FFBP) networks. Although both are relatively well known for their ability to anticipate floods, neither model's performance in a particular application has been established [43]. Since several learning methods may be used to improve ANNs, there is still a wide range of possibilities. Gradient descent (GD) is frequently used in neural network training at the backpropagation stage [50] and has been used in recent years to increase the potential of the backpropagation algorithm. However, GD may experience problems with convergence, slowdown of the training process, overfitting, and getting stuck in local minima. The performance of the training algorithm can degrade when the structure of the model is complex and the parameter set is large [11], [51], [52].
Moreover, feed-forward deep neural networks (FF-DNNs) have been used widely in climate change-related studies. A case study of Kastoria Lake in Greece used an FF-DNN to predict dissolved oxygen, obtaining a maximum NSE of 0.89 [70]. Forecasting of dissolved oxygen has also been studied using three methods, the Autoregressive Integrated Moving Average (ARIMA) method, the Transfer Function (TF) method, and an NN method [72], concluding that the ARIMA method provides significant results compared to TF and NN. Additionally, a combination of tools such as remote sensing, weather forecasting, and artificial intelligence was used to improve irrigation management in Mediterranean basins, with the suggestion that comprehensive use of these tools can rapidly enhance irrigation systems [73].
Recently, several novel models have been evaluated, including Extreme Gradient Boosting (XGBoost) and CNN-transformer architectures, which have been widely tested on uncertain and nonlinear data. Many studies recommend ANFIS as a highly accurate algorithm for predictions [38], [40]. Xuan-Nam et al. [39] proposed an ML model for blast-induced ground vibration predictions in quarries, employing several state-of-the-art algorithms, such as Moth-flame optimization-based ANFIS, XGBoost, ANN, and SVM. The study showed that the ANFIS-based algorithm outperformed the other models with an accuracy of 98.62%. Moreover, two environmental studies employing ANFIS and XGBoost algorithms were introduced by Hameed et al. [38] and Junliang et al. [40].
On the other hand, Genetic Algorithms (GA) in the hydrological sciences have been the subject of several investigations, training ANN rainfall-runoff models that predict daily flow more accurately than backpropagation-based ANN models [59] using natural-coded GAs. In conjunction with intelligence approaches, the GA has developed into a potent tool for modelling and optimizing complicated processes [56], [57], [58]. It is commonly used in ANNs to enhance efficiency by tuning the parameters [54], [55]. Roy and Singh [11] developed a novel hybrid metaheuristic method for simulating the rainfall-runoff process that integrates Biogeography-Based Optimization (BBO), Particle Swarm Optimization (PSO), and the Grey Wolf Optimizer (GWO), combining ANN and Adaptive Network-based Fuzzy Inference Systems (ANFIS). Moreover, three optimization algorithms integrated with ANFIS have been introduced for rainfall-runoff prediction: Differential Evolution-based ANFIS (ANFIS-DE), Particle Swarm Optimization-based ANFIS (ANFIS-PSO), and Genetic Algorithm-based ANFIS (ANFIS-GA) [53]. Investigating and contrasting these models in hydrology is strongly advised, because the different algorithms have various advantages and distinct methods for modelling complex phenomena. Investigations in hydrology, particularly rainfall-runoff modelling, are still at an early stage; hence, the computational analysis has to be comprehensively conducted for a better outcome. Therefore, this research study aims to contribute to the scientific community by achieving the following objectives.
The ANFIS system benefits from the collaboration of NN and FIS by utilizing their respective advantages. The system's conversion to straightforward if-then rules is another crucial benefit of this network: ANFIS's if-then control structure gives it the capacity to handle nonlinear functions. ANFIS has been applied in several study fields and yields generally effective outcomes. It is well known that ANFIS may be combined with many algorithms to reduce the training-phase error; for instance, the least-squares approach and gradient descent can increase the efficiency of finding the optimal parameters. ANFIS functions similarly to the fuzzy system that Takagi and Sugeno presented in 1985 [30]. The consequent parameters of the forward pass are determined using a least-squares method.
ANFIS is made up of five general layers: fuzzification (membership functions), rule firing, normalization, defuzzification, and output. The additional explanation is based on Figure 1 and assumes that the ANFIS system has two inputs, x and y, while the output is denoted as f. Each rule has the form "If x is A_i and y is B_i, then f_i = p_i x + q_i y + r_i", giving f_1 = p_1 x + q_1 y + r_1 (1) and f_2 = p_2 x + q_2 y + r_2 (2).
Fuzzy sets A_i and B_i are used here, along with design parameters p_i, q_i, and r_i, where i = 1, 2. The membership layer makes up the top layer of the ANFIS structure, and its nodes are adaptive. Membership grades are created in this layer for each input: with inputs x and y, the linguistic labels of the nodes are A_i and B_i, and the membership grades µ_Ai(x) and µ_Bi(y) are adaptive. For instance, when the generalized bell shape is employed, the membership function is µ_Ai(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i)).
In this case, the bell-shaped function's corresponding parameters are a_i, b_i, and c_i. Simple multiplication is carried out in the following layer, which comprises fixed nodes and produces the rule firing strengths: w_i = µ_Ai(x) · µ_Bi(y), i = 1, 2.
A normalization layer of fixed nodes comes after that. In this layer, the output from the second layer is normalized as w̄_i = w_i / (w_1 + w_2).
Here, w_i denotes the firing strength of node i. The fourth, adaptive layer weights each rule consequent by the normalized firing strength from the third layer: O_4,i = w̄_i f_i = w̄_i (p_i x + q_i y + r_i).
Only one fixed node makes up the final layer. This node adds up all the incoming signals, so the complete result is f = Σ_i w̄_i f_i = (w_1 f_1 + w_2 f_2) / (w_1 + w_2).
Since back-propagation and least-squares techniques improve the method's accuracy and speed up convergence, ANFIS has a substantial capacity for learning. As previously stated, this system uses six tunable parameters (when a bell shape is used). The primary goal of the ANFIS system is to tune these parameters to minimize the cost: the first layer's parameters are adjusted through back-propagation, and the fourth layer's parameters by the least-squares technique [34].
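To make the layer arithmetic above concrete, the following is a minimal, self-contained sketch of this forward pass; the parameter values are purely illustrative, not trained values from the study.

```python
# Minimal sketch of a first-order Sugeno ANFIS forward pass with two
# inputs and two rules, using generalized bell membership functions.
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, prem, cons):
    # Layer 1: fuzzification of each input
    mu_A = [gbell(x, *p) for p in prem["A"]]
    mu_B = [gbell(y, *p) for p in prem["B"]]
    # Layer 2: rule firing strengths w_i = mu_Ai(x) * mu_Bi(y)
    w = np.array([mu_A[i] * mu_B[i] for i in range(2)])
    # Layer 3: normalization
    wn = w / w.sum()
    # Layers 4-5: weighted Sugeno consequents f_i = p_i*x + q_i*y + r_i
    f = np.array([p * x + q * y + r for (p, q, r) in cons])
    return float(np.dot(wn, f))

prem = {"A": [(2.0, 2.0, 0.0), (2.0, 2.0, 5.0)],
        "B": [(2.0, 2.0, 0.0), (2.0, 2.0, 5.0)]}
cons = [(1.0, 0.5, 0.0), (0.2, 1.0, 1.0)]
print(anfis_forward(1.5, 3.0, prem, cons))
```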
With two main inputs and one main output, the Cascaded-ANFIS algorithm is a repeated ANFIS algorithm. The critical difference from the classic ANFIS algorithm is that the output of one application of the conventional ANFIS technique is used as the input for the next. Figure 2 is a valuable tool for presenting the construction of this method. The Cascaded-ANFIS algorithm comprises two major parts: the pair selection module and the training module. The pair selection module solves the first significant issue with ANFIS. The inner layers of the model use fuzzy logic, much like the standard ANFIS technique: membership functions convert numerical data into fuzzy members and are used to achieve fuzzification.
However, the original method uses every feature to build a robust model, which is equally valid for noisy data sets. The novel Cascaded-ANFIS method manages computational complexity through its training module.
The pair selection module takes advantage of sequential feature selection (SFS). This technique employs a 2-input, 1-output ANFIS model to find the best match for each input. The training module also makes use of the 2-input ANFIS model. The input variables may be fed to the ANFIS module directly, since they are connected to the preceding module's best match, which produces the current outputs and a Root Mean Squared Error (RMSE) for each data pair. The RMSE is then compared with a pre-determined target error. If the target error is attained, the procedure terminates; if not, the algorithm advances to the next iteration.
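A heavily simplified, hypothetical sketch of this cascade control flow follows; an ordinary least-squares model stands in for the 2-input ANFIS module purely to keep the sketch runnable, and the correlation-based pair selection is our own stand-in for the SFS step.

```python
# Hypothetical, simplified outline of the Cascaded-ANFIS loop: select a
# partner input, fit a 2-input module, feed its output forward, and stop
# once the target RMSE (or the maximum number of cascades) is reached.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def cascade(X, y, target_rmse=0.05, max_levels=10):
    signal = X[:, 0]
    for _ in range(max_levels):
        # pick the column best correlated with the residual (pair selection)
        resid = y - signal
        j = int(np.argmax([abs(np.corrcoef(X[:, k], resid)[0, 1])
                           for k in range(X.shape[1])]))
        pair = np.column_stack([signal, X[:, j], np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(pair, y, rcond=None)  # stand-in for ANFIS
        signal = pair @ coef
        if rmse(signal, y) <= target_rmse:
            break
    return signal
```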
III. METHODOLOGY
A. PROBLEM FORMULATIONS
The R-R relationship in Equation 10 was modelled using the Cascaded-ANFIS algorithm and trained on the ground-measured rainfall RF_i,t at the i-th station and the water level WL_t: WL_t = f(RF_1,t, RF_2,t, ..., RF_n,t) (10). The subscript t denotes the time domain of the R-R relationship.
However, it is well noted that the runoff response can lag the causal rainfall due to catchment characteristics like river length, catchment area, land-use patterns, and soil type; the travel time of a particular rainfall event has to be clearly understood. Figure 3 shows the flowchart for the developed Cascaded-ANFIS model. As shown in the figure, the rainfall data are used as the primary input of the system. The input data are then re-arranged with delays of one day and two days. Inputs were then removed based on the correlation between each input and the output water level, using a minimum correlation of 0.40 between an input and the output. The selection methodology of inputs is discussed in later sections.
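A hedged sketch of this input preparation is given below; the function name and the use of pandas are our own assumptions about how such lagging and filtering could be implemented.

```python
# Hypothetical sketch: build 1- and 2-day lagged rainfall features and
# keep only those with |r| >= 0.40 against the water level series.
import pandas as pd

def build_inputs(rain: pd.DataFrame, level: pd.Series, min_r: float = 0.40):
    feats = rain.copy()
    for lag in (1, 2):
        feats = feats.join(rain.shift(lag).add_suffix(f"_lag{lag}"))
    feats = feats.dropna()
    r = feats.corrwith(level)          # per-column correlation with output
    return feats.loc[:, r.abs() >= min_r]
```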
B. COMPARATIVE ANALYSIS TO IDENTIFY THE BEST ALGORITHM
Six regression algorithms (Linear Regression, Ridge Regression, Lasso Regression, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)) together with the Cascaded-ANFIS algorithm were used to formulate the R-R relationship. These ML algorithms were considered in this study for a few specific reasons: they are similar, easy to implement, lightweight, and can run on a general computer without GPU support. Table 1 shows the parameter values considered for tuning the hyperparameters of the Ridge, Lasso, LSTM, GRU, and RNN models. These parameters were selected by trial and error: each parameter was tested with the datasets used in this study, and the optimal value was adopted.
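As an illustration, the trial-and-error tuning could be organized as a grid search; the parameter grid and synthetic data below are purely illustrative, not the exact values from Table 1.

```python
# Hedged sketch of hyperparameter tuning by cross-validated grid search
# for one of the regression models (Ridge); grid values are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 13)), rng.normal(size=200)  # 13 rain gauges
grid = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]},
                    cv=5, scoring="r2")
grid.fit(X, y)
print(grid.best_params_)
```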
The Cascaded-ANFIS used three Gaussian membership functions for each input in the system. Ten cascade levels in total were used to achieve a satisfactory accuracy and error value.
C. THE MAHAWELI RIVER SUB-CATCHMENT ANALYSIS
Localized floods can be observed in the sub-catchments in Figure 5 without major floods appearing downstream, due to the catchment characteristics. In such cases, the downstream river gauge may not register any flood situation even though upstream sub-catchments experienced localized floods. Therefore, it is essential to divide larger catchments into sub-catchments and analyze them separately. This scenario was analyzed in this research work, and Equation 10 was also formulated for the sub-catchments.
D. FLOOD IDENTIFICATION
According to the DesInventar dataset of natural disasters [78], there has been significant damage due to flooding in Sri Lanka. In most cases, the damage has increased due to unexpected heavy rainfall and poor irrigation management. The database reveals that each of the flood events recorded from 2005 to 2018 involved at least one death. The highest numbers of deaths, injured, and missing persons were recorded in 2017, with 67, 73, and 63, respectively.
Historical water levels were analyzed to define threshold water levels for identifying floods in the basin. Water levels were considered here because the authorities recorded the data as water levels instead of water flows. If the water level or stream flow exceeds the threshold, that flow may be a flood. This can be confirmed with ground-measured discharge data and by comparing flood records for the catchment. Nevertheless, many countries do not have such flood databases, so there can be some issues with accuracy [79].
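A minimal sketch of this threshold rule follows; the dates and values are illustrative, and the 6 m threshold anticipates the Peradeniya threshold defined later in the case study.

```python
# Sketch: flag days on which the water level exceeds a station threshold
# (e.g., 6 m at Peradeniya, 3 m at Thaldena, per the case study).
import pandas as pd

levels = pd.Series([2.1, 3.4, 2.8, 6.7],
                   index=pd.date_range("2013-06-01", periods=4))
floods = levels[levels > 6.0]   # Peradeniya-style threshold
print(floods)
```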
E. SHARED SOCIO-ECONOMIC PATHWAYS (SSP) CLIMATE DATA EXTRACTION
The IPCC's sixth report [60] presented a new set of scenarios based on greenhouse gas emissions to project future climates until 2100. The availability of climate forecasts for numerous Shared Socio-Economic Pathways (SSPs) allows practitioners who work with future climate data to investigate climate changes across a range of quite diverse futures. These pathways are titled SSP1, SSP2, SSP3, SSP4, and SSP5. SSPs describe potential future development pathways for human societies: a set of models combines assumptions about ambitions for reducing the impact of climate change with predictions about how population, education, energy usage, technology, and other factors may evolve over the next century. The climate change forecasts from these scenarios describe various conceivable future climates, from a pessimistic high-carbon scenario to a low-carbon one that satisfies the goals of the 2015 Paris Agreement [25], [26].
SSP-based scenarios improve upon the Representative Concentration Pathways (RCPs), the earlier projections of greenhouse gas concentrations. RCPs were explicitly created for the climate modelling community to investigate the consequences of various emission trajectories or concentrations. It is challenging to relate social trends such as population growth, educational attainment, and government policies to climate objectives like limiting global warming to below 2°C, since the socioeconomic factors underlying the RCPs were not standardized. To address this, SSPs outline how societal decisions might alter radiative forcing toward the end of the century. As a result, SSPs were built on the RCPs to enable a uniform comparison of societal decisions and the degrees of climate change they cause. SSP data have been used in various recent research studies, such as flood forecasting [35], land-use optimization [36], prediction of future air pollution, and climate change research [37]. According to these studies, the reliability of SSP data is much higher than that of RCP data. Therefore, this study employed SSP projections for daily rainfall data acquisition [27], [28]. Two SSP scenarios were used for the data acquisition: SSP2-4.5 and SSP5-8.5. SSP2-4.5 represents an intermediate-carbon pathway globally, while SSP5-8.5 is the high-carbon scenario.
F. BIAS CORRECTION
The extracted rainfall data under SSP2-4.5 and SSP5-8.5 were corrected using linear bias correction factors. Usually, the data extracted from climate models contain some systematic errors [61]; therefore, the extracted climate data are bias-corrected using ground-measured climate data. Various bias correction techniques are available [62]; however, the linear bias correction method was selected in this research work. Equation 11 gives the simple mathematical formulation for linear bias correction: RF*_d,sim = RF_d,sim × µ_m(RF_obs) / µ_m(RF_his) (11).
where RF, d, µ_m, his, obs, and sim denote rainfall, daily, the long-term monthly mean, the raw historical SSP data, the observed/measured data, and the raw RCM forecast, respectively. The symbol * denotes the bias-corrected datasets.
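A minimal sketch of this monthly-mean linear scaling is shown below, assuming three daily rainfall series indexed by date; the function name is our own.

```python
# Sketch of Equation 11: corrected daily simulated rainfall equals the
# raw simulation scaled by the ratio of the long-term monthly means of
# observed and raw historical data. All series assume a DatetimeIndex.
import pandas as pd

def linear_scale(sim: pd.Series, his: pd.Series, obs: pd.Series) -> pd.Series:
    factor = (obs.groupby(obs.index.month).mean()
              / his.groupby(his.index.month).mean())
    # apply each calendar month's correction factor to the daily values
    return sim * sim.index.month.map(factor).to_numpy()
```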
G. PROJECTED WATER LEVELS AND FLOODS
Bias-corrected SSP rainfall data were fed into the developed R-R relationship in Equation 10. Based on these future rainfalls under the two SSP scenarios, the stream flows, in the form of water levels, were predicted for future years. The predicted water levels for the whole catchment were screened for extreme values in the time series to identify localized and downstream floods. These predicted floods are given for the near future (2022-2030) and the mid-future (2031-2050).
IV. CASE STUDY
Sri Lanka is a country blessed with water resources. However, heavy monsoon rainfall drives many rivers into floods, and annual floods are quite common [64]. Sri Lanka has many rivers, tanks, and lakes, and these watersheds are flooded during the monsoon periods. Several deaths and extensive structural damage are reported annually due to extreme weather conditions. Sri Lanka has 103 rivers with a total length of around 4500 km. The longest river in Sri Lanka is the Mahaweli River: it is 335 km long and drains a 10,488 km² river basin covering almost one-fifth of the total area of the island [65], [66]. The river has several branches along its way to the sea. 40% of the total electricity demand of Sri Lanka is met by hydropower generated along the Mahaweli River. The Mahaweli River is also known to provide a vast water supply for the cultivation of crops such as rice and vegetables [67]. Several Mahaweli River development projects have been carried out for hydroelectric generation and irrigation purposes, and many dams were constructed along the river to enhance energy generation, which changed the flood risk. The Kothmale dam was one of those constructed to generate electricity; however, it has indirectly mitigated floods downstream [68]. The Mahaweli River was selected for this research study due to its importance for many utilities and its frequent floods in the northeastern monsoon period (December to February).
A. STUDY AREA AND SUB-CATCHMENTS
The Mahaweli River starts from the central hills of Sri Lanka as several small creeks; Agra Oya from Horton Plains is one of the starting creeks. The river reaches the Bay of Bengal on the southwestern side of Trincomalee Bay. The bay includes the first of several submarine canyons, making Trincomalee one of the finest deep-sea harbours in the world. As part of the Mahaweli Development program, the river and its tributaries are dammed at several locations to allow irrigation in the dry zone, with almost 1,000 km² (386 sq mi) of land irrigated. Figure 5 shows the primary catchment and sub-catchments, whereas Figure 4 shows the catchment of the Mahaweli River basin. Two sub-catchments were identified along two tributaries of the Mahaweli River. The catchment above Peradeniya (for Kothmala Oya and other upstream creeks of the Mahaweli River) is given in sub-figure (a), while the catchment above Thaldena for Badulu Oya is given in sub-figure (b) of Figure 5. The sub-catchment at Peradeniya is in the wet zone of the country; thus, heavy rainfall can be experienced there. The sub-catchment at Thaldena, however, is in the wet and intermediate zones, so its rainfall is not as high as at Peradeniya. Nonetheless, these two sub-catchments are important in terms of terrain, land use, and urbanization. In addition, two flow gauges can be found in these two sub-catchments.
B. DATA
Figure 4 shows the rain gauges of the Mahaweli River basin. Due to the unavailability of complete data in most other years, daily rainfall data from 2000 to 2017 were purchased from the Department of Meteorology, Sri Lanka. The missing data percentage for the selected years was less than 1%. The rain gauges were selected to represent the whole catchment, covering as much of its area as possible. In addition, the stream flow gauge at Manampitiya, the most downstream stream flow gauge available, was selected to model the R-R relationship. The water levels at the station were purchased from the Department of Irrigation, Sri Lanka. Furthermore, two water level measuring stations were identified for the two selected sub-catchments, Peradeniya and Thaldena (refer to Figure 5), and the water levels for these two stations were also purchased for the same period from the Department of Irrigation, Sri Lanka. A descriptive analysis of the dataset used in this study is shown in Table 2. There were 6207 data samples in the dataset, divided at a ratio of 7:3 for training and testing; these sub-dataset samples were used to train and test the algorithms used in this study. The water levels are presented in centimetres, whereas the rainfalls are in millimetres. Moreover, several homogeneity tests were conducted, namely the Standard Normal Homogeneity Test (SNHT), the Buishand range (BR) test, the Pettitt test, and the von Neumann ratio (VNR) test, to evaluate the dataset before using it to train the models.
Due to missing data over a significant time frame, a few rainfall stations were omitted from the evaluation of the case study. Missing data were present at Huruluwewa, Dambuluoya, Ulhitiya, Minipe LB, and Rantembe. Therefore, as shown in Table 2, data from 13 rainfall stations were considered as the inputs.
The correlation calculation described in subsection III-C is given in Table 3, where the selected inputs, those with a correlation of at least 0.4, are highlighted. Twelve inputs were selected using this correlation method to train the R-R model. The 0.40 threshold was chosen by trial and error as the value giving the maximum accuracy. The general structure of the Cascaded-ANFIS was then used to generate the final outputs of predicted water levels. According to the literature, a correlation of 0.30 or below is considered negligible; therefore, values of 0.40 and above were considered safe inputs to the system [80]. Figure 6 shows the annual water level measurements at each observation point: the primary catchment of the Mahaweli River (Manampitiya) and the sub-catchments (Peradeniya and Thaldena). The Manampitiya outlet records higher water levels than the sub-catchments. Although Peradeniya and Thaldena occasionally showed water levels comparable to the high levels at Manampitiya, differences can also be observed (refer to Table 4). Thaldena did not show a significantly elevated water level in 2012, yet high water levels were observed at Manampitiya during the same period (t1, t2, and t3 in sub-figure (a) of Figure 6); similar behaviour can be observed for Peradeniya. Therefore, the analysis of sub-catchments for floods is well justified. These observations led the authors to define flood thresholds of 6 m for Peradeniya and 3 m for Thaldena. Three flood events were identified at Peradeniya, marked t1, t2, and t3 in sub-figure (b) of Figure 6 (6.7 m on 03/06/2013, 6.9 m on 14/09/2013, and 6.7 m on 26/12/2014). In comparison, two events were identified at Thaldena, marked t1 and t2 in sub-figure (c) of Figure 6 (3.1 m on 02/02/2011 and 3.5 m on 26/12/2014).
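A minimal sketch of the correlation-based input selection, assuming a pandas DataFrame with one column per rain gauge; the file and column names are invented for illustration, and the 0.40 threshold is the one reported in the text.

```python
import pandas as pd

df = pd.read_csv("mahaweli_daily.csv")   # hypothetical file name
target = "manampitiya_level"             # hypothetical column name

# Pearson correlation of every candidate input with the target water
# level; keep inputs at or above the 0.40 threshold from the text.
corr = df.corr(numeric_only=True)[target].drop(target)
selected = corr[corr.abs() >= 0.40].index.tolist()
print(f"{len(selected)} inputs selected:", selected)
```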
where u(t) is the predicted value, ū(t) is the mean of the predicted values, v(t) is the measured value, v̄(t) is the mean of the measured values, and k is the sample size. The correlation coefficient (R) represents the goodness of fit; it varies from -1 to 1, with 1 being the best. Bias measures the difference between predicted and measured values; the ideal bias value is 0. NSE quantifies how well the predictions match the observations; it can vary from minus infinity (worst) to 1 (ideal) [75]. KGE combines three primary components, the correlation, the bias ratio, and the variability (coefficient of variation) ratio, and has recently seen increasing use in hydrological model performance evaluation [74].
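The four metrics can be computed as in the NumPy sketch below. Bias is implemented here as the mean error and KGE in its standard form; the paper's exact definitions may differ slightly, so treat this as an illustrative reading of the formulas.

```python
import numpy as np

def evaluate(u, v):
    # u: predicted series, v: measured series (equal-length arrays)
    u, v = np.asarray(u, float), np.asarray(v, float)
    r = np.corrcoef(u, v)[0, 1]            # correlation coefficient R
    bias = np.mean(u - v)                  # mean error (one common bias definition)
    rmse = np.sqrt(np.mean((u - v) ** 2))
    nse = 1 - np.sum((v - u) ** 2) / np.sum((v - v.mean()) ** 2)
    alpha = u.std() / v.std()              # variability ratio
    beta = u.mean() / v.mean()             # bias ratio
    kge = 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
    return {"R": r, "bias": bias, "RMSE": rmse, "NSE": nse, "KGE": kge}
```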
B. PERFORMANCE EVALUATION
The river in this case study is the longest in Sri Lanka. According to geographical experts in Sri Lanka, the maximum travel time of water from the source to the mouth of the river is considered to be less than three days, although there are several reservoirs and dams along the river. Hence, we considered 1-day, 2-day, and 3-day lags to include all corresponding scenarios in the calculation.
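Constructing the lagged inputs is straightforward with pandas `shift`; a sketch under the same hypothetical naming as above:

```python
import pandas as pd

df = pd.read_csv("mahaweli_daily.csv", parse_dates=["date"], index_col="date")
target = "manampitiya_level"                      # hypothetical column name
rain_cols = [c for c in df.columns if c != target]

# 1-, 2- and 3-day lagged rainfall inputs: rain that fell up to three
# days earlier anywhere in the catchment can influence today's level.
lagged = pd.concat(
    {f"{col}_t-{k}": df[col].shift(k) for col in rain_cols for k in (1, 2, 3)},
    axis=1,
)
data = pd.concat([lagged, df[target]], axis=1).dropna()
```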
1) COEFFICIENT OF CORRELATION CALCULATION FOR THE PRIMARY CATCHMENT
The primary catchment of the Mahaweli River contains 13 rain gauges, all of which were used to predict the water level at Manampitiya. As mentioned in the previous sections, the experiment was designed to identify the best R-R prediction algorithm. Figure 7 shows the coefficient of correlation between the predicted and ground-measured water levels at the Manampitiya river gauge. Figure 8 shows the prediction accuracy under the combined scenarios initially identified in Table 3 for the predicted water levels at Manampitiya. Figures 9 and 10 show the prediction accuracy of water levels for each algorithm for the sub-catchments Peradeniya and Thaldena.
2) COEFFICIENT OF CORRELATION CALCULATION FOR SUB-CATCHMENTS
Additionally, other parameters, Bias, NSE, RMSE, and KGE, were used to evaluate the results; the evaluation results are presented in Table 5. Some of the projected results were surprising, which can be due to several reasons, including the quality of the future (projected) data and the bias correction technique. These results imply that researchers should base extensive projected flood analyses on ground-measured flow conditions. In addition, the R-R model can be implemented for Representative Concentration Pathways (RCPs) and the differences then analyzed.
A. MODEL EVALUATIONS
According to sub-figure (i) in Figure 7, the GRU algorithm performed the best prediction, with an R of 0.9301. The LSTM algorithm with 2-day-back rainfall data (t-2 scenario) performed second best with 0.9265 (refer to sub-figure (l) in Figure 7). Interestingly, as per sub-figure (b) in Figure 7, the Cascaded-ANFIS algorithm achieved its highest R value of 0.9140 for 1-day-back rainfall data (t-1 scenario). However, the three scenarios cannot be used separately to model the R-R relationship: rainfall that occurred two days earlier at the most upstream location can reach Manampitiya on the current day, while rainfall received one day earlier at another location can also reach Manampitiya on the current day. Therefore, a combination of these three scenarios has to be considered.
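For reference, a minimal Keras GRU regressor of the kind benchmarked here might look as follows; the layer size, optimizer, and epoch count are illustrative assumptions rather than the settings used in the paper.

```python
import tensorflow as tf

timesteps, n_features = 3, 12   # t-1..t-3 lags of 12 selected gauges (assumed shape)
model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(timesteps, n_features)),
    tf.keras.layers.Dense(1),   # next-day water level
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=100, validation_split=0.1)
```

Swapping `GRU` for `LSTM` or `SimpleRNN` reproduces the other two recurrent baselines.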
In the analysis with the selected rainfall gauges, it was clear that the results were more consistent and accurate, and the other algorithms were outperformed by Cascaded-ANFIS. Therefore, the Cascaded-ANFIS algorithm can be used effectively to predict water levels. The sub-catchment correlation coefficient analysis in Figures 9 and 10 shows that the Cascaded-ANFIS algorithm outperformed the other three algorithms in predicting water levels at the sub-catchment level. In Figure 10, the correlation coefficients were 0.9188 for Cascaded-ANFIS, 0.8894 for LSTM, 0.9082 for GRU, and 0.8594 for RNN. Therefore, the water level prediction for the Thaldena sub-catchment was also best achieved by the prediction model developed with the Cascaded-ANFIS algorithm.
The proposed algorithm shows the lowest RMSE, 0.66, and the highest NSE and KGE values, 0.87 and 0.90, respectively. The second-best performance was shown by the GRU algorithm, with RMSE, NSE, and KGE of 0.79, 0.83, and 0.88. When considering the bias of the predicted outputs, the Cascaded-ANFIS model shows a comparatively low value of 1.52, indicating that the model can predict the water levels with higher accuracy and lower bias. The overall results are shown in Table 5. It is also clear that the performances of the Linear, Ridge, and Lasso algorithms are significantly inferior to those of the LSTM, GRU, RNN, and Cascaded-ANFIS algorithms.
B. FORECASTING OF THE RIVER WATER LEVEL
Assuming the predictions are accurate, there would be a severe issue with the water levels, and thus the river flow, at Manampitiya. The average water level at Manampitiya is around 10 m (from its historical data), whereas the projected water levels are around 6 m (60% of the average); drought conditions could therefore be projected. However, the predicted outcomes of the trained model may be an artefact of the dataset, which covers a relatively short range of rainfall data, so a larger sample may be needed to train a robust R-R model. This therefore cannot be considered a conclusion of this study. Even though the prediction accuracy of the Cascaded-ANFIS model is good, the quality of the future (projected) data is critical for a solid prediction; hence, Figure 11 cannot be treated as a conclusion of this study.
These water levels are presented in Figure 12, which shows the forecast of water levels at Manampitiya for the projected rainfalls. Forecasts for 2031 to 2050 are shown in sub-figures (c) and (d) of Figure 12 for SSP2-4.5 and SSP5-8.5, respectively. The X-axis contains 365 ticks representing the days of the year, and the scale bar on the right side of Figure 12 indicates the intensity of the water level. During the northeastern monsoon (December to February), higher water levels are predicted at Manampitiya. However, the SSP5-8.5 scenario, the high-end scenario for fossil-fuelled development, projects lower water levels in mid-year, reaching less than 1 m; these could correspond to droughts. This observation is not seen in the SSP2-4.5 scenario. The key observations are marked using black and white squares, with black indicating periods of lower water levels and white indicating periods of higher water levels. Nevertheless, as discussed, more research is needed before a solid conclusion can be drawn on future water levels.
VII. CONCLUSION
An R-R prediction model was developed using the Cascaded-ANFIS algorithm for the Mahaweli River, the longest river in Sri Lanka; R-R models were also developed at the sub-catchment level. The dataset used in the case study was evaluated using four homogeneity tests: the Standard normal homogeneity test (SNHT), the Buishand range (BR) test, the Pettitt test, and the von Neumann ratio (VNR) test. The algorithm was tested against six regression algorithms used in most past studies: Linear regression, Ridge regression, Lasso regression, GRU, LSTM, and RNN. The results were compared using the correlation coefficient, bias, RMSE, NSE, and KGE. The highest correlation coefficient, 0.933, was recorded by Cascaded-ANFIS when the selected rainfall gauges were used to train the models, whereas Linear, Ridge, Lasso, GRU, LSTM, and RNN showed R values of 0.6811, 0.6811, 0.6734, 0.9133, 0.9120, and 0.8915, respectively.
Moreover, the bias value of the proposed algorithm (1.52) is significantly lower than those of the other algorithms. The Cascaded-ANFIS model scored 0.66, 0.87, and 0.90 for RMSE, NSE, and KGE, respectively, outperforming the other algorithms used in this study.
According to the overall results, it can be concluded that the prediction model based on the Cascaded-ANFIS algorithm outperformed the other six algorithms, with the GRU algorithm performing second best. Moreover, the Cascaded-ANFIS algorithm has advantages over black-box regression models: it is lightweight, has a lower computational cost, is easy to implement in real time, and is efficient. Therefore, the Cascaded-ANFIS algorithm can predict the water levels of various catchments, provided measured rainfalls and water levels are available. More importantly, the model can be developed with mixed rainfall inputs along the timeline to account for the travel time of upstream water to the river's downstream reaches.
Overall, the prediction model based on the Cascaded-ANFIS algorithm produces accurate results using ground-measured rainfalls. The future water levels were projected under two SSP scenarios for the Manampitiya station; however, promising results were found only under the near-future and mid-future SSP rainfalls. None of the years was projected to have unacceptable floods (judging by the records). Therefore, this research does not provide conclusions about the future projected water levels, and more research is needed for a solid outcome on future water levels. | 2023-01-25T16:13:12.624Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "5f0ec233c345443a9ba5568b3cd59ac41477f587",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1109/access.2023.3238717",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "dd319ca3bc1e863da4fc653d2974cb50c34deffd",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219982587 | pes2o/s2orc | v3-fos-license | The NSD2/WHSC1/MMSET methyltransferase prevents cellular senescence‐associated epigenomic remodeling
Abstract Senescent cells may possess the intrinsic programs of metabolic and epigenomic remodeling, but the molecular mechanism remains to be clarified. Using an RNAi‐based screen of chromatin regulators, we found that knockdown of the NSD2/WHSC1/MMSET methyltransferase induced cellular senescence that augmented mitochondrial mass and oxidative phosphorylation in primary human fibroblasts. Transcriptome analysis showed that loss of NSD2 downregulated the expression of cell cycle‐related genes in a retinoblastoma protein (RB)‐mediated manner. Chromatin immunoprecipitation analyses further revealed that NSD2 was enriched at the gene bodies of actively transcribed genes, including cell cycle‐related genes, and that loss of NSD2 decreased the levels of histone H3 lysine 36 trimethylation (H3K36me3) at these gene loci. Consistent with these findings, oncogene‐induced or replicative senescent cells showed reduced NSD2 expression together with lower H3K36me3 levels at NSD2‐enriched genes. In addition, we found that NSD2 gene was upregulated by serum stimulation and required for the induction of cell cycle‐related genes. Indeed, in both mouse and human tissues and human cancer cell lines, the expression levels of NSD2 were positively correlated with those of cell cycle‐related genes. These data reveal that NSD2 plays a pivotal role in epigenomic maintenance and cell cycle control to prevent cellular senescence.
Cellular metabolism likely plays a crucial role in regulating SASP because abnormal activation of mitochondria increases the expression of a subset of secretory proteins and promotes senescence (Correia-Melo et al., 2016; Kaplon et al., 2013; Quijano et al., 2012). Moreover, dysfunction of mitochondria also leads to senescence, accompanied by a distinct secretory phenotype. Thus, clarifying the effector mechanisms of senescence-associated metabolic remodeling is important to understand the senescence program.
Chromatin regulators play an essential role in gene expression and metabolic regulation (Anan et al., 2018; Nakao, Anan, Araki, & Hino, 2019). Numerous epigenetic factors, such as DNA methylation, histone modification, and higher-order chromatin structure, are implicated in the process of senescence and aging (Booth & Brunet, 2016; Criscione, Teo, & Neretti, 2016; Sun, Yu, & Dang, 2018). Thus, we hypothesized that certain chromatin regulators may be involved in the molecular basis of senescence-associated metabolic and epigenomic remodeling. We previously reported that the SETD8 methyltransferase functions in senescence-associated mitochondrial and ribosomal coactivation via histone H4 lysine 20 methylation (Tanaka et al., 2017).
NSD2 has also been closely associated with human diseases. As an oncogenic protein, NSD2 is overexpressed in a variety of cancer cells (Vougiouklakis, Hamamoto, Nakamura, & Saloura, 2015), and knockdown of NSD2 decreases proliferation in cancer cells at least in part by loss of H3K36me2 levels (García-Carpizo et al., 2016;Kuo et al., 2011).
In normal cell types, haploinsufficiency of NSD2 causes developmental growth delay, the so-called Wolf-Hirschhorn syndrome (Boczek et al., 2018;Nimura et al., 2009). Furthermore, heterozygous knockout of Nsd2 in mice impaired T-and B-cell development in an age-dependent manner (Campos-Sanchez et al., 2017). These reports suggest that NSD2 plays a fundamental role in cell proliferation and development.
However, the role of NSD2 in cellular senescence remains unknown.
Here, we performed an RNAi-based screen to identify chromatin regulators that affect metabolic and epigenomic functions and found that loss of NSD2 increased mitochondrial mass and oxidative phosphorylation and induced senescence in normal human fibroblasts.
Gene expression analyses revealed that loss of NSD2 inhibited cell cycle progression via the RB-mediated pathway. Chromatin immunoprecipitation (ChIP) and sequencing analyses revealed that NSD2 bound the gene bodies of actively transcribed genes and maintained the levels of H3K36me3. Our data shed light on the epigenomic role of NSD2 in preventing cellular senescence.
| RNAi-based screen revealed that loss of NSD2 induces cellular senescence
Senescent cells exhibit active metabolic remodeling characterized by increases in mitochondrial content and oxygen consumption compared with cells in the proliferating state (Takebayashi et al., 2015). Using high content imaging analysis, we first confirmed the senescent phenotypes, an increase of mitochondrial and nuclear areas, in human IMR-90 fibroblasts undergoing oncogenic H-RAS G12V-induced senescence (OIS) and replicative senescence (RS) (Figure 1a). We then performed an RNA interference (RNAi)-based screen in IMR-90 cells using a custom siRNA library against 79 chromatin-related factors that were predicted to have mitochondrial implications based on the presence of mitochondrial targeting signals and the subcellular localization of proteins reported in published databases (Barbe et al., 2008; Claros & Vincens, 1996; Elstner, Andreoli, Klopstock, Meitinger, & Prokisch, 2009; Emanuelsson, Brunak, von Heijne, & Nielsen, 2007; Horton et al., 2007; Pagliarini et al., 2008). We found that knockdown of 23 genes significantly increased the mitochondrial area while knockdown of 3 genes significantly decreased it (Table S3). Among the identified factors, SETD8 was previously shown to control senescent processes and senescence-associated metabolic remodeling by our group and another study (Shih et al., 2017; Tanaka et al., 2017). Notably, transfection of siRNA targeting NSD2 significantly augmented both mitochondrial and nuclear areas within a single cell compared with control siRNAs (ctr) (Figure 1b, Figure S1a). Using three independent siRNAs, we confirmed an increase of mitochondrial content, nuclear area, and mitochondrial oxygen consumption rate (OCR) in NSD2 knockdown (NSD2-KD) cells compared with those in control knockdown (Ctr-KD) cells (Figure 1c,d, Figure S1b-e). Both the long and short isoforms of NSD2 were decreased by each knockdown (Figure 1c); the short isoform lacks the SET domain that is required for histone methyltransferase activity. NSD2-KD cells showed reduced proliferative activity, as indicated by the reduction of cell number and 5-ethynyl-2′-deoxyuridine (EdU) incorporation starting on day 3 after siRNA transfection (Figure 1f,g). Cell cycle analysis by propidium iodide staining revealed that the population of cells in G2/M phase was slightly increased on day 6 in NSD2-KD cells (Figure S1h). Further, knockdown of the other top-ranked genes in our screen produced marked senescence features such as reduced EdU incorporation and increased SA-β-Gal staining (Figure S1l,m). Collectively, our RNAi-based high content screen revealed that these genes, including NSD2, are important to prevent senescence in human fibroblasts.
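For readers unfamiliar with high content quantification, the per-cell area measurements used in the screen can be approximated with a simple threshold-and-label routine; this scikit-image sketch is a generic illustration, not the pipeline actually used in the study.

```python
from skimage import filters, measure

def total_signal_area(channel_img):
    # Otsu-threshold one fluorescence channel and sum the segmented area
    mask = channel_img > filters.threshold_otsu(channel_img)
    labels = measure.label(mask)
    return sum(region.area for region in measure.regionprops(labels))

# mito_img, dapi_img: 2-D arrays for the mitochondrial and nuclear channels.
# Dividing the total mitochondrial area by the nucleus count approximates
# the mean mitochondrial area per cell compared across knockdowns.
```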
To further demonstrate the involvement of NSD2 in senescence, we performed qRT-PCR and Western blot analyses to investigate the levels of NSD2 mRNA and protein expression in senescent cells. We found that NSD2 was downregulated at the mRNA and protein levels in both OIS and RS cells, while the mRNA levels of other known histone methyltransferases against H3K36, such as NSD1 and NSD3, were unchanged or only modestly downregulated (Figure 1f,g, Figure S1n). We next overexpressed NSD2 to test whether it acts to prevent senescence. Overexpression of NSD2 did not affect the levels of the induction of SA-β-Gal and p16 expression in OIS cells, suggesting that gain of function of NSD2 is not sufficient to prevent senescence (Figure S1o,p,q).

FIGURE 1 RNAi-based screen revealed that loss of NSD2 induces mitochondrial activation and cellular senescence. (a) Immunofluorescence of mitochondria in IMR-90 human fibroblasts with oncogene-induced senescence (OIS) and replicative senescence (RS). OIS cells were induced by treating IMR-90 ER:Ras cells with 100 nM 4-OHT for 6-8 days. RS cells were prepared by repeated passages for 10-12 weeks (Tanaka et al., 2017). To assess mitochondrial signals, the total area of fluorescence signals per cell was calculated in growing, OIS, early passaged, and late passaged (RS) cells (each n > 600 cells). Data are shown as means ± SD; n = 3. Scale bar, 100 μm. (b) Scatter plot between mean mitochondrial area and mean nuclear area in each knockdown cell. Data are shown as means ± SD; n = 3. (c) Western blot analysis of NSD2 at 24 hr and day 3 in Ctr- and NSD2-KD IMR-90 cells. (d) OCR on day 4 in Ctr- and NSD2-KD IMR-90 cells. Data are shown as means ± SD; n = 4. Respiratory chain inhibitors were serially added to the culture at the indicated time points. Statistical analysis was performed between control siRNA and each NSD2 siRNA. (e) SA-β-Gal staining on days 1-12 in Ctr- and NSD2-KD IMR-90 cells (each n > 300 cells). Data are shown as means ± SD; n = 3. Scale bar, 100 μm. Statistical analysis was performed between control siRNA and each NSD2 siRNA. (f) qRT-PCR of NSD2 and other known H3K36 methyltransferases in growing, OIS, early passaged, and late passaged (RS) IMR-90 cells. Data are shown as means ± SD; n = 3. (g) Western blot analysis of NSD2 in growing, OIS, early passaged, and late passaged (RS) IMR-90 and MRC5 human fibroblasts. *p < .05, **p < .01, using Student's t test.
| Loss of NSD2 downregulates the expression of cell cycle-associated genes and DNA repair genes
To elucidate the role of NSD2 in protecting cells from senescence, we examined gene expression changes in NSD2-KD cells. Consistent with the results above, the expression levels of SASP genes were mostly not increased in NSD2-KD cells, in contrast to SETD8-KD and OIS cells (Figure S2e). These data indicate that loss of NSD2 decreases the expression of cell cycle-associated genes and DNA repair genes without inducing massive DNA damage and SASP gene expression.
| NSD2 protein is enriched at the gene bodies of actively transcribed genes
To identify the epigenomic contribution and target genes of NSD2, we performed ChIP-seq using antibodies targeting NSD2 in proliferating IMR-90 cells (Figure S2f). NSD2 was remarkably enriched at gene bodies, preferentially at the 3′ region rather than at the transcriptional start site (TSS) and 5′ region of genes (Figure 3a,b). In combination with our mRNA-seq data, we found that NSD2 was enriched at highly expressed genes compared with genes expressed at low levels (Figure 3c). Further, 60% (11,597/19,473) of all protein-coding genes appeared to be positively enriched with NSD2, suggesting that NSD2 is widely distributed across actively transcribed genes. Furthermore, these enrichments were decreased in NSD2-KD cells.
We also observed a decrease of NSD2 enrichment at these gene loci in OIS cells (Figure 3g). Our data suggest that NSD2 is enriched at the gene bodies of actively transcribed genes, possibly to maintain their expression.
| Loss of NSD2 affects the levels of H3K36 trimethylation at NSD2-enriched gene bodies
By comparison with ENCODE datasets of histone modifications in IMR-90 cells, we found that the enrichment of NSD2 was positively correlated with transcriptionally active, gene body-enriched marks, such as H3K36me3, H3K4me1, H3K9me1, H4K20me1, and H3K79me1, and poorly linked with repressive marks, such as H3K27me3 and H3K9me3 (Figure 4a, Figure S3c). Among these marks, H3K36me3 was highly correlated with NSD2 in terms of the preferential distribution at the 3′ region of the gene bodies ( Figure 4b, Figure S3d). This is consistent with the role of NSD2 as a methyltransferase and a reader protein for H3K36me3 (Nimura et al., 2009;Vermeulen et al., 2010). To test whether loss of NSD2 alters the levels of H3K36 methylation at NSD2-target gene loci, we performed Western blot, immunofluorescence, and ChIP-qPCR analyses for H3K36 methylation marks. The total amounts of H3K36me3 and H3K36me2 did not change in NSD2-KD cells at 24 hr and slightly decreased on day 3 (Figure 4c). However, ChIP-qPCR revealed that the levels of H3K36me3 were significantly decreased at the gene body of both downregulated genes (MCM3 and LMNB1) and stably transcribed genes (FN1 and GAPDH) at 24 hr in NSD2-KD cells ( Figure 4d). In contrast, the levels of H3K36me2 and H3K36me1 were not changed at NSD2-target gene loci in NSD2-KD cells (Figure 4e,f). We further confirmed the stability of H3K36me2 levels using other antibodies ( Figure S3e,f). We also confirmed a decrease of H3K36me3 levels at NSD2-target gene loci in OIS cells, while the levels of H3K36me2 and H3K36me1 were not changed or even increased at these loci (Figure 4g, Figure S3g,h,i). Collectively, these data suggest that NSD2 is involved in maintenance of the levels of H3K36me3 at NSD2-target gene bodies.
| Loss of NSD2 downregulates the promoter activities of RB-associated genes
To investigate the specificity of expression changes in NSD2-target genes (Figures 3 and 4), we examined the promoters of NSD2-target genes. Interestingly, loss of NSD2 resulted in decreased promoter activities of these genes (Figure 5d). In contrast, loss of RB did not fully restore the reduced expression of RBL2-associated, RB-nonassociated genes, such as AURKB and CCNA2, in OIS and NSD2-KD cells (Figure 5d, Figure S4d). In addition, knockdown of RBL2 did not restore the reduced expression of RBL2-associated, RB-nonassociated genes in NSD2-KD cells (Figure S4e). Interestingly, knockdown of RB decreased the amount of SA-β-Gal in NSD2-KD cells (Figure S4f).
These results indicate that loss of NSD2 downregulates cell cycle-associated genes and promotes senescence at least in part via the RB-mediated pathway.
| NSD2 is controlled in a cell cycle-dependent manner and is required for expression of late serum response genes
NSD2 is highly expressed in several types of cancers, and depletion of NSD2 causes growth retardation during development in mice (Nimura et al., 2009; Vougiouklakis et al., 2015). To elucidate the physiological role of NSD2, we analyzed the correlation of expression levels between NSD2 and NSD2-target genes in 37 human normal tissues and 1,019 human cancer cell lines using the Human Protein Atlas (HPA) and Cancer Cell Line Encyclopedia (CCLE), respectively (Barretina et al., 2012; Uhlén et al., 2015). The expression levels of NSD2 and of the genes downregulated in NSD2-KD cells were positively correlated in both normal tissues and cancer cell lines (Figure 6a, Figure S5a). We also observed a negative correlation between the levels of NSD2 and CDKN1A/p21 in cancer cell lines (Figure S5b,c), whereas most of the genes upregulated in NSD2-KD cells showed no negative correlation with NSD2.
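The tissue/cell-line correlation analysis can be reproduced generically as below; the expression matrix file is a hypothetical placeholder standing in for the HPA/CCLE downloads, and the gene list is just an example drawn from the text.

```python
import pandas as pd
from scipy.stats import pearsonr

# rows = cell lines (CCLE) or tissues (HPA), columns = genes,
# values = log-transformed expression; the file name is hypothetical.
expr = pd.read_csv("expression_matrix.csv", index_col=0)

for gene in ["MCM3", "LMNB1", "CCNA2", "CDKN1A"]:  # example NSD2-responsive genes
    r, p = pearsonr(expr["NSD2"], expr[gene])
    print(f"NSD2 vs {gene}: r = {r:.2f}, p = {p:.1e}")
```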
We further found a positive correlation between Nsd2 and cell cycle-associated genes in mouse normal tissues by qRT-PCR (Figure 6b). Nsd2 was highly expressed in testis, thymus, seminal vesicle, and spleen, and expressed at lower levels in skeletal muscle, skin, and heart (Figure S5d).
We further found that the expression of Nsd2 was decreased in spleen of aged mice compared with levels in young mice ( Figure S5e).
The correlated expression levels between NSD2 and cell cycle-associated genes indicate that the expression of NSD2 is regulated by growth signaling. To examine the expression of NSD2 during cell cycle progression, we used serum stimulation after serum starvation in IMR-90 cells (Figure S5f). Notably, the levels of NSD2 were remarkably decreased in serum-starved, quiescent cells compared with cells in growing conditions (Figure 6c). Furthermore, NSD2 was upregulated after serum addition (during S and G2 phases) in parallel with cell cycle-associated genes. Importantly, knockdown of NSD2 dampened the upregulation of cell cycle-associated genes without affecting the induction of immediate early genes such as FOS and JUN (Figure 6c, Figure S5g). Consistent with the mRNA expression levels, NSD2 protein was enriched during S and G2 phases compared with levels in G1 phase (Figure S5h). In contrast, in RS cells (late), NSD2 was not further decreased by serum starvation, and serum stimulation did not induce expression of NSD2 and cell cycle-associated genes (Figure S5i). Taken together, these results suggest that the expression of NSD2 is induced by serum stimulation and that NSD2 is required for the induction of cell cycle-related genes.
| DISCUSSION
Here, we demonstrated that NSD2 plays a pivotal role in preventing senescence-associated epigenomic remodeling in human fibroblasts through maintaining H3K36me3 levels and RB-mediated cell cycle regulation. Our initial RNAi screen revealed that loss of NSD2 increased mitochondrial content, which is a hallmark of senescent cells. Consistent with this finding, depletion of NSD2 increased mitochondrial OCR and eventually induced cellular senescence (Figure 1). Transient depletion of NSD2 was sufficient to reduce the expression levels of cell cycle-associated genes, at least in part by the RB-mediated pathway (Figures 2 and 5). We previously reported that loss of another histone methyltransferase, SETD8, activated mitochondrial respiration in an RB-dependent manner during senescence in human fibroblasts (Tanaka et al., 2017). Knockdown of RB abolished mitochondrial activation during OIS (Takebayashi et al., 2015). Here, we showed that loss of NSD2 also contributes to the remodeling of mitochondrial activities during senescence by reinforcing RB function. Together, our data revealed a novel link between NSD2 and RB in preventing senescent transition and senescence-associated metabolic remodeling.
How loss of NSD2 activates RB function remains unclear. RB is activated by p21 and p16 through inhibition of cyclin-dependent kinases (Figure S2d,e). In contrast, a previous report suggested that NSD2 can regulate p53 protein stability by methylating Aurora kinase A (AURKA) (Park, Chae, Kim, Oh, & Seo, 2018). Methylated AURKA interacts with p53 and accelerates proteasome-mediated degradation of p53. Thus, depletion of NSD2 might stabilize p53 via loss of AURKA methylation, resulting in p21 induction. NSD2 was also dynamically induced during S and G2 phases and was essential for the induction of cell cycle-associated genes following serum stimulation (Figure 6c, Figure S5h). Taken together, our data suggest that NSD2 acts as a cell cycle regulator by cooperating with p53, p21, and RB. Interestingly, overexpression of NSD2 did not affect the levels of induction of SA-β-Gal and p16 expression in OIS cells. Further studies are required to clarify whether NSD2 contributes to the prevention of senescence via methylation of nonhistone proteins.
NSD2 was preferentially enriched at the gene bodies of actively transcribed genes, concordant with the enrichment of H3K36me3 in human fibroblasts (Figure 3). The correlation of NSD2 and H3K36me3 was also observed by ChIP-seq in K562 human leukemia cells (Ram et al., 2011). Notably, NSD2 directly binds H3K36me3 possibly via its PHD or PWWP domain (Sarai et al., 2013;Vermeulen et al., 2010).
We observed a significant decrease of H3K36me3 levels at the NSD2-enriched gene bodies, including at stably expressed genes, in NSD2-KD cells (Figure 4). Similarly, the levels of H3K36me3 were decreased at these loci in OIS cells. NSD2 was previously reported to be capable of adding trimethylation on H3K36 (Nimura et al., 2009). Furthermore, loss of NSD2 caused a reduction of H3K36me3 levels at its target gene bodies (Martinez-Garcia et al., 2011; Sarai et al., 2013; Yang et al., 2012).
Although the capability of NSD2 to directly confer trimethylation on H3K36 is still controversial (Kuo et al., 2011;Li et al., 2009;Poulin et al., 2016), our ChIP and ChIP-seq data strongly suggest that NSD2 is required for the maintenance of H3K36me3 levels at the gene bodies of actively transcribed genes by directly acting on the chromatin. Notably, previous evidence suggested that NSD2 might contribute to the persistence of pre-existing H3K36me3 levels rather than the establishment of new H3K36me3 at interferon response genes (Sarai et al., 2013).
Given that the expression levels of NSD2 are dynamically regulated during the cell cycle (Evans et al., 2016) and that NSD2 is overexpressed in various types of cancer cells (Vougiouklakis et al., 2015), the role of NSD2 might vary in cell cycle-and dose-dependent manners as well as in a genomic locus-specific manner.
Misregulation of H3K36me3 is one of the hallmarks in aged model organisms (Pu et al., 2015; Sen et al., 2015; Wood et al., 2010). We observed a marked change in expression levels at only a part of the NSD2-enriched genes in NSD2-KD cells (Figures 2 and 3). Therefore, what is the biological significance of H3K36me3 maintenance by NSD2? H3K36me3 is recognized by DNA methyltransferase 3B (DNMT3B) and protects genes from spurious RNA polymerase II entry and cryptic transcription initiation (Neri et al., 2017). Furthermore, DNA repair-associated proteins, such as human MutS homolog 6 (hMSH6) and lens epithelium-derived growth factor (LEDGF), interact with H3K36me3 and facilitate DNA repair at gene bodies (Daugaard et al., 2012; Li et al., 2013; Pfister et al., 2014). p52, a short isoform of LEDGF, and MORF-related gene 15 (MRG15) also bind H3K36me3 and mediate alternative splicing (Luco et al., 2010; Pradeepa, Sutherland, Ule, Grimes, & Bickmore, 2012). Thus, NSD2 may function in epigenomic maintenance and gene regulation to protect cells from senescence by preserving H3K36me3 levels.
Global correlation of gene expression levels between NSD2 and the cell cycle-associated genes in various tissues and cancer cell lines suggested that NSD2 is implicated in cell cycle regulation in diverse cell types ( Figure 6). Indeed, haploinsufficiency of Nsd2 gene in mice resulted in developmental growth retardation (Nimura et al., 2009) and defects in long-term maintenance of B and T lymphocytes during aging (Campos-Sanchez et al., 2017). Interestingly, we also observed a decrease of Nsd2 expression levels in aged spleen tissue in mice ( Figure S5e). These observations emphasized the importance of precise control of NSD2 expression to protect cells from aging as well as cancer.
In summary, our results show that NSD2 has an epigenomic role together with RB: NSD2 maintains H3K36me3 at the bodies of actively transcribed genes and cell cycle-related genes to prevent cellular senescence.
| EXPERIMENTAL PROCEDURES
Full experimental procedures are included in the Supporting Information.
| ChIP-qPCR and ChIP-seq analyses
For ChIP-qPCR analysis, cells were crosslinked with PBS containing 1% formaldehyde for 10 min. Cells were lysed, and the lysates were sonicated using a Bioruptor (Cosmo Bio) with 10-30 sonications of 30 s each with 30 s intervals. Sonicated samples were then incubated with 2-4 μg of each antibody at 4°C overnight, followed by pull-down assay using protein A/G-conjugated agarose beads (Millipore). After decrosslinking and RNase and Proteinase K treatments, DNA was extracted by phenol-chloroform extraction and subjected to qPCR using the primers listed in Table S1.
For genome-wide NSD2 distribution analysis, sonicated samples were incubated with antibodies conjugated with Dynabeads M-280 sheep anti-mouse IgG (Invitrogen, 11201D) at 4°C overnight, followed by pull-down assay using a magnetic stand. Extracted DNA was subjected to adaptor ligation using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs). Sequencing was performed on a NextSeq 500 (Illumina) with 75-bp single-end reads, and data analyses were performed on the Galaxy platform. The reads were trimmed using Trimmomatic v.0.36.3 and mapped to the hg19 reference genome using BWA v.0.7.15.1. After removing duplicate reads using Picard MarkDuplicates v.1.136.0, the reads were normalized to those of input by deepTools bamCompare v.2.5.0.0 and visualized with the Integrative Genomics Viewer. Distributions around gene loci were calculated and visualized using deepTools computeMatrix and plotProfile, respectively. The number of reads in each gene was calculated by featureCounts v.1.4.6.p5. For correlation analyses between NSD2 and histone modifications, the reads were calculated with deepTools multiBigwigSummary at 10 kb bin size and visualized with plotCorrelation. The peak detection of RB and RBL1 was performed by MACS v.1.0.1. All histone modification ChIP-seq data in IMR-90 cells were obtained from the ENCODE project (https://www.encodeproject.org) (Consortium, 2012). RB and RBL2 ChIP-seq data in IMR-90 cells were obtained from GSE19899 (Chicas et al., 2010). Global Run-On (GRO)-seq data were obtained from GSE43070 (Jin et al., 2013). The ChIP-seq data were deposited in the GEO database under accession code GSE138067.
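A condensed sketch of the mapping and normalization steps named above, orchestrated from Python; the command lines are simplified (adapter settings, thread counts, file paths, and the samtools sort/index steps are assumptions) and would need the exact options used in the study.

```python
import subprocess

def run(cmd):
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

run("trimmomatic SE raw.fastq.gz trimmed.fastq.gz SLIDINGWINDOW:4:15 MINLEN:36")
run("bwa mem hg19.fa trimmed.fastq.gz | samtools sort -o mapped.bam -")
run("picard MarkDuplicates I=mapped.bam O=dedup.bam M=dup_metrics.txt "
    "REMOVE_DUPLICATES=true")
run("samtools index dedup.bam")
# Normalize ChIP reads to input as a bigWig ratio track for IGV/deepTools
run("bamCompare -b1 dedup.bam -b2 input_dedup.bam -o nsd2_vs_input.bw")
```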
| Assessment of mitochondrial activities
Real-time monitoring of cellular OCR was performed with an XF24 extracellular flux analyzer (Seahorse Bioscience) as previously described (Tanaka et al., 2017). For determination of mitochondrial mass, cells were stained with 5 μg/ml JC-1 in culture medium for 30 min at 37°C, followed by flow cytometric analysis.
ACKNOWLEDGMENT
We thank Dr. Kiyoe Ura (Chiba University, Japan) and the members of our laboratory for discussions and technical assistance.
CONFLICT OF INTEREST
None declared.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available in GEO at [https://www.ncbi.nlm.nih.gov/geo/], reference numbers [GSE86546, GSE43070, GSE60652, GSE19899, and GSE16256]. | 2020-06-24T13:07:00.098Z | 2020-06-22T00:00:00.000 | {
"year": 2020,
"sha1": "0ef3c243bec914b380e994a8ae2db69590116ad0",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/acel.13173",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "a329090b6a98eb3e64a399343ffb397c346da77a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
49333848 | pes2o/s2orc | v3-fos-license | Application of long single-stranded DNA donors in genome editing: generation and validation of mouse mutants
Background Recent advances in clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) genome editing have led to the use of long single-stranded DNA (lssDNA) molecules for generating conditional mutations. However, there is still limited available data on the efficiency and reliability of this method. Results We generated conditional mouse alleles using lssDNA donor templates and performed extensive characterization of the resulting mutations. We observed that the use of lssDNA molecules as donors efficiently yielded founders bearing the conditional allele, with seven out of nine projects giving rise to modified alleles. However, rearranged alleles including nucleotide changes, indels, local rearrangements and additional integrations were also frequently generated by this method. Specifically, we found that alleles containing unexpected point mutations were found in three of the nine projects analyzed. Alleles originating from illegitimate repairs or partial integration of the donor were detected in eight projects. Furthermore, additional integrations of donor molecules were identified in four out of the seven projects analyzed by copy counting. This highlighted the requirement for a thorough allele validation by polymerase chain reaction, sequencing and copy counting of the mice generated through this method. We also demonstrated the feasibility of using lssDNA donors to generate thus far problematic point mutations distant from active CRISPR cutting sites by targeting two distinct genes (Gckr and Rims1). We propose a strategy to perform extensive quality control and validation of both types of mouse models generated using lssDNA donors. Conclusion lssDNA donors reproducibly generate conditional alleles and can be used to introduce point mutations away from CRISPR/Cas9 cutting sites in mice. However, our work demonstrates that thorough quality control of new models is essential prior to reliably experimenting with mice generated by this method. These advances in genome editing techniques shift the challenge of mutagenesis from generation to the validation of new mutant models. Electronic supplementary material The online version of this article (10.1186/s12915-018-0530-7) contains supplementary material, which is available to authorized users.
Background
Classical gene targeting employing embryonic stem cells has long been the principal method to introduce complex alleles into the mouse genome [1]. More recently, microinjection of an RNA-guided engineered nuclease (RGEN) together with a single-stranded oligodeoxynucleotide (ssODN) has revolutionized our ability to direct mutations in vivo [2]. However, clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9)-aided knock-ins of larger cassettes or loxP sites directly into one-cell mouse embryos [3,4] were breakthroughs that have remained technically very challenging [5]. Equally, CRISPR/Cas9 reagents and ssODNs have become widely used for the introduction of point mutations in one-cell embryos (see examples in [6][7][8]). However, particular locations within genomes, including sequences that are highly conserved and/or repeated, regions with a low number or absence of -NGG tri-nucleotides or sequences without active single guide RNA (sgRNA) close to the target can represent a barrier to the generation of specific mutants [9].
Miura and colleagues [10] first proposed long single-stranded DNA (lssDNA) molecules, larger than standard chemically synthesized oligonucleotides, as an efficient alternative donor template for RGEN-aided homologous recombination (HR). The authors recently extended the method to the creation of conditional alleles and tag insertions, showing the generation of sequence-perfect alleles [11]. We and others documented that CRISPR/Cas9-aided genome editing can give rise to unexpected allele rearrangements ("illegitimate repairs" [7], "KI + indels" [9,12]); therefore, thorough validation of new models is essential to ensure reproducibility of the studies employing these models [12][13][14][15]. However, limited data are available on unexpected events arising from the use of lssDNA and the associated requirements for the quality control (QC) of new models. With our extensive experience in the generation of conditional alleles through large-scale mouse model production [16,17], we have developed a strategy for validation of these alleles.
Here, we have extended the application of lssDNA to the generation of more conditional knock-out (cKO) alleles directly in the embryo. We also produced point mutations where the desired nucleotide change is remote from active CRISPR cutting sites, which so far had proved technically challenging with the available protocols. Although not all attempts were successful, we confirm that new designs employing lssDNA indeed facilitated mutant production for cKOs and particular point mutations that had previously been challenging to generate. Furthermore, we show that novel point mutations and imperfect and/or off-target donor integration(s) can occur in the process of mutagenesis. This work emphasizes the importance of a comprehensive strategy for the QC of new mutants. We conclude that the utilization of lssDNA donor templates shifts the challenge of mutagenesis from generation to the validation of new mutant models.
Results
Generation of a conditional knock-out allele
Production of F0 animals
Proof of principle for the RGEN-aided generation of conditional alleles employing two CRISPR/Cas9 cuts and two separate ssODN templates as donors was published in the early days of CRISPR/Cas9-aided mutagenesis [3]. However, the use of this strategy for allele generation has not flourished in the literature in the same way as other CRISPR-directed mutagenesis applications [18], most likely because its success requires two concurrent homology-directed recombination events occurring on the same allele, which remain less frequent than non-homologous end joining (NHEJ) events [5]; this is in keeping with our own experience of the approach (see examples below). We therefore decided to pilot the use of lssDNAs as a possible alternative to ssODN donors.
As a first test case, we aimed to generate a conditional allele in Syt7 by flanking the critical exon ENSMUSE00000225700 with loxP sites (Fig. 1a). This exon was chosen as defined by Skarnes and colleagues [19]: it is common to the majority of coding transcripts of the gene, and its ablation results in frame-shift transcripts. Two pairs of sgRNAs were designed, centred on each of the genomic sequences to be interrupted by a loxP site (Fig. 1a), and synthesized to enhance the likelihood of simultaneous cuts on both sides of the same allele. An lssDNA donor corresponding to the floxed allele was generated as per Miura and colleagues ([10]; see Methods). Specifically, a double-stranded DNA template comprising a T7 transcription promoter followed by the 1149 bp sequence of the donor was obtained commercially (gBlock®, Integrated DNA Technologies (IDT); Fig. 1). An lssDNA was synthesized by in vitro transcription (IVT) and reverse transcription (detailed in Methods). The sgRNAs and lssDNA (sequences provided in Additional file 1: Table S1) were co-injected with Cas9 mRNA into one-cell embryos. One hundred thirty-eight injected embryos were re-implanted in pseudopregnant females. Seventeen pups were weaned and ear biopsies taken for screening of new alleles (numbers summarized in Additional file 1: Table S2, Syt7).
Screening of F0 generation and genotyping of F1 animals
As animals of the F0 generation were likely to be mosaic, we analyzed them by screening for the presence of the allele of interest [13]. Polymerase chain reaction (PCR) amplicons were produced from genomic DNA with primers flanking the homology arms and external to the donor (Syt7 primers R1 and F1, Fig. 1a). Their analysis on agarose gels showed two founders (Fig. 1b, animals Syt7-1 and Syt7-6) containing deletions. The PCR products from founder animals were purified and analyzed by Sanger sequencing. The sequencing showed that a total of 10 animals out of 17 were mutated on target (Syt7, Table 1). Among them, five pups had indels at either or both the 5′ and 3′ guide target sites. Three other animals (Syt7-1, Syt7-6 and Syt7-9) carried alleles with deletions of the sequence flanked by the two pairs of sgRNAs, corresponding to non-cKO alleles. The remaining two mutants (Syt7-4 and Syt7-8) were carriers of the designed cKO allele, with sequencing traces suggesting Syt7-8 to be homozygous and Syt7-4 compound heterozygous with one cKO allele and one allele including the 3′ loxP and an indel in 5′ (Additional file 2: Figure S1).
Positive founders Syt7-4 and Syt7-8 were mated to wild-type (WT) animals, and the progeny (F1) were analyzed. In contrast to the analysis of mosaic F0 animals, sequencing of PCR fragments amplified from F1 individuals allowed for definitive characterization of the edited alleles [13]. The outcome of the analysis of F1 animals by PCR and sequencing, employing the same primers used for screening F0 animals, is summarized in Table 2. Sequencing showed successful transmission of the correctly mutated sequence (cKO allele) by both founders to their progeny (individuals Syt7-4.1d and Syt7-8.1c, e, f and g).
Screening of mutants obtained by co-injection of transcription activator-like effector nuclease (TALEN) and ssODNs showed that random integration of ssODNs can occur when using such a mutagenesis approach [20], illustrating the requirement of further validation of positive animals by a method allowing copy counting. We therefore checked for the presence of additional copies of the lssDNA donor sequence in the genome of F 0 and F 1 animals using digital droplet PCR (ddPCR) and a TaqMan™ assay centred on the critical exon present in the donor sequence run against a known two-copy reference assay (Syt7 exon 7, Dot1l reference assay, as per [13]). Table 2 shows the copy number of the donor sequence in each individual, Fig. 1 Generation of a Syt7 floxed allele. a Diagrammatic representation of the genomic sequence with the Syt7 critical exon highlighted, the corresponding template for lssDNA synthesis and the position of sgRNAs for in vivo delivery together with the primer locations used for reverse transcription and for genotyping. Note loxP sites in the lssDNA prevent reprocessing of repaired alleles by CRISPR-Cas9 complex. Diagram shows the process for the generation of lssDNA through in vitro transcription and reverse transcription. HA homology arm. b PCR products amplified from genomic DNA extracted from the 17 F 0 born from the microinjection session using Syt7-F1 and Syt7-R1 primers. L1 = 1 kb DNA molecular weight ladder (thick band is 3 kb). L2 = 100 bp DNA molecular weight ladder (thick bands are 1000 and 500 bp). Sequence trace data derived from animals Syt7-4 and Syt7-8 are displayed in Additional file 2: Figure S1. Table 1 are shown in Additional file 2: Figure S1, Additional file 3: Figure S2, Additional file 4: Figure S3, Additional file 5: Figure S4, Additional file 6: Figure S5, Additional file 7: Figure S6, Additional file 8: Figure S7, Additional file 9: Figure S8, Additional file 10: Figure S9 and Additional file 11: Figure illustrating the presence of additional copies in some F 0 (Syt7-8) and F 1 individuals (Syt7-8.1c, d, g and h). In particular, copy counting for founder Syt7-8 (which was suggested as a potential homozygous for the cKO allele by PCR and sequencing) also revealed additional integrations of the lssDNA donor (close to 2.8 copies per genome, Table 2). The copy number obtained in the founder is not a clear integer number, which is not impossible in a mosaic animal. Analysis of the F 1 progeny confirmed the presence of an additional integration (Syt7-8.1c, d, g and h) and strongly suggested that this event was not physically linked to the targeted allele in the founder, as this integration could be segregated from the mutated allele in other F 1 progeny (Syt7-8.1e and f ).
Copy counting of the critical exon also confirmed deletions of the target region in some F0 (Syt7-4) and F1 individuals (Syt7-4.1a, b and c; Syt7-8.1a). The ddPCR analysis also showed a reduced copy number of exon 7 in F1 animals initially thought to be WT, as an exon deletion had not been detected by standard PCR with external primers (Syt7-4.1a, b and c; Syt7-8.1a; Table 2). This suggests that these animals were carrying a deletion larger than the segment flanked by the genotyping primers.
In summary, the delivery of the lssDNA donor together with CRISPR/Cas9 reagents to a modest number of one-cell embryos produced mosaic animals that transmitted a conditional allele. Some of the transmitting progeny were excluded upon further validation due to additional integrations of the donor sequence.
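Copy counting by ddPCR reduces to a Poisson correction of droplet counts; a hedged sketch of the arithmetic (droplet numbers invented for illustration) shows how a copy-number call of roughly 2.8 per genome, of the kind seen for founder Syt7-8, arises.

```python
import numpy as np

def copies_per_genome(pos_t, tot_t, pos_r, tot_r, ref_copies=2):
    # Poisson-corrected mean copies per droplet for target and reference
    lam_t = -np.log(1 - pos_t / tot_t)
    lam_r = -np.log(1 - pos_r / tot_r)
    # Scale to the reference assay of known copy number (e.g. Dot1l = 2)
    return ref_copies * lam_t / lam_r

print(copies_per_genome(4200, 15000, 3100, 15000))  # ~2.84 with these made-up counts
```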
Other conditional alleles
Production of F0 animals
The pilot was next extended to include a further eight genes with the same design principles (Table 1 and Additional file 1: Table S2): two sgRNAs were selected on each side of a critical exon, targeting the genomic sequences to be interrupted by the loxP sites (details of sequences are given in Additional file 1: Table S1, designs in Additional file 4: Figure S3). Refining our strategy in the process of extending the pilot, we introduced standard sequences flanking the loxP sites in the designs, thus allowing us to re-use established diagnostic tests for the validation of alleles (restriction enzyme sites or LoxP-F and LoxP-R primers in Additional file 4: Figure S3). This facilitated the analysis of animals. CRISPR/Cas9 reagents and lssDNA were delivered to C57BL/6NTac one-cell embryos by pronuclear injection.
Screening of F0 generation and genotyping of F1 animals
F0 and F1 animals were analyzed according to the same strategy as that used for the Syt7 conditional allele: PCR using primers external to the donor homology arms (or two PCRs bridging the homology arms, depending on PCR efficiency) and a PCR amplifying the region flanked by the two loxP sites, all of which were analyzed by Sanger sequencing (Additional file 5: Figure S4, Additional file 6: Figure S5, Additional file 7: Figure S6, Additional file 8: Figure S7, Additional file 9: Figure S8, Additional file 10: Figure S9, Additional file 11: Figure S10 and Additional file 12: Figure S11). (Table 1 footnotes: revealed by copy number, on or off target; deletion including at least one external genotyping primer site.) A total of 279 F0 animals were analyzed, and 129 animals were identified as bearing mutations. Seven out of nine projects yielded founders bearing the conditional allele, with an additional one yielding a floxed allele with an unwanted point mutation. One project (Rapgef5) yielded only one founder bearing a conditional allele, which died before mating age. Correct conditional alleles were transmitted to the F1 generation for four out of the seven projects where founder progeny were analyzed (Table 1). However, in at least three out of nine projects, other alleles were detected that contained unexpected point mutations identified at the F0 generation (Inpp5k project, Additional file 12: Figure S11h; 6430573F11Rik project, Additional file 13: Figure S12a; Cx3cl1 project, Additional file 13: Figure S12b and c).
It is also noteworthy that illegitimate repairs [7] or partial integration(s) of the donor were detected frequently (in eight out of nine projects analyzed; see example in Additional file 12: Figure S11d), highlighting the requirement for extensive allele validation by PCR and sequencing. These events, point mutations and partial and/or rearranged integrations, are reported as illegitimate repairs in Table 1.
Interestingly, F0 animals with exon deletions were generated in all but one project as a by-product. Whenever null animals were required for ongoing research, these founders were also mated (numbers in brackets, Table 1). So far, germline transmission (GLT) of this additional allele was obtained in five out of six projects where positive founders were bred.
It is noteworthy that two of these nine projects (Ikzf2 and Usp45) had previously been attempted employing ssODNs or plasmids without yielding founders with conditional alleles, in contrast to the subsequent attempts with lssDNA donors (Additional file 1: Table S3). F0 and F1 animals containing the cKO alleles were further validated by copy counting with a TaqMan™ assay centred on the floxed region. Importantly, copy counting of the floxed region in combination with the outcome of the targeted allele validation revealed additional integrations in four out of seven projects analyzed (Table 1).
Point mutations remote from active sgRNA cutting site
Production of F0 animals
Finally, we assessed whether the production of a point mutation distal from an active sgRNA cutting site, which had so far been unsuccessful by repeated attempts using other methods, could also be facilitated by the use of lssDNA. The first target for this pilot was the generation of the Gckr P446L point mutation in C57BL/6NTac mouse embryos (sequence change illustrated in Additional file 15: Figure S14). We initially designed a strategy according to the standard approach, employing a ssODN and one efficient and specific sgRNA cutting as close as possible to the targeted nucleotide. However, some factors limited the design options, such as the close proximity of the target to the exon-intron junction and splice sites that should not be altered. Furthermore, the poor specificity of the target sequence (conserved and repeated at two additional locations in the mouse genome; GRCm38.p5:10:82265447-82265469/12:21568953-21568975) rendered many guides unspecific. The sgRNA closest to the target nucleotide (sgRNA_20, Fig. 2a) was shown to be inactive by a Guide-it™ assay, in which CRISPR/Cas9 nuclease activity is assessed on a target DNA fragment in vitro (Fig. 3). This was subsequently confirmed by the fact that no mutagenesis was detected in microinjection session 1, where this sgRNA was used. Therefore, the closest efficient (as confirmed by the Guide-it™ assay) and specific sgRNA that could be selected cut 34 nt away from the targeted base pair (sgRNA_3, Figs. 2a and 3). Thus, our next strategy employed sgRNA_3 and a ssODN donor, although a distance larger than 30 bp between the target sequence and the cutting site of the sgRNA can represent a barrier to the generation of a specific point mutation [9]. In addition to the targeted nucleotide change, a silent mutation was included in the ssODN donor template in order to abolish the protospacer adjacent motif (PAM) of the selected sgRNA and prevent re-processing of the mutated allele by the CRISPR/Cas9 system (Fig. 2a). The sgRNA activities were checked in vitro (Fig. 3), and each RNA was co-injected with Cas9 mRNA and the ssODN, as per the designs shown in Fig. 2a and Additional file 1: Table S1.
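The design constraint described here, finding a specific protospacer whose cut site lies close to the base to be edited, can be explored with a simple PAM scan; this sketch assumes SpCas9's NGG PAM and a blunt cut 3 bp 5′ of the PAM, applied to an arbitrary sequence window around the target.

```python
import re

def spcas9_sites(seq, target_pos):
    """List candidate protospacers with the distance from their expected
    cut site to the nucleotide to be edited (positions are 0-based)."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]{21}GG))", seq):   # 20-mer + NGG, plus strand
        cut = m.start() + 17                            # 3 bp 5' of the PAM
        hits.append(("+", m.group(1), abs(cut - target_pos)))
    for m in re.finditer(r"(?=(CC[ACGT]{21}))", seq):   # CCN... = NGG PAM, minus strand
        cut = m.start() + 6
        hits.append(("-", m.group(1), abs(cut - target_pos)))
    return sorted(hits, key=lambda h: h[2])
```

Candidates would still need filtering for genome-wide specificity and empirical activity (e.g. by a Guide-it™ assay), as the Gckr example illustrates.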
We anticipated that generating the desired mutation would be challenging, as the target base is a sub-optimal 34 base pairs away from sgRNA_3's cut site. We therefore performed multiple injection sessions with two different ssODN designs (Gckrdonor_2 and Gckrdonor_3, centred or offset towards the targeted mutation, respectively; sequences in Additional file 1: Table S1) to enhance the likelihood of obtaining the desired point mutation. The outcome of these microinjections was analyzed by PCR and sequencing of the region of interest in a total of 90 pups and is summarized in Table 3. Although the silent mutation was detected in F 0 animals on five occasions, it was not accompanied by the mutation of interest (Table 3 and example in Fig. 4a, ssO-Gckr P446L -54). Sequencing data from founders are shown in Additional file 16.
We subsequently designed an alternative strategy employing a larger (339 bases) lssDNA sequence and two sgRNAs flanking the region containing the targeted nucleotide. The sgRNAs were selected to introduce double-stranded breaks on each side of the target (40 and 98 nt away in 5′ and 3′, respectively), and their activity was checked in vitro. We consequently selected sgRNA_5.2 and sgRNA_3.1 as they were shown to be most active in vitro (Figs. 2b and 3). The donor sequence was designed with 100 nt homology arms flanking the cut sites, silent mutations that modify the seed sequences of the selected sgRNAs to prevent re-processing and the targeted base change (Fig. 2b). The lssDNA was synthesized in accordance with prior experiments and co-injected with Cas9 mRNA and the two sgRNAs in a single session, the outcome of which is shown in Table 3. Twenty-two pups were weaned, and ear biopsies were taken to screen for new alleles.
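To make the described donor layout concrete, the following Python sketch assembles a donor string from the components named above: homology arms flanking a core that carries the targeted base change plus silent seed mutations. It is a toy illustration under stated assumptions; the sequences, positions and the helper name are hypothetical, not the actual Gckr reagents.

```python
# Toy illustration of the lssDNA donor layout described above.
# All sequences and edit positions below are placeholders, not the
# actual Gckr reagents used in the study.

def build_lssdna_donor(left_arm: str, core: str, right_arm: str,
                       edits: dict) -> str:
    """Assemble a donor: homology arms flanking an edited core.

    `edits` maps 0-based positions within `core` to replacement bases,
    e.g. the targeted point mutation plus silent changes that disrupt
    the seed sequences of the two flanking sgRNAs.
    """
    bases = list(core)
    for pos, new_base in edits.items():
        bases[pos] = new_base
    return left_arm + "".join(bases) + right_arm

# Hypothetical 100-nt homology arms and a core region between cut sites.
left_arm = "A" * 100
right_arm = "T" * 100
core = "GATTACAGATTACAGATTACA"

donor = build_lssdna_donor(left_arm, core, right_arm,
                           edits={7: "T",    # targeted base change
                                  2: "C",    # silent change in sgRNA seed 1
                                  18: "G"})  # silent change in sgRNA seed 2
print(len(donor))  # arm + core + arm
```

In practice the arms and the edited core would of course be taken from the genomic reference around the two cut sites rather than placeholder strings.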
Screening of F 0 generation and genotyping of F 1 animals
Primers were designed in genomic regions flanking, but external to, the donor sequence to span the donor integration (Gckr P446L -F2 and Gckr P446L -R2 primers, Additional file 1: Table S1 and Fig. 2b). PCR and sequencing showed that 14 animals out of 22 were mutated on target. Among them, eight individuals carried the designed knock-in (KI) allele (Table 3), with sequencing traces suggesting that four animals were homozygous for the KI (Fig. 4b). Three other individuals showed illegitimately repaired alleles (Table 3; see the silent-mutation-only example in Fig. 4b).
Two of the four apparently homozygous positive F 0 s (lss-Gckr P446L -11, lss-Gckr P446L -19) were mated to WT animals for GLT of the mutated allele. The analysis of F 1 animals (summarized in Table 4) showed the successful transmission of the correctly mutated sequence by both founders (i.e. lss-Gckr P446L -11.1f, Fig. 4b).
Further model validation
We also checked for the presence of additional copies of the donor sequence in the genome of F 0 and F 1 animals using ddPCR and a TaqMan™ assay centred on the donor sequence (as per [13]). Table 4 shows the copy number of the donor sequence in each individual, illustrating a deletion likely spanning a fragment larger than the segments flanked by the genotyping primers (individuals lss-Gckr P446L -11.1a, b, d, e and h, Table 4). Although both founders appeared homozygous for the point mutation by Sanger sequencing, lss-Gckr P446L -11 also transmitted a deletion allele to its progeny, confirming mosaicism in this individual.
We next attempted to employ lssDNA donors for the generation of a mouse line bearing a point mutation in the Rims1 gene, which also had not been achieved with standard ssODN donors (Additional file 17: Figure S15 and Additional file 18: Figure S16; Additional file 1: Table S4, 1 positive founder/155 animals born (0.6%); this founder did not yield GLT, Additional file 1: Table S5). The new design employing lssDNA (Additional file 17: Figure S15) yielded founders bearing the correct mutation at a much higher frequency (4 positive founders/39 animals born (10%) with lssDNA donors), one of which achieved GLT of this second challenging point mutation (Additional file 1: Tables S4 and S5; Additional file 19: Figure S17; sequencing data in Additional file 20). Sequencing data from all founders for the point mutation (with ssODNs and lssDNA donors) are shown in Additional file 20.
Discussion
Novel strategy for challenging point mutations
Standard methods employing chemically synthesized oligonucleotides had not permitted the introduction of the Gckr P446L point mutation (Table 3), although evidence of partial integration of the donor (silent mutation) was recorded in five animals. This is likely due to the distance between the available sgRNA and the target sequence (34 bp). We have extended the pilot to a second challenging point mutation and again found that the use of a lssDNA donor yielded the generation and GLT of the point mutation (Additional file 1: Tables S4 and S5; Additional file 19: Figure S17), reinforcing the proposition that the use of lssDNA can rescue such unsuccessful projects. This study is the first proof of principle that the use of lssDNAs can lift the barrier to the introduction of hitherto challenging point mutations into the mouse genome, where no active and/or specific sgRNA is available in the immediate vicinity of the target site. Extending our capacity to generate point mutations further away from available optimal sgRNA target sites is of crucial importance, as it will enable the generation of thus far challenging mutants, including those models essential for the validation of candidate mutations causing human disease arising from whole genome sequencing (WGS) or quantitative trait locus (QTL) analysis [21].
[Legend to Table 3: The table shows the numbers of embryos and animals involved in mutagenesis attempts employing the injection of CRISPR/Cas9 reagents and oligonucleotide or lssDNA donors. The percentage of transferred embryos yielding live animals at weaning is shown in parentheses. The outcome of these attempts is also summarized. Note that sgRNA_20 was employed for the first microinjection session with ssODN_20 and substituted with sgRNA_3 and relevant donor ssODNs for subsequent sessions, as it was confirmed to be inactive. Sequencing data from this project are displayed in Fig. 4 (additional raw sequencing data are provided in Additional file 16). MS, microinjection session; n.d., not determined; SM, silent mutation.]
Alternative methods for production of lssDNA donors
We chose IVT followed by reverse transcription as a method to obtain lssDNAs [10]. Alternative methods employing combined nickase and nuclease digestion of a plasmid [22], use of a biotin-labelled primer [23], conversion of double-stranded DNA to ssDNA by nucleases (Guide-it™ Long ssDNA Production System, Takara) or chemical synthesis [11] have been proposed. However, synthesizing lssDNA donor molecules remains a challenge: the IVT-based method is both lengthy and expensive; the use of nucleases can give limited yield and requires DNA of impeccable quality; and chemical synthesis is expensive and also has size limitations. It will be important to refine or replace these methods to facilitate access to high-quality donors.
[Legend to Fig. 4 (sequence changes illustrated in Additional file 15: Figure S14): (a) ssODN donors only yielded introduction of the intended silent mutations, while (b) lssDNA yielded the desired mutation in some individuals (F 0 11, transmitting to 11.1f) and only the silent mutations in others (F 0 10). Note that founders appeared homozygous (ssO-Gckr P446L -54, lss-Gckr P446L -11 and lss-Gckr P446L -10) when analyzed by Sanger sequencing, but could also contain deletion alleles in trans, as suggested by copy counting (lss-Gckr P446L -11 in Table 4). A summary of the microinjection session outcomes is detailed in Table 3, and raw sequencing data are provided in Additional file 16.]
Efficiency of model generation
Many advancements in the rapidly evolving genome editing field have been published on the basis of a small number of experiments, and these have sometimes proven difficult to reproduce [24,25]. Our results support the view that lssDNAs facilitate the production of complex alleles, suggesting that the method as described by Quadros and colleagues [11] is sufficiently robust to be reproducible between laboratories. Two of these projects (Ikzf2 and Usp45) were initially attempted employing ssODNs or plasmids as donors, but only the switch to lssDNA yielded founders with conditional alleles, suggesting it is a more successful method (previous approaches and their outcomes are summarized in Additional file 1: Table S3). We note that other labs have achieved some success with ssODN donors and otherwise very similar methods for the generation of cKOs ([3]; this issue, Lanza et al. [18]). However, the use of lssDNA donors has proven more efficient in our hands than that of ssODNs when compared for the generation of the same mutations (the Ikzf2 conditional allele and the Gckr and Rims1 point mutations). In particular, it alleviates the challenge of integrating both loxP sites into the same allele when generating cKOs and facilitates the introduction of point mutations distant from active sgRNA cut sites.
It is not yet clear why lssDNAs are proving to be superior donor molecules in this context, but their particular efficiency is likely not due to the length of homology arms used in lssDNA donors (up to 100 bases), as much larger homologous sequences were present in plasmid donors.
However, not all projects were successful. The efficiency of this method is likely to be reliant on sufficiently active sgRNAs on both sides of the sequence to be integrated (i.e. the Acvr2b project did not yield conditional alleles or any deletions). It is therefore prudent to check the activity of sgRNAs in vitro and design the donor sequence according to which sgRNAs are the most active. Also, GLT of the floxed allele relies on the viability and fertility of mosaic founders, as illustrated by the failure so far of the Rapgef5 project to yield a conditional allele. Finally, some failures were due to unwanted single nucleotide changes (examples in Additional file 13: Figure S12), most likely picked up during the lssDNA generation process. It is our prediction that some of these failures, but not all, will be reversed by further repeat attempts.
In summary, our data support the efficiency of lssDNA donors, although not all models were achieved. Interestingly, the process also produced exon deletion alleles as a by-product of the generation of cKOs, allowing rapid access to null alleles.
Mutant validation
Mutant validation was performed by PCR, employing genomic primers external to the donor sequence and systematic sequencing of the integration, as well as copy counting of the donor sequence.
Validation of mutated allele
We and others have previously described that imperfect alleles can be generated when using ssODNs as donors ("illegitimate repairs" [7], "KI + indels" [9]). Further, rearranged alleles have also been detected when no donor is included in the mutagenesis strategy [7,12,26].
Here we show that rearrangements also occur in the presence of lssDNA donors (Table 1 and example in Additional file 14: Figure S13). As such, the use of lssDNA does not lessen the requirement for allele validation by full sequencing, as rearrangements (including indels and partial integrations) may occur during the double-strand break repair event. In addition, the synthesis of lssDNA itself can be a source of errors [27], potentially introducing unwanted sequence changes early in the process that will require monitoring by full sequencing of the allele. The use of new high-fidelity enzymes (including a replacement of standard reverse transcriptase) might contribute to reducing the frequency of sequence errors in the edited alleles. Inclusion in the donor of sequences of known primers that are specific and efficient in PCR or restriction enzyme sites can simplify screening for mutated loci but does not replace QC by sequencing. Alternative methods for validation of new alleles, involving string sequencing for example, could further facilitate QC.
Additional integrations
Our results show that additional donor integrations are common (five out of six projects; this was also found in [18]). Even when there is no evidence of such an event in the founder generation, it is essential to check for their presence at the F 1 stage, as there is a clonal event at the point of GLT. Furthermore, if the mutant-specific genotyping assay used in subsequent generations is internal to the donor sequence, it will not discriminate between on-target and unidentified additional integrations. Copy counting can be performed by quantitative PCR (qPCR) or most easily by ddPCR, employing an assay centred on the donor that will recognize both WT and mutant alleles (universal) or a mutation-specific assay in correlation with sequencing of a locus-specific amplicon (amplified with primers external to the donor). The locations of random integrations were not identified, so it is unclear whether they were associated with CRISPR/Cas9 off-target activity.
Standards for quality control
We found examples of sequence changes, indels, locus rearrangements or random insertion of lssDNA donors in all projects attempted, showing that mutagenesis artefacts are very common. Full model validation at the F 1 stage is therefore essential, and it constitutes a labor-intensive exercise involving the sequencing of large or several overlapping amplicons and copy counting of donor insertions. The need for extensive model validation is not specific to the use of lssDNA in genome editing [9,13,20], but it is not alleviated by the use of this new donor type.
Publications reporting proof-of-principle cases for using the CRISPR/Cas9 system for genome engineering focus on the novelty of methods and often do not include the intricacies of QC of mutants [2,3,11]. However, thorough validation of new models is essential to the reproducibility of research employing mutated laboratory animals. This can be a complex exercise, as genome editing can yield many unpredicted events, both on-target and at other loci. There are profound consequences to using mouse lines harbouring additional mutations in ongoing research, including misleading results, erroneous interpretations of studies and avoidable animal wastage. Therefore, the dissemination of good practice for QC is just as essential as the distribution of efficient protocols for mutagenesis. Also, extensive validation of mouse mutants is indispensable to providing complete documentation of animals used in research [14].
Conclusion
Prior to the use of lssDNA, the reliable generation of complex alleles and some point mutations remote from efficacious sgRNA target sequences was out of reach. Here, we have shown the application of lssDNA to both the generation of cKO alleles and challenging point mutations. However, the technique can also produce a variety of artefacts: point mutations, indels, locus rearrangements and additional donor integrations. A comprehensive mutant validation strategy involving sequencing of the locus and copy counting of the donor is therefore essential. The utilization of lssDNA as a donor sequence lifts the barrier to the generation of complex alleles and shifts the challenge of the exercise from the production of founders bearing these new alleles towards the validation of these new mutants.
Methods
sgRNAs
Guide sequence selection was carried out using the following online tools: CRISPOR [28] and Wellcome Trust Sanger Institute (WTSI) Genome Editing (WGE) [29]. sgRNA sequences were selected with as few predicted off-target events as possible, particularly on the same chromosome as the intended modification. sgRNAs used in this study are shown in Additional file 1: Table S1. sgRNAs were synthesized directly from gBlock® (IDT, Skokie, IL, USA) templates containing the T7 promoter using the HiScribe™ T7 High Yield RNA Synthesis Kit (New England BioLabs®, Ipswich, MA, USA) following manufacturer's instructions. RNAs were purified using the MEGAclear Kit (Ambion). RNA quality was assessed using a NanoDrop spectrophotometer (ThermoScientific) and by electrophoresis on 2% agarose gel containing ethidium bromide (Fisher Scientific). A Guide-it™ assay was performed as per manufacturer instructions (Takara, Kyoto, Japan).
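As a rough, hypothetical illustration of the constraint at play in guide selection (an active, specific cut site as close as possible to the target base), the following Python sketch scans the forward strand for SpCas9 NGG PAM sites and ranks candidate cut sites by distance to a target coordinate. It is not the CRISPOR or WGE scoring used in the study, and all names and the placeholder sequence are ours.

```python
import re

def candidate_cut_sites(seq: str, target_pos: int):
    """Find NGG PAMs on the forward strand and report, for each,
    the approximate blunt cut site (~3 bp 5' of the PAM) and its
    distance to the targeted nucleotide."""
    hits = []
    for m in re.finditer(r"(?=[ACGT]GG)", seq):  # overlapping PAM matches
        pam_start = m.start()
        cut_site = pam_start - 3          # Cas9 cuts ~3 bp upstream of the PAM
        if cut_site >= 20:                # need room for a full 20-nt protospacer
            hits.append((cut_site, abs(cut_site - target_pos)))
    return sorted(hits, key=lambda h: h[1])

seq = "ACGGT" * 24                        # placeholder 120-bp sequence
for cut, dist in candidate_cut_sites(seq, target_pos=60)[:5]:
    print(f"cut at {cut}, {dist} nt from target")
```

A real design would additionally filter candidates for genome-wide specificity and confirm activity in vitro, as described above.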
Templates for lssDNA synthesis
Templates for lssDNA synthesis were either assembled by cloning in a plasmid or, when possible, were obtained from IDT as a single gBlock®. Additional file 1: Table S1 details the generation of the lssDNA employed in this study.
Donor sequences
Donor ssODNs (desalted grade) were obtained from IDT. Donor lssDNAs were initially generated following a method adapted from [10]. Briefly, templates for IVT (donor sequence flanked by the T7 promoter) were obtained as a gBlock® (IDT) or cloned in a plasmid that was subsequently linearized. Typically, 150 ng of double-stranded gBlock® template or 2 μg of plasmid template was transcribed using the HiScribe T7 High Yield RNA Synthesis Kit (New England BioLabs®). At the end of the reaction, DNase I was added to remove the DNA template. RNA was purified employing the MEGAclear Transcription Clean-Up Kit (Ambion). Single-stranded DNA was synthesized by reverse transcription from 20 μg of RNA template employing SuperScript III Reverse Transcriptase (Invitrogen), treated with RNAse H (Ambion) and purified employing the QIAquick Gel Extraction Kit (Qiagen, Hilden, Germany). Donor concentration was quantified using the NanoDrop (Thermo Scientific), and the integrity was checked on 1.5% agarose gel containing ethidium bromide (Fisher Scientific).
Mice
All animals were housed and maintained in the Mary Lyon Centre, MRC Harwell Institute under specific-pathogen-free (SPF) conditions, in individually ventilated cages adhering to environmental conditions as outlined in the Home Office Code of Practice. Mice were euthanized by Home Office Schedule 1 methods. Colonies established during the course of this study are available for distribution and are detailed in Additional file 1: Table S6.
Pronuclear microinjection of zygotes
All embryos were obtained by superovulation. Pronuclear microinjection was performed as per Gardiner and Teboul [30], employing a FemtoJet (Eppendorf AG, Hamburg, Germany) and C57BL/6NTac embryos for all projects shown here, apart from Rims1, which was performed with C57BL/6J embryos. Specifically, the injection pressure (P i ) was set between 100 and 700 hPa, depending on the needle opening; the injection time (T i ) was set at 0.5 s and the compensation pressure (P c ) was set at 10 hPa. Mixes were centrifuged at high speed for a further minute prior to microinjection. Injected embryos were re-implanted in CD-1 pseudopregnant females. Host females were allowed to litter and rear F 0 s.
Breeding for germline transmission
F 0 animals where the presence of a desired allele was detected were mated to WT isogenic animals to obtain F 1 animals to assess the GLT of the allele of interest and permit the definitive validation of its integrity.
Genomic DNA extraction from ear biopsies
Genomic DNA from F 0 and F 1 animals was extracted from ear clip biopsies using the DNA Extract All Reagents Kit (Applied Biosystems) according to the manufacturer's instructions. The crude lysate was stored at − 20°C.
PCR amplification and sequencing
New primer pairs were set up in a PCR reaction containing 500 ng genomic DNA extracted from a WT mouse, 1× Expand Long Range Buffer with 12.5 mM MgCl 2 (Roche), 500 μM PCR Nucleotide Mix (dATP, dCTP, dGTP, dTTP at 10 mM, Roche), 0.3 μM of each primer, 3% dimethyl sulfoxide (DMSO) and 1.8 U Expand Long Range Enzyme mix (Roche) in a total volume of 25 μl. Using a T100 thermocycler (Bio-Rad, Hercules, CA, USA), PCRs were subjected to the following thermal conditions: 92°C for 2 min followed by 40 cycles of 92°C for 10 s, a gradient of annealing temperatures between 55 and 65°C for 15 s and 68°C for 1 min/kilobase and a final elongation step for 10 min at 68°C. The PCR outcome was analyzed on a 1.5-2% agarose gel, depending on the amplicon size, and the highest efficient annealing temperature was identified for the primer pair. If no temperature allowed for an efficient and/or specific PCR amplification, the assay was repeated with an increased DMSO concentration (up to 12%). Using optimized conditions as defined above, PCRs for each project were run and an aliquot analyzed on agarose gel. The PCR products were purified employing a QIAquick Gel Extraction Kit (Qiagen) and sent for Sanger sequencing (Source Bioscience, Oxford, UK). Genotyping primers were chosen to be at least 200 bp away from the extremity of donors, depending on available sequences for design.
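Because the elongation step scales as 1 min per kilobase, the cycling program can be generated directly from the amplicon size. The following Python sketch is an illustrative helper only; the function name and defaults are ours, not part of the published protocol.

```python
import math

def long_range_pcr_program(amplicon_bp: int, anneal_c: float = 60.0):
    """Return the thermal steps described in the text:
    92C/2 min; 40 cycles of (92C/10 s, anneal/15 s, 68C for 1 min per kb);
    final elongation 68C/10 min. Elongation time is rounded up per kb."""
    elong_s = 60 * math.ceil(amplicon_bp / 1000)
    return [("denature", 92, 120)] \
        + 40 * [("melt", 92, 10), ("anneal", anneal_c, 15), ("extend", 68, elong_s)] \
        + [("final", 68, 600)]

# e.g. the 1594-bp Ikzf2 amplicon gets a 2-min extension per cycle
print(long_range_pcr_program(1594)[:4])
```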
Sequencing data analysis
Sequencing data were analyzed differently depending on whether they were obtained from F 0 s or F 1 s (as per [13]). At the F 0 stage, animals were screened for evidence of the expected change, i.e. the presence of loxP sites for conditional allele projects or the presence of the expected base change for the Gckr P446L point mutation project. F 0 animals should be considered mosaic animals. All F 1 animals are heterozygous containing one WT allele and one allele to be determined, as they are obtained from mating F 0 animals with desired gene edits to WT animals. The F 1 stage enables definitive characterization of the new mutant.
Sub-cloning of PCR products
PCR products amplified from F 0 DNA showing complex sequencing traces were sub-cloned using a Zero-Blunt PCR Cloning Kit (Invitrogen). The appropriate number of clones (usually 12-24) per founder were picked and grown overnight in accordance with the complexity of the traces observed prior to sub-cloning. Plasmids were isolated using a QIAprep Miniprep Kit (Qiagen) and analyzed by Sanger sequencing (Source Bioscience) using the M13R oligonucleotide or gene-specific primers.
ddPCR
Copy number variation experiments were performed as duplex reactions, where the sequence employed as a donor was amplified using a fluorescein amidite (FAM)-labelled assay (sourced from Biosearch Technologies, Petaluma, CA, USA), in parallel with a VIC-labelled reference gene assay (Dot1l, sourced from ThermoFisher) set at two copies (CNV2), on the Bio-Rad QX200 ddPCR System (Bio-Rad) as per Codner and colleagues [31]. Reaction mixes (22 μl) contained 2 μl crude DNA lysate or 50 ng of phenol/chloroform purified genomic DNA, 1× ddPCR Supermix for probes (Bio-Rad), 225 nM of each primer (two primers per assay) and 50 nM of each probe (one VIC-labelled probe for the reference gene assay and one FAM-labelled for the ssODN sequence assay). These reaction mixes were loaded either into DG8 cartridges together with 70 μl droplet oil per sample, with the droplets generated using the QX100 Droplet Generator, or loaded in plate format into the Bio-Rad QX200 AutoDG, with the droplets generated as per the manufacturer's instructions. Post droplet generation, the oil/reagent emulsion was transferred to a 96-well semi-skirted plate (Eppendorf), and the samples were amplified on a Bio-Rad C1000 Touch thermocycler (95°C for 10 min, followed by 40 cycles of 94°C for 30 s and 58°C for 60 s, with a final elongation step of 98°C for 10 min, where all temperature ramping was set to 2.5°C/s). The plate containing the droplet amplicons was subsequently loaded into the QX200 Droplet Reader (Bio-Rad). Standard reagents and consumables supplied by Bio-Rad were used, including cartridges and gaskets, droplet generation oil and droplet reader oil. Copy numbers were assessed using the QuantaSoft software using at least 10,000 accepted droplets per sample. The copy numbers were calculated by applying Poisson statistics to the fraction of end-point positive reactions, and the 95% confidence interval of this measurement is shown.
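For readers unfamiliar with the Poisson step mentioned above, the following Python sketch shows the standard calculation: the fraction of positive droplets is converted into a mean template occupancy per droplet, and the target concentration is scaled to the two-copy reference. The droplet counts and function names are illustrative, not data from the study.

```python
import math

def copies_per_droplet(positive: int, total: int) -> float:
    """Poisson correction: a droplet may contain more than one template,
    so the mean occupancy is -ln(fraction of negative droplets)."""
    return -math.log(1 - positive / total)

def copy_number(target_pos: int, ref_pos: int, total: int,
                ref_copies: float = 2.0) -> float:
    """Copy number of the target assay relative to a reference assay
    known to be present at `ref_copies` per genome (CNV2 here)."""
    return ref_copies * copies_per_droplet(target_pos, total) \
                      / copies_per_droplet(ref_pos, total)

# e.g. 2,600 FAM-positive and 2,500 VIC-positive droplets of 15,000 accepted
print(round(copy_number(2600, 2500, 15000), 2))  # ~2.09 copies
```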
Additional files
Additional file 1: Table S1. Sequences of reagents used in the study (oligonucleotides and lssDNA donors, primers and TaqMan assays). LoxP sites (all conditional projects) and point mutations (Gckr and Rims1 projects) are underlined; sequences added for diagnostics (all conditional projects except Syt7) and silent mutations (Gckr and Rims1 projects) are shown in italics. For the plasmids, sequences flanked by and including homology arms are shown. The ddPCR reference copy counting assay is labelled with VIC; all other ddPCR copy counting assays are labelled with fluorescein amidite (FAM). UNIV ddPCR assays recognize both WT and engineered alleles; MUT ddPCR assays recognize the engineered allele only. Table S2. Production of founders for conditional alleles: numbers of embryos and animals involved in mutagenesis attempts employing the injection of CRISPR/Cas9 reagents and lssDNA donors. Table S3. Generation of conditional alleles employing different donor types (oligonucleotide, plasmid or lssDNA donors), with the analysis of the resulting founders summarized. Table S4. Generation of a Rims1 R655H point mutation (further screening data in Additional file 18: Figure S16 and Additional file 19: Figure S17). Table S5. Analysis of the Rims1 R655H project: screening of five positive F 0 animals and characterization of the F 1 animals obtained from mating these F 0 animals to WT mice. Table S6. Nomenclature of new mouse lines established in the course of the study. (XLS 81 kb)
Additional file 2: Figure S1. Screening by Sanger sequencing of animals for the generation of a Syt7 conditional allele: sequencing traces from founders Syt7-4 (a) and Syt7-8 (b) reveal the integration of two loxP sites in both animals. Syt7-8 appears homozygous (a single trace detected), while Syt7-4 contains at least two different alleles. The PCR products from which the traces were derived are shown in Fig. 1. (PNG 377 kb)
Additional file 3: Figure S2. Additional animal analysis information. (DOCX 19408 kb)
Additional file 4: Figure S3. Designs of reagents employed for the generation of conditional alleles. Red triangles mark loxP sites. RNA is transcribed in vitro from a double-stranded DNA template containing the T7 promoter and the donor sequence; the resulting RNA is reverse-transcribed employing a primer specific to the donor sequence. Additional sequences (orange boxes, marked as universal) were added to facilitate initial screening of animals employing restriction enzyme sites and/or validated primer pairs, with the exception of the Syt7 conditional allele (described in Fig. 1). (PNG 91 kb)
Additional file 5: Figure S4. Analysis of the Ikzf2 project: PCR amplification and sequencing of the region of interest from F 0 animals and from the offspring of founder Ikzf2-2, with the outcome of PCR, sequencing and copy counting summarized for each individual. One deletion allele was not picked up by the Ikzf2 PCR, likely encompassing at least one primer sequence; another allele is detailed in Additional file 14: Figure S13. Sequencing data showing a correct conditional allele are shown in Additional file 3: Figure S2d.
Additional file 10: Figure S9. Analysis of the 6430573F11Rik project: PCR amplification and sequencing of the region of interest from F 0 animals and from the offspring of founders 6430573F11Rik-11 and 6430573F11Rik-28 (both mated for cKO GLT), with the outcome summarized for each individual. For 6430573F11Rik-28 and its offspring, only WT sequence was found at the locus and no loxP was detected in the 6430573F11Rik amplicon, indicating random donor insertion (the 6430573F11Rik-28 sequence trace is in Additional file 3: Figure S2q). Sequencing of the deletion allele in founder 6430573F11Rik-6, a summary of the analysis of F 1 animals derived from this founder and the transmitted deletion allele are shown in Additional file 3: Figure S2r.
"year": 2018,
"sha1": "1f929159476982ebb728befee4680422e1b9075e",
"oa_license": "CCBY",
"oa_url": "https://bmcbiol.biomedcentral.com/track/pdf/10.1186/s12915-018-0530-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f929159476982ebb728befee4680422e1b9075e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Compactness of the $\bar{\partial}$-Neumann problem on domains with bounded intrinsic geometry
By considering intrinsic geometric conditions, we introduce a new class of domains in complex Euclidean space. This class is invariant under biholomorphism and includes strongly pseudoconvex domains, finite type domains in dimension two, convex domains, $\mathbb{C}$-convex domains, and homogeneous domains. For this class of domains, we show that compactness of the $\bar{\partial}$-Neumann operator on $(0,q)$-forms is equivalent to the boundary not containing any $q$-dimensional analytic varieties (assuming only that the boundary is a topological submanifold). We also prove, for this class of domains, that the Bergman metric is equivalent to the Kobayashi metric and that the pluricomplex Green function satisfies certain local estimates in terms of the Bergman metric.
Introduction
This paper is motivated by work of Fu-Straube [FS98] who showed that for convex domains, compactness of the∂-Neumann operator on (0, q)-forms is equivalent to the boundary containing no q-dimensional analytic varieties. Our goal is to define a class of domains which contains the bounded convex domains, is invariant under biholomorphisms, and where we can prove the same result about compactness of the∂-Neumann operator. We define such a class as follows.
Definition 1.1. A domain Ω ⊂ C^d has bounded intrinsic geometry if there exists a complete Kähler metric g on Ω such that
(b.1) the metric g has bounded sectional curvature and positive injectivity radius, and
(b.2) there exists a C^2 function λ : Ω → R such that the Levi form of λ is uniformly bi-Lipschitz to g and $\|\partial\lambda\|_g$ is bounded on Ω.
The above conditions on the Kähler metric g are intrinsic and hence having bounded intrinsic geometry is invariant under biholomorphism. Property (b.2) is motivated by Gromov's definition of Kähler hyperbolicity [Gro91], McNeal's results on plurisubharmonic functions with self bounded complex gradient [McN02b], and vanishing results for L 2 cohomology [DF83,Don94,Don97,McN02a].
Many domains have bounded intrinsic geometry, including (1) strongly pseudoconvex domains, (2) finite type domains in C 2 , (3) convex domains or more generally C-convex domains which are Kobayashi hyperbolic (with no boundary regularity assumptions), (4) simply connected domains which have a complete Kähler metric with pinched negative sectional curvature, (5) homogeneous domains, and (6) the Teichmüller space of hyperbolic surfaces of genus g with n punctures. Further, by definition, any domain biholomorphic to one of the domains listed above also has bounded intrinsic geometry. In Section 2, we will describe these examples in more detail and give references.
A domain Ω ⊂ C^d has several standard invariant Kähler (pseudo-)metrics. For instance, if B_Ω denotes the Bergman kernel of a domain Ω ⊂ C^d, then the Bergman (pseudo-)metric is defined by
$$g_{\Omega,z}(v,v) = \sum_{i,j=1}^{d} \frac{\partial^2 \log B_\Omega(z,z)}{\partial z_i \partial \bar z_j}\, v_i \bar v_j.$$
The Kähler metric in Definition 1.1 does not a priori have to be one of the standard invariant Kähler metrics, but we will prove that a domain has bounded intrinsic geometry if and only if the Bergman metric satisfies the conditions in Definition 1.1.
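For orientation, a standard computation (not part of this paper) evaluates this definition on the unit disk $\mathbb{D}$, where the Bergman kernel is explicit, $B_{\mathbb{D}}(z,z) = \frac{1}{\pi(1-|z|^2)^2}$:
$$\log B_{\mathbb{D}}(z,z) = -\log\pi - 2\log\bigl(1-|z|^2\bigr), \qquad g_{\mathbb{D},z} = \frac{\partial^2}{\partial z\,\partial\bar z}\log B_{\mathbb{D}}(z,z) = \frac{2}{\bigl(1-|z|^2\bigr)^2},$$
so the Bergman metric of the disk is a constant multiple of the Poincaré metric.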
Theorem 1.2 (see Theorem 10.1). If Ω ⊂ C^d is a domain, then the following are equivalent: (1) Ω has bounded intrinsic geometry; (2) the Bergman metric g_Ω satisfies Definition 1.1. Moreover, in this case $\sup_{z \in \Omega} \|\nabla^m R\|_{g_\Omega} < \infty$ for all m ≥ 0, where R is the curvature tensor of g_Ω.
The "moreover" part says that the Bergman metric on a domain with bounded intrinsic geometry has bounded geometry in the standard Riemannian sense.
As a corollary to Theorem 1.2 and a result of Bremermann [Bre55], we see that every domain with bounded intrinsic geometry is pseudoconvex.
Corollary 1.3. A domain with bounded intrinsic geometry is pseudoconvex.
Remark 1.4. We will actually establish that a domain with bounded intrinsic geometry is pseudoconvex before proving Theorem 1.2. In particular, in Theorem 7.3, we will show that the Kobayashi distance on such a domain is Cauchy complete and hence by a result of Wu [Wu67], domains with bounded intrinsic geometry must be pseudoconvex.
In the context of Definition 1.1, we should mention the following well-known properties of the Bergman metric. If bounded pseudoconvex domain has Lipschitz boundary, then the Bergman metric is complete [Che99,Her99]. Also, the holomorphic sectional curvature of the Bergman metric is always bounded from above by 2 [Ber48,Kob59] and the sectional curvatures are determined by the holomorphic sectional curvatures, so having bounded sectional curvature is equivalent to having holomorphic sectional curvature bounded below.
For domains with bounded intrinsic geometry, we will characterize compactness of the $\bar{\partial}$-Neumann operator in terms of the growth rate of the Bergman metric. In particular, given a d-by-d complex matrix A, let σ_1(A) ≥ σ_2(A) ≥ ⋯ ≥ σ_d(A) denote the singular values of A. We will then prove the following.
Theorem 1.5 (see Theorem 11.1). Suppose Ω ⊂ C^d is a bounded domain with bounded intrinsic geometry. Then the following are equivalent:
(1) N_q is compact;
(2) $\lim_{z \to \partial\Omega} \sigma_{d-q+1}(g_{\Omega,z}) = \infty$, where g_{Ω,z} is identified with the d-by-d matrix $\bigl(g_{\Omega,z}(\tfrac{\partial}{\partial z_i}, \tfrac{\partial}{\partial \bar z_j})\bigr)$.
If, in addition, ∂Ω is C^0, then the above conditions are equivalent to:
(3) ∂Ω does not contain any q-dimensional analytic varieties.
Remark 1.6. To be precise: (1) We say that ∂Ω is C r (respectively C r,α ) if for every point x ∈ ∂Ω there exists a neighborhood U of x and there exists a linear change of coordinates which makes U ∩ ∂Ω the graph of a C r (respectively C r,α ) function.
(2) We say that ∂Ω contains a q-dimensional analytic variety if there exists a holomorphic map ϕ : D q → ∂Ω where ϕ ′ (0) has rank q.
Convex domains always have C 0,1 boundary and, as mentioned above, have bounded intrinsic geometry. In this special case, (2) ⇔ (3) follows from estimates of Frankel [Fra91] while (1) ⇔ (3) was established by Fu-Straube [FS98]. We also note that by earlier results of Henkin-Iordan [HI97] and Sibony [Sib87], the∂-Neumann operator is compact for every 1 ≤ q ≤ d on a B-regular domain, a class of domains which includes bounded convex domains whose boundaries do not contain any 1-dimensional analytic varieties.
As an example, Theorem 1.5 implies the following extension of Fu-Straube's result.
Corollary 1.7. Suppose Ω ⊂ C d is a bounded domain with C 0 boundary. If Ω is biholomorphic to a C-convex domain (e.g. a convex domain), then the following are equivalent: (1) N q is compact, (2) ∂Ω contains no q-dimensional analytic varieties.
1.2. Geometric properties. We will also establish some geometric properties of domains with bounded intrinsic geometry. Our main result in this direction is that the Bergman metric and Kobayashi metric are equivalent.
Theorem 1.8 (see Theorems 7.3 and 10.1). If Ω ⊂ C^d is a domain with bounded intrinsic geometry and k_Ω is the Kobayashi metric on Ω, then there exists C > 1 such that
$$\frac{1}{C}\, k_\Omega(z; v) \le \sqrt{g_{\Omega,z}(v,v)} \le C\, k_\Omega(z; v)$$
for all z ∈ Ω and v ∈ C^d.
Remark 1.9. This equivalence of metrics is a key part of the proof that (2) ⇒ (3) in Theorem 1.5.
We will also establish the following uniform local estimate for the pluricomplex Green function in terms of the Bergman distance.
Theorem 1.10 (see Theorem 6.4). Suppose Ω ⊂ C^d is a domain with bounded intrinsic geometry, dist_Ω is the Bergman distance on Ω, and G_Ω is the pluricomplex Green function on Ω. Then there exist C, τ > 0 for which the local estimates of Theorem 6.4 hold.
1.3. Potentials for the Bergman metric. Theorem 1.2 says there is no loss of generality in considering only the Bergman metric in Definition 1.1, and so it seems natural to wonder whether one can simply use the standard potential for the Bergman metric in Property (b.2). Unfortunately, as the next proposition shows, this is not the case.
Proposition 1.11 (see Proposition 12.2). There exists a bounded domain Ω biholomorphic to D × D for which $\|\partial \log B_\Omega(z,z)\|_{g_\Omega}$ is unbounded on Ω.
It is easy to verify that D × D, and hence also the Ω in the Proposition, has bounded intrinsic geometry. So the above Proposition justifies the more involved formulation of Property (b.2).
1.4. Motivation for Definition 1.1. The definition of bounded intrinsic geometry is partially motivated by results of Catlin [Cat89] for finite type domains in C^2 and McNeal [McN94, McN92] for finite type convex domains. A central component of their work is the construction of certain embedded polydisks and associated plurisubharmonic functions.
In particular, given such a domain Ω they show, essentially 1, that for every ζ ∈ Ω there exist an affine embedding Φ_ζ : D^d → Ω with Φ_ζ(0) = ζ and an associated plurisubharmonic function φ_ζ. The plurisubharmonic functions φ_ζ are then used to reduce global problems on Ω to local problems on Φ_ζ(D^d).
1 In the d = 2 case, the affine maps are defined in terms of holomorphic coordinates on C d which depend on ζ, see [Cat89, Section 1].
In Section 5 we will show that domains with bounded intrinsic geometry have similar embeddings and plurisubharmonic functions.
Theorem 1.12 (see Theorem 5.1). Suppose Ω ⊂ C^d is a bounded pseudoconvex domain and g is a complete Kähler metric on Ω.
(1) If g has Property (b.1), then there exists A_1 > 1 such that: for every ζ ∈ Ω there exists a holomorphic embedding Φ_ζ : B → Ω with Φ_ζ(0) = ζ and
$$\frac{1}{A_1}\, g_{\mathrm{Euc}} \le \Phi_\zeta^* g \le A_1\, g_{\mathrm{Euc}}.$$
(2) If g has Property (b.2), dist_g is the distance induced by g, and r > 0, then there exists A_2 = A_2(r) > 0 such that: for every ζ ∈ Ω there exists a plurisubharmonic function φ_ζ : Ω → [−A_2, 0] with L(φ_ζ) ≥ g on {z ∈ Ω : dist_g(z, ζ) < r}.
The existence of the embeddings in part (1) will follow from classical work of Shi [Shi89] concerning regularization of Riemannian metrics and recent work of Wu-Yau [WY20] concerning Kähler manifolds with bounded geometry. The plurisubharmonic functions in part (2) will be constructed in a direct way from the potential in Property (b.2). As in the work of Catlin and McNeal, Theorem 1.12 will allow us to reduce global problems to local ones.
Acknowledgements. I would like to thank Nessim Sibony and Sai Kee Yeung for a number of helpful comments. I would also like to thank Xieping Wang for pointing out a mistake in an earlier version of this paper. This material is based upon work supported by the National Science Foundation under grants DMS-1942302 and DMS-1904099.
Examples
In this section we give precise references for the examples of domains with bounded intrinsic geometry listed in the introduction.
2.1. Finite type domains in dimension two. Suppose Ω ⊂ C^2 is a smoothly bounded pseudoconvex domain of finite type. Since Ω has smooth boundary, the Bergman metric is complete [Che99, Her99]. McNeal [McN89] proved that the Bergman metric has bounded holomorphic sectional curvature and hence bounded sectional curvature (the holomorphic sectional curvatures determine the sectional curvatures). Catlin established precise estimates for the Bergman metric near the boundary [Cat89]. Using these estimates, the fact that the sectional curvature is bounded, and Proposition 2.1 in [LSY05], one can show that the injectivity radius of the Bergman metric is positive. Thus Property (b.1) holds. Donnelly [Don97], using Catlin's estimates, proved that $\|\partial \log B_\Omega(z,z)\|_{g_\Omega}$ is uniformly bounded and thus Property (b.2) holds.
Theorem 2.1. Suppose Ω ⊂ C d is a simply connected domain and there exists a complete Kähler metric g on Ω with −a 2 ≤ sec(g) ≤ −b 2 < 0 for some constants a, b > 0. Then Ω has bounded intrinsic geometry.
Proof. Since Ω is simply connected and g is negatively curved, the injectivity radius of g is infinite by the Cartan-Hadamard theorem. So g satisfies Property (b.1).
We will establish Property (b.2) using comparison theorems from [GW79]. Let dist_g be the distance induced by g. Fix o ∈ Ω. Since g is negatively curved and Ω is simply connected, dist_g(·, o) is C^∞ on Ω \ {o} (this also follows from the Cartan-Hadamard theorem). Then let ρ be a smooth real valued function on Ω such that ρ(z) = dist_g(z, o) when dist_g(z, o) ≥ 1. By the Hessian comparison theorem there exists C > 1 such that the Levi form of ρ satisfies (1/C) g ≤ L(ρ) ≤ C g whenever dist_g(z, o) ≥ 1; since $\|\partial\rho\|_g$ is also bounded, Property (b.2) holds.
Definition 2.2. A domain Ω ⊂ C^d is holomorphically homogeneous regular (HHR) if there exists s > 0 such that for every z ∈ Ω there exists an injective holomorphic map f : Ω → B with f(z) = 0 and s B ⊂ f(Ω), where B ⊂ C^d is the unit ball.
Remark 2.3. In the literature, HHR domains are sometimes called domains with the uniform squeezing property, see for instance [Yeu09].
A general result of Yeung implies that Property (b.1) holds on any HHR-domain.
Theorem 2.4 ([Yeu09]). If Ω ⊂ C^d is an HHR-domain, then the Bergman metric g_Ω on Ω is complete, has bounded sectional curvature, and has positive injectivity radius.
For certain classes of HHR-domains it is possible to verify Property (b.2).
Proposition 2.5. Suppose Ω is a domain biholomorphic to either a (1) a strongly pseudoconvex domain, (2) a C-convex domain (e.g. a convex domain) which is Kobayashi hyperbolic, (3) a bounded homogeneous domain, or (4) the Teichmüller space of hyperbolic surfaces with genus g and n punctures, then the Bergman metric g Ω on Ω has Property (b.2) and hence Ω has bounded intrinsic geometry.
Proof. For strongly pseudoconvex domains, it is possible to show that $\|\partial \log B_\Omega(z,z)\|_{g_\Omega}$ is uniformly bounded, see for instance [Don94, Proposition 3.4]. We will consider the C-convex case in Proposition 4.12 below. A stronger form of Property (b.2) for the Bergman metric on a homogeneous domain was established by Kai-Ohsawa [KO07].
Preliminaries
3.1. Notations. In this section we fix any possibly ambiguous notation.
The Bergman metric, kernel, and distance: We will use the following notations.
(1) Let B_Ω denote the Bergman kernel on Ω, (2) let g_Ω denote the Bergman metric on Ω, (3) let dist_Ω denote the distance induced by the Bergman metric, and (4) for ζ ∈ Ω and r ≥ 0 let B_Ω(ζ; r) = {z ∈ Ω : dist_Ω(z, ζ) < r} denote the open ball of radius r centered at ζ in the Bergman distance.
Approximate inequalities: Given functions f, h : X → (0, ∞), we write f ≲ h if there exists C > 0 such that f(x) ≤ C h(x) for all x ∈ X, and f ≍ h if both f ≲ h and h ≲ f. Often the set X will be a set of parameters (e.g. m ∈ N).
The Levi form: Given a domain Ω ⊂ C^d and a C^2-smooth real valued function f : Ω → R, the Levi form of f is
$$L(f) = \sum_{i,j=1}^{d} \frac{\partial^2 f}{\partial z_i \partial \bar z_j}\, dz_i \otimes d\bar z_j.$$
Notice that f is plurisubharmonic if L(f) ≥ 0 and, by definition,
$$L(f)(v, v) = \sum_{i,j=1}^{d} \frac{\partial^2 f}{\partial z_i \partial \bar z_j}\, v_i \bar v_j.$$
Norms and inner products on (p, q)-forms: Given a (p, q)-form $\alpha = \sum_{I,J} \alpha_{I,J}\, dz_I \wedge d\bar z_J$ on a domain Ω, we will let ‖α‖ denote the function
$$\|\alpha\|^2 = \sum_{I,J} |\alpha_{I,J}|^2.$$
Similarly, we will let ⟨·, ·⟩ denote the pointwise inner product on (p, q)-forms, that is
$$\langle \alpha, \beta \rangle = \sum_{I,J} \alpha_{I,J} \overline{\beta_{I,J}},$$
so ‖α‖ = √⟨α, α⟩. Finally, we will use ‖·‖_Ω to denote the norm on $L^2_{(p,q)}(\Omega)$.
3.2. A sufficient condition for compactness. In this section we recall McNeal's sufficient condition for compactness.
Remark 3.4. In the second part of the definition, we are identifying L(λ) with the d-by-d matrix $\bigl(\frac{\partial^2 \lambda}{\partial z_i \partial \bar z_j}\bigr)$, and σ_j(L(λ)) is the j-th largest singular value of this matrix.
Theorem 3.5 (McNeal [McN02b, Corollary 4.2]).
If Ω ⊂ C d is a bounded pseudoconvex domain satisfying condition ( P q ), then the operator N q is compact.
Condition $(\widetilde{P}_q)$ is a generalization of Catlin's condition $(P_q)$, where the estimate $\|\partial\lambda\|_{L(\lambda)} \le 1$ is replaced by $|\lambda| \le 1$; see [Cat84, McN02b] for more detail. We also refer the reader to [Sib87] for additional details about domains satisfying condition $(P_1)$.
3.3. Solutions to∂. We will use the following existence theorem for solutions tō ∂.
Theorem 3.6. Suppose Ω ⊂ C^d is a bounded pseudoconvex domain, λ_1 : Ω → R has self bounded complex gradient, and λ_2 : Ω → R is plurisubharmonic. Then every $\bar\partial$-closed form admits a solution of $\bar\partial u = \alpha$ satisfying a weighted L^2 estimate, assuming the right hand side is finite.
Definition 3.7. A d-dimensional Kähler manifold (M, g) is said to have bounded geometry if there exist constants r_2 > r_1 > 0, C > 1, and a sequence (A_q)_{q∈N} of positive numbers such that: for every point m ∈ M there is a domain U ⊂ C^d and a holomorphic embedding ψ : U → M satisfying the following properties:
(1) B_{C^d}(0; r_1) ⊂ U ⊂ B_{C^d}(0; r_2) and ψ(0) = m,
(2) (1/C) g_Euc ≤ ψ*g ≤ C g_Euc on U, and
(3) for every q ∈ N and all multi-indices µ, ν with |µ| + |ν| ≤ q,
$$\left| \frac{\partial^{|\mu|+|\nu|} (\psi^*g)_{ij}}{\partial z^\mu \partial \bar z^\nu} \right| \le A_q \text{ on } U,$$
where (ψ*g)_{ij} is the component of ψ*g in terms of the canonical coordinates z = (z_1, . . . , z_d) on C^d and µ, ν are multiple indices with |µ| = µ_1 + · · · + µ_d.
We will use the following theorem of D. Wu and S.-T. Yau.
Theorem 3.8 ([WY20]). A complete Kähler manifold (M, g) has bounded geometry if and only if (M, g) has positive injectivity radius and for every integer q ≥ 0 there exists a constant C_q > 0 such that the curvature tensor R of g satisfies $\|\nabla^q R\|_g \le C_q$. Moreover, one can choose the constants r_1, r_2, C, (A_q)_{q≥0} in Definition 3.7 to depend only on {C_q}_{q≥0} and d.
The (complex) convex case
The primary purpose of this section is to verify that the Bergman metric on a convex domain or more generally a C-convex domain satisfies Definition 1.1. By Theorem 2.4, it is enough to verify that the Bergman metric satisfies Property (b.2).
We will also provide a proof of Theorem 1.5 in the special case of convex domains. In this case the proof is similar to the argument for general domains with bounded intrinsic geometry, but has less technicalities.
The key tool in the convex case is a result of Frankel which says that any C-properly convex domain can be normalized via an affine map. In what follows, we will let Aff(C^d) denote the group of affine automorphisms of C^d. Any T ∈ Aff(C^d) can be written as T(z) = Lz + b, where L ∈ GL_d(C) is the linear part of T and b ∈ C^d.
Theorem 4.1 (Frankel [Fra91]). For any d ∈ N there exists ǫ_d > 0 such that: if Ω ⊂ C^d is a C-properly convex domain and ζ ∈ Ω, then there exists T_ζ ∈ Aff(C^d) with T_ζ(ζ) = 0 and 2ǫ_d B ⊂ T_ζ(Ω) ⊂ H^d, where H = {z ∈ C : Re(z) < 1}.
Remark 4.2. Notice that this implies that every C-properly convex domain is an HHR-domain.
Frankel used the normalizing maps to estimate the Bergman metric in terms of the Euclidean geometry of the domain. Given a domain Ω ⊂ C^d, z ∈ Ω, and v ∈ C^d non-zero, define δ_Ω(z; v) = inf{‖w − z‖ : w ∈ ∂Ω ∩ (z + C v)}, the distance from z to ∂Ω along the complex line through z in the direction v.
Theorem 4.3 (Frankel [Fra91]). There exists C > 1 such that
$$\frac{1}{C}\, \frac{\|v\|}{\delta_\Omega(z; v)} \le \sqrt{g_{\Omega,z}(v,v)} \le C\, \frac{\|v\|}{\delta_\Omega(z; v)}$$
for all z ∈ Ω and non-zero v ∈ C^d.
Standing assumption: For the rest of this section let Ω ⊂ C d be a properly convex domain and for each ζ ∈ Ω let T ζ be an affine map satisfying Theorem 4.1.
We will show that the Bergman metric on Ω has Property (b.2) and then prove Theorem 1.5 for Ω.
Lemma 4.4. There exists C_1 > 1 such that, for all ζ ∈ Ω, the Bergman kernel $B_{T_\zeta(\Omega)}$ and its derivatives are bounded above by C_1, and $|B_{T_\zeta(\Omega)}|$ is bounded below by 1/C_1, on ǫ_d B × ǫ_d B.
Proof. Fix some δ ∈ (ǫ_d, 2ǫ_d). From the monotonicity property of the Bergman kernel and the explicit formulas for the Bergman kernel on 2ǫ_d B and H^d, there exists C > 1 bounding the kernel on δ B × δ B. The kernel $B_{T_\zeta(\Omega)}(u, w)$ is holomorphic in the first variable and anti-holomorphic in the second variable. Then, since δ > ǫ_d, Cauchy's integral formulas imply uniform estimates for the derivatives on ǫ_d B × ǫ_d B.
We will also use the following corollary to Theorem 4.3.
Corollary 4.5 (to Theorem 4.3). There exists C_2 > 1 such that
$$\frac{1}{C_2}\, \|L_\zeta v\| \le \sqrt{g_{\Omega,\zeta}(v,v)} \le C_2\, \|L_\zeta v\|$$
for all ζ ∈ Ω and v ∈ C^d, where L_ζ is the linear part of T_ζ. Using these estimates we can prove that the Bergman metric on a convex domain satisfies Property (b.2).
Proof. Fix ζ ∈ Ω. Notice that $B_\Omega(z,z) = |\det L_\zeta|^2\, B_{T_\zeta(\Omega)}(T_\zeta z, T_\zeta z)$, where L_ζ is the linear part of T_ζ. So by Lemma 4.4 and Corollary 4.5, $|(\partial \log B_\Omega(z,z))(X)| \lesssim \sqrt{g_{\Omega,z}(X,X)}$ for all X ∈ C^d. So $\|\partial \log B_\Omega(z,z)\|_{g_\Omega}$ is uniformly bounded.
Finally we provide a proof of Theorem 1.5 for the special case of convex domains. As mentioned at the start of this section, the proof in this case is similar to the proof in the general case (and also similar to Fu-Straube's original proof), but in this special case many technicalities can be avoided.
Remark 4.8. The proof below is similar to Fu and Straube's original argument that (1) ⇔ (3), but with three modifications that will allow us to extend the result to domains with bounded intrinsic geometry.
• The first is the observation that the estimates in Theorem 4.3 imply that (2) ⇔ (3). This allows us to work with the Bergman metric instead of the boundary of the domain. • In their proof that (2/3) ⇒ (1), Fu and Straube directly construct bounded plurisubharmonic functions which satisfy Catlin's property (P q ). This construction seems to rely on the convexity of the domain. In contrast, we will use Proposition 4.6 which directly shows that (2) implies property ( P q ) and hence compactness. • In their proof that (1) ⇒ (2/3), Fu and Straube consider a linear slice of the convex domain and use the Ohsawa-Takegoshi extension theorem to pass from the slice to the full domain. The fact that linear slices are well behaved again seems to rely on the convexity of the domain. Our argument that (1) ⇒ (2/3) is similar, but by using Frankel's normalizing maps we can avoid this reduction to a lower dimensional domain.
Proof. Theorem 4.3 implies that (2) ⇔ (3). If (2) is true, then Proposition 4.6 implies that Ω has Property $(\widetilde{P}_q)$ and hence N_q is compact by Theorem 3.5. We prove that (1) ⇒ (2) by contradiction. Suppose for a contradiction that (1) is true and (2) is false. Then there exist C_3 > 0, a sequence (ζ_m)_{m≥1} in Ω converging to ∂Ω, and a sequence (V_m)_{m≥1} of q-dimensional linear subspaces such that $g_{\Omega,\zeta_m}(v,v) \le C_3 \|v\|^2$ for all v ∈ V_m. For each m, let T_m = T_{ζ_m} be the recentering map from Theorem 4.1 and let L_m be the linear part of T_m. Corollary 4.5 and Equation (2) imply a uniform bound on ‖L_m v‖ for unit vectors v ∈ V_m. Since Ω is bounded, Theorem 4.3 implies that there exists C_4 > 1 such that g_Ω ≥ C_4^{-2} g_Euc. Using the singular value decomposition, we can write $L_m^{-1} = k_{1,m} D_m k_{2,m}$ where k_{1,m}, k_{2,m} are unitary matrices and D_m is diagonal with the singular values of $L_m^{-1}$ on the diagonal. Then consider the associated (0, q)-form α_m on Ω. Then ‖α_m‖_Ω = 1 and $\bar\partial \alpha_m = 0$. So $h_m := \bar\partial^* N_q \alpha_m$ satisfies $\bar\partial h_m = \alpha_m$ and {h_m : m ≥ 1} is relatively compact in $L^2_{(0,q-1)}(\Omega)$ (see the discussion preceding Theorem 11.1).
By passing to a subsequence we can suppose that h_m converges in $L^2_{(0,q-1)}(\Omega)$. Then for any ǫ > 0 there exists a compact subset K ⊂ Ω such that $\sup_m \int_{\Omega \setminus K} \|h_m\|^2\, dz < \epsilon$. We will derive a contradiction by showing that $\int_{B_\Omega(\zeta_m; r)} \|h_m\|^2\, dz$ is uniformly bounded from below. Since ζ_m → ∂Ω and the Bergman metric is complete, this will contradict Equation (4). Consider the (0, q)-form on T_m(Ω) obtained by pushing forward α_m. Using Lemma 4.4 and Equation (3), we can pass to a subsequence such that α_m converges uniformly on ǫ_d B to a smooth (0, q)-form α with α|_0 ≠ 0.
Since α ≠ 0, there exists a smooth compactly supported form χ on ǫ_d B with $\int \langle \alpha, \vartheta\chi \rangle\, dz \ne 0$, where ϑ is the formal adjoint of $\bar\partial$. By Cauchy-Schwarz and Equation (3), $\int_{B_\Omega(\zeta_m; r)} \|h_m\|^2\, dz$ is then bounded below for any r > C_2 ǫ_d. Thus we have a contradiction.
4.2.
The C-convex case. A domain Ω ⊂ C^d is called C-convex if for every complex affine line L ⊂ C^d the intersection Ω ∩ L is either empty or simply connected. Clearly, every convex domain is C-convex. Further, as in the convex case, we say that a domain is a C-properly C-convex domain if it is C-convex and every complex affine map C → Ω is constant. As in the convex case, a C-convex domain is Kobayashi hyperbolic if and only if it is C-properly C-convex, see for instance [NPZ11]. For C-convex domains, we have the following recentering result established by Nikolov-Andreev using results from [NPZ11].
Theorem 4.9 (Nikolov-Andreev). For any d ∈ N there exists ǫ_d > 0 such that: if Ω ⊂ C^d is a C-properly C-convex domain and ζ ∈ Ω, then there exists T_ζ ∈ Aff(C^d) such that T_ζ(ζ) = 0 and 2ǫ_d B ⊂ T_ζ(Ω).
To show that the Bergman metric satisfies Property (b.2), we will need the following estimates.
Theorem 4.10 ([NPZ11]). If Ω ⊂ C^d is a C-properly C-convex domain, then there exists C > 1, depending only on d, such that
$$\frac{1}{C}\, \frac{\|v\|}{\delta_\Omega(z; v)} \le \sqrt{g_{\Omega,z}(v,v)} \le C\, \frac{\|v\|}{\delta_\Omega(z; v)}$$
for all z ∈ Ω and non-zero v ∈ C^d.
Lemma 4.11. If D ⊊ C is simply connected, then
$$\frac{1}{16\pi\, \delta_D(z)^2} \le B_D(z,z) \le \frac{1}{\pi\, \delta_D(z)^2} \quad \text{for all } z \in D,$$
where δ_D(z) denotes the Euclidean distance from z to ∂D.
Proof. Fix z ∈ D and let ψ : D → D be a biholomorphism onto the unit disk with ψ(z) = 0. Then $B_D(z,z) = |\psi'(z)|^2 / \pi$. The Koebe 1/4 theorem applied to ψ^{-1} says that δ_D(z) ≥ 1/(4|ψ′(z)|), while the Schwarz lemma gives δ_D(z) ≤ 1/|ψ′(z)|; the stated bounds follow.
Proposition 4.12. If Ω ⊂ C^d is a C-properly C-convex domain, then $\|\partial \log B_\Omega(z,z)\|_{g_\Omega}$ is uniformly bounded; hence g_Ω has Property (b.2).
Proof. For each ζ ∈ Ω, fix T ζ ∈ Aff(C d ) an affine map satisfying Theorem 4.9. Fix δ ∈ (ǫ d , 2ǫ d ). Using Lemma 4.11 there exists A > 1 such that for all ζ ∈ Ω and w ∈ δ B. Then using Cauchy's integral formulas and increasing A one can prove that for all ζ ∈ Ω, w ∈ ǫ d B, and X ∈ C d . Then the rest of the proof is identical to the proof of Proposition 4.6.
Local charts from bounded geometry
The following constructions are fundamental for everything else in the paper.
Suppose Ω ⊂ C d is a domain, g is a complete Kähler metric on Ω, and dist g is the distance induced by g.
Suppose g has Property (b.1). Since g is complete and has bounded sectional curvature, a result of Shi [Shi89] provides C_0 > 1 and a complete Kähler metric h on Ω such that
$$\frac{1}{C_0}\, g \le h \le C_0\, g \quad \text{and} \quad \sup_{z \in \Omega} \|\nabla^m R(h)\|_h < \infty \ \text{for all } m \ge 0,$$
where R(h) is the curvature tensor of h (this metric is obtained by applying the Ricci flow to g for a small amount of time).
Proof. Since g has bounded sectional curvature and positive injectivity radius, the Rauch comparison theorem implies that there exist r_1 > 0 and C_1 > 1 such that the exponential map based at any ζ ∈ Ω is C_1-bi-Lipschitz on the ball of radius r_1; see for instance [Gro07, Section 8.7]. Since h is uniformly comparable to g, the same holds for h, and so [LSY05, Proposition 2.1] implies that h has positive injectivity radius. Now applying Theorem 3.8 to the Kähler manifold (Ω, h) yields constants C_2 > 1, r_1, r_2 > 0, and holomorphic embeddings F_ζ : U_ζ → Ω such that F_ζ(0) = ζ and (1/C_2) g_Euc ≤ F_ζ* h ≤ C_2 g_Euc. After rescaling, we obtain holomorphic embeddings Φ_ζ : B → Ω with Φ_ζ(0) = ζ, and we claim that Φ_ζ|_B satisfies part (1) of the theorem for an appropriate choice of A_1. To establish the bounds on the distance, recall that the length of a piecewise C^1 curve σ : [a, b] → Ω with respect to g is defined by
$$\operatorname{length}_g(\sigma) = \int_a^b \sqrt{g_{\sigma(t)}(\sigma'(t), \sigma'(t))}\, dt,$$
and the distance induced by g is defined by
$$\operatorname{dist}_g(u, w) = \inf_\sigma \operatorname{length}_g(\sigma),$$
where the infimum is taken over all piecewise C^1 curves joining u to w. Now fix u, w ∈ B. Since Φ_ζ* g ≤ A_1 g_Euc, we clearly have dist_g(Φ_ζ(u), Φ_ζ(w)) ≤ √A_1 ‖u − w‖. To establish the lower bound, consider some piecewise C^1 curve σ joining Φ_ζ(u) to Φ_ζ(w). If σ remains in Φ_ζ(B), the Euclidean comparison gives the bound directly. Otherwise, there exist sequences (a_n)_{n≥1}, (b_n)_{n≥1} of crossing times along which σ exits Φ_ζ(B), and summing the lengths of the corresponding subsegments gives the same lower bound. Then, since σ was an arbitrary piecewise C^1 curve joining Φ_ζ(u) to Φ_ζ(w), we have the desired lower bound.
5.2. Part (2). Suppose g has Property (b.2). Then, by definition, there exist C > 1 and a C^2 function λ : Ω → R such that (1/C) g ≤ L(λ) ≤ C g and $\|\partial\lambda\|_g \le C$.
We start by observing that the function λ can be used to construct negative plurisubharmonic functions.
The pluricomplex Green function
In this section we establish a local estimate for the pluricomplex Green function on domains with bounded intrinsic geometry. We will use this estimate to study the Kobayashi metrics in Section 7 and to establish an extension result in Section 8.
Definition 6.1. Given a domain Ω ⊂ C^d and w ∈ Ω, the pluricomplex Green function is
$$G_\Omega(z, w) = \sup u(z),$$
where the supremum is taken over all negative plurisubharmonic functions u on Ω such that u(·) − log‖· − w‖ is bounded from above in a neighborhood of w.
Remark 6.2. In the definition, we assume that u ≡ −∞ is a plurisubharmonic function.
We will frequently use the following basic fact.
The main result in this section is the following.
Theorem 6.4. Suppose Ω ⊂ C^d is a domain with bounded intrinsic geometry, g is a complete Kähler metric on Ω satisfying Definition 1.1, and dist_g is the distance induced by g. Then there exist C, τ > 0 such that the pluricomplex Green function satisfies the asserted local estimates in terms of dist_g.
Proof. By Theorem 5.1 there exist A > 1, holomorphic embeddings Φ_ζ : B → Ω, and plurisubharmonic functions φ_ζ : Ω → [−A, 0] with the properties stated there. Fix δ ∈ (0, 1/2) and a smooth function χ : B → [0, 1] with χ(z) = 1 when ‖z‖ ≤ δ and χ(z) = 0 when ‖z‖ ≥ 2δ. Then pick C > 0 accordingly. Next fix M > AC and define the functions u_ζ. We claim that each u_ζ is plurisubharmonic. Since φ_ζ is plurisubharmonic and the support of the first term is contained in Φ_ζ(B), it suffices to consider the functions v_ζ. Then, since M > AC, each v_ζ is plurisubharmonic. Hence u_ζ is plurisubharmonic. Since Φ_ζ^{-1} is well defined and smooth in a neighborhood of ζ, u_ζ − log‖z − ζ‖ is bounded from above in a neighborhood of ζ, and so G_Ω(z, ζ) ≥ u_ζ(z) when z ∈ Φ_ζ(δ B). Since {z ∈ Ω : dist_g(z, ζ) < τ} ⊂ Φ_ζ(δ B) when τ < A^{-1/2} δ, this completes the proof.
The Kobayashi metric
In this section we use the estimates on the pluricomplex Green function in Theorem 6.4 to bound the Kobayashi metric on a domain with bounded intrinsic geometry.
We will frequently use the following basic fact.
Observation 7.2. If Ω_1 ⊂ C^{d_1}, Ω_2 ⊂ C^{d_2}, and f : Ω_1 → Ω_2 is a holomorphic map, then
$$k_{\Omega_2}(f(z); f'(z)v) \le k_{\Omega_1}(z; v) \quad \text{for all } z \in \Omega_1 \text{ and } v \in C^{d_1}.$$
The main result in this section is the following.
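For reference (again a classical computation, not part of this paper), on the unit disk the Kobayashi metric is the Poincaré metric,
$$k_{\mathbb{D}}(z; v) = \frac{|v|}{1 - |z|^2},$$
so together with the disk computation of the Bergman metric above, $\sqrt{g_{\mathbb{D},z}(v,v)} = \sqrt{2}\, k_{\mathbb{D}}(z; v)$, which is the simplest instance of the equivalence asserted in the next theorem.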
Theorem 7.3. Suppose Ω ⊂ C^d is a domain with bounded intrinsic geometry and g is a complete Kähler metric on Ω satisfying Definition 1.1. Then there exists C > 1 such that
$$\frac{1}{C}\, \sqrt{g_z(v, v)} \le k_\Omega(z; v) \le C\, \sqrt{g_z(v, v)}$$
for all z ∈ Ω and v ∈ C^d. In particular, the Kobayashi metric induces a Cauchy complete distance on Ω.
Remark 7.4. We will establish the lower bound on the Kobayashi metric using the estimates on the pluricomplex Green function in Theorem 6.4 and the monotonicity of the pluricomplex Green function under holomorphic maps. Alternatively, it is possible to obtain this estimate using the Sibony metric [Sib81]. In particular, the Sibony metric is smaller than the Kobayashi metric and one can modify the functions u ζ constructed in the proof of Theorem 6.4 to obtain a lower bound on the Sibony metric.
Before proving Theorem 7.3 we establish a corollary.
Corollary 7.5. A domain with bounded intrinsic geometry is pseudoconvex.

Proof of Theorem 7.3. Let dist_g denote the distance induced by g. By Theorem 5.1 there exist A > 1 and holomorphic embeddings Φ_ζ : B → Ω such that By Theorem 6.4 there exist C > 0 and τ ∈ (0, 1) such that So we can define a holomorphic map φ : D → Ω by Then For the other direction, fix m ∈ N and let φ : D → Ω be a holomorphic map with φ(0) = ζ, v = φ′(0)ξ, and .
8. Extending holomorphic functions defined on local charts
Suppose Ω ⊂ C^d is a domain with bounded intrinsic geometry and g is a complete Kähler metric on Ω satisfying Definition 1.1. Then let Φ_ζ : B → Ω be holomorphic embeddings satisfying Theorem 5.1. ∫ |f|^2 dz.
The following argument is based on the proof of [GW79, Proposition 8.9], which itself is based on work of Hörmander [H65]. See also [Cat89, Section 6], [McN94, Theorem 3.4], and the discussion in Section 1.4.
Proof. By Theorem 5.1 there exists A > 1 such that Since Ω has Property (b.2), we can increase A and further assume that there exists a C^2 function λ : Ω → R such that (1/A) g ≤ L(λ) ≤ A g and ‖∂λ‖_g ≤ A.
Since L(λ) ≥ (1/A) g, we have Then, since ∂̄(χ) ≡ 0 on a neighborhood of 0, Theorem 6.4 implies that there exists C > 0 (independent of ζ, f) such that Then, after possibly increasing C (while remaining independent of ζ, f), Theorem 3.6 implies the existence of u ∈ L^2_loc(Ω) such that ∂̄u = α and ∫ |f|^2 dz.
Consider F = χ_ζ f − u. Then ∂̄F = 0 and so F is holomorphic. Since χ_ζ f is a smooth function, u is also smooth. Further, since dist_Ω is locally Lipschitz, Theorem 6.4 implies that (the constant in the big-O notation depends on ζ). Then, since ∫_Ω |u|^2 e^{−λ_2} dz is finite, we must have
$$\frac{\partial^{|\beta|} u}{\partial z^\beta}(\zeta) = 0$$
for all multi-indices β with |β| ≤ m. Then, since χ ≡ 1 on a neighborhood of 0, for all multi-indices β with |β| ≤ m.
Finally, note that
and so the proof is complete.
9. Local estimates on the Bergman kernel
Suppose Ω ⊂ C^d is a domain with bounded intrinsic geometry and let g be a complete Kähler metric on Ω satisfying Definition 1.1. Then let Φ_ζ : B → Ω be holomorphic embeddings satisfying Theorem 5.1. Using these functions we introduce the following "local Bergman kernels": for ζ ∈ Ω, define We will prove the following local estimates on these functions.
The rest of the section is devoted to the proof of the theorem. By Theorem 5.1 there exists A > 1 such that Since Ω has Property (b.2), we can increase A and assume there exists a C^2 function λ : Ω → R such that (1/A) g ≤ L(λ) ≤ A g and ‖∂λ‖_g ≤ A.
Proof. We use the following interpretation of the Bergman kernel: if D ⊂ C^d is a domain and z ∈ D, then
$$B_D(z, z) = \sup\{ |f(z)|^2 : f \in H(D),\ \|f\|_D \le 1 \},$$
where H(D) is the space of holomorphic functions D → C and ‖·‖_D is the L^2 norm on D. Then Theorem 8.1 implies that there exists c_0 > 1 such that for all ζ ∈ Ω.
In the next two lemmas we identify g_z with the d-by-d complex matrix (g_z(∂/∂z_i, ∂/∂z_j)).

Lemma 9.3. There exists c_1 > 1 such that for all ζ ∈ Ω.
Proof. Notice that So the lemma follows from Lemma 9.3 and the fact that

Lemma 9.5. For every δ ∈ (0, 1) and multi-indices a, b there exists C = C(δ, a, b) > 0 such that
$$\left| \frac{\partial^{|a|+|b|} \beta_\zeta}{\partial u^a\, \partial w^b}(u, w) \right| \le C$$
for all ζ ∈ Ω and u, w ∈ δB.
Proof. Notice that on B × B. Further, β ζ is holomorphic in the first variable and anti-holomorphic in the second variable. So these estimates follow from Cauchy's integral formulas.
10. The Bergman metric
In this section we prove Theorem 1.2 from the introduction.
Theorem 10.1. If Ω ⊂ C^d is a domain with bounded intrinsic geometry, then the Bergman metric g_Ω on Ω satisfies Definition 1.1 and

The rest of the section is devoted to the proof of Theorem 10.1. Let Ω ⊂ C^d be a domain with bounded intrinsic geometry and let g be a complete Kähler metric on Ω which satisfies Definition 1.1.
By Theorem 5.1 there exist A > 1 and holomorphic embeddings Φ_ζ : B → Ω such that

Lemma 10.2. There exists C > 1 such that Hence g_Ω is complete and satisfies Property (b.2).
Proof. We use the following interpretation of the Bergman metric: if D ⊂ C^d is a domain, z ∈ D, and X ∈ C^d, define
$$\eta_D(z; X) = \sup\{ |df_z(X)|^2 : f \in H(D),\ \|f\|_D \le 1,\ f(z) = 0 \},$$
where H(D) is the space of holomorphic functions D → C and ‖·‖_D is the L^2 norm on D. Then
$$g_{D,z}(X, X) = \frac{\eta_D(z; X)}{B_D(z, z)}.$$
By Theorem 8.1 there exists C > 1 such that for all ζ ∈ Ω and X ∈ C^d. By Lemma 9.2, and possibly increasing C > 1, we can also assume that for all ζ ∈ Ω. Further, So
$$\frac{n+1}{AC}\, g_\zeta \le g_{\Omega,\zeta} \le AC(n+1)\, g_\zeta$$
for all ζ ∈ Ω.
where R is the curvature tensor of g Ω .
Proof. As in Section 9, for ζ ∈ Ω define So the corollary follows from Theorem 9.1, Lemma 10.2, and expressing the curvature tensors in local coordinates.
11. The proof of Theorem 1.5

In this section we prove an extension of Theorem 1.5 from the introduction, but first some general remarks.
When Ω is a bounded pseudoconvex domain, a bounded linear operator S_q : L^2_{(0,q)}(Ω) ∩ ker ∂̄ → L^2_{(0,q−1)}(Ω) is said to be a solution operator for ∂̄ if ∂̄S_q(u) = u for all u ∈ L^2_{(0,q)}(Ω) ∩ ker ∂̄. The operator ∂̄*N_q is such a solution operator, and it is well known that the compactness of N_q implies the compactness of ∂̄*N_q; see for instance [FS01, Lemma 1].
Theorem 11.1. Suppose Ω ⊂ C^d is a bounded domain with bounded intrinsic geometry. Then the following are equivalent: (1) Ω satisfies condition (P_q).
If, in addition, ∂Ω is C^0, then the above conditions are equivalent to (5) ∂Ω contains no q-dimensional analytic varieties.
(2) Recall that σ_1(A) ≥ ⋯ ≥ σ_d(A) denote the singular values of a d-by-d matrix A, and so where the minimum is taken over all q-dimensional complex linear subspaces.
Summarizing our discussion so far, we know that We will complete the proof by showing that (3) ⇒ (4) and (4) ⇔ (5). For the rest of the section let Ω ⊂ C^d be a bounded domain with bounded intrinsic geometry. By Theorems 5.1 and 10.1 there exist A > 1 and, for each ζ ∈ Ω, a holomorphic embedding Φ_ζ : B → Ω such that

Proposition 11.4 ((5) ⇒ (4)). If ∂Ω contains no q-dimensional analytic varieties, then

Proof. Suppose not. Then there exist C > 0, a sequence (ζ_m)_{m≥1} in Ω converging to ∂Ω, and a sequence (V_m)_{m≥1} of q-dimensional linear subspaces such that By Montel's theorem and passing to a subsequence we can assume that Φ_{ζ_m} converges locally uniformly to a holomorphic map Φ : B → Ω̄ with Φ(0) ∈ ∂Ω. Since the Bergman metric on Ω is complete, Equation (5) implies that Φ(B) ⊂ ∂Ω.
Lemma 11.5 ((4) ⇒ (5)). Suppose ∂Ω is C^0. If then ∂Ω contains no q-dimensional analytic varieties.

Proof. Suppose not. Then there exists a holomorphic map ψ : B_q → ∂Ω where ψ′(0) has rank q and B_q ⊂ C^q denotes the unit ball. By applying a linear change of coordinates to C^d, we may assume that ψ′(0)v = (v, 0) for all v ∈ C^q.
Proposition 11.6 ((3) ⇒ (4)). If there exists a compact solution operator for ∂̄ on (0, q)-forms, then

The rest of the section is devoted to the proof of Proposition 11.6, which is similar to arguments of Catlin [Cat83, Section 2] and Fu–Straube [FS98, Section 4].
Assume S_q is a compact solution operator for ∂̄ on (0, q)-forms. We argue by contradiction: suppose there exist C > 0, a sequence (ζ_m)_{m≥1} in Ω converging to ∂Ω, and a sequence (V_m)_{m≥1} of q-dimensional linear subspaces such that For each m ≥ 0 let U_m be a unitary matrix with and consider the (0, q)-forms on Ω. Then ‖α_m‖_2 = 1 and ∂̄α_m = 0. So h_m = S_q(α_m) is well defined, and by passing to a subsequence we can suppose that h_m converges in L^2_{(0,q−1)}(Ω). Since h_m converges, for any ε > 0 there exists a compact subset K ⊂ Ω such that
$$\sup_{m \ge 0} \int_{\Omega \setminus K} \|h_m\|^2\, dz < \varepsilon.$$
We will derive a contradiction by showing that is uniformly bounded from below. Since ζ_m → ∂Ω and the Bergman metric is proper, this will contradict Equation (7). Let Φ_m := Φ_{ζ_m}. By precomposing each Φ_m with a unitary transformation we may assume that

Lemma 11.7.
(2) For any δ ∈ (0, 1) there exists C_δ > 0 such that and so part (1) follows from the definition of the determinant. Part (2) is a consequence of the Cauchy integral formulas and the fact that the functions Φ_m : B → Ω are uniformly bounded. Since Thus, all the singular values of L_m are greater than (AC)^{−1/2}, which implies that and notice that

Lemma 11.8. After passing to a subsequence, we can assume that α̃_m converges locally uniformly on B to a smooth function α̃ with α̃(0) ≠ 0.
Proof. Each α̃_m is a product of a holomorphic function
$$f_m(w) := \det(\Phi_m'(w))\, \frac{B_\Omega(\Phi_m(w), \zeta_m)}{B_\Omega(\zeta_m, \zeta_m)}$$
and an anti-holomorphic function J_m. Hence by Montel's theorem it is enough to show that the sequence α̃_m is locally bounded on B and |α̃_m(0)| is uniformly bounded from below.
We will finally obtain a contradiction by proving the following. where χ = ψ dz_1 ∧ ⋯ ∧ dz_q. Since where ϑ is the formal adjoint of ∂̄. Now for w ∈ supp(χ), by Lemma 11.7. So Finally, note that Φ_m(B) ⊂ B_Ω(ζ_m; √A) and so
$$0 < \liminf_{m \to \infty} \int_{B_\Omega(\zeta_m;\, r)} \|h_m\|^2\, dz$$
for any r ≥ √A.
12. Potentials with bounded complex gradients
The purpose of this section is to justify the complicated formulation of Property (b.2). In particular, we consider a stronger, more natural property and then show that it is not invariant under biholomorphism. Notice that Property (∗) implies that the Bergman metric has Property (b.2) and is equivalent to: there exists C > 0 such that
$$|\partial \log B_\Omega(z, z)(X)| \le C \sqrt{g_{\Omega,z}(X, X)} \qquad (9)$$
for all X ∈ C^d and z ∈ Ω. Property (∗) seems more natural than Property (b.2), but unfortunately it is not invariant under biholomorphism. The proof requires one lemma.
Notice that in the last equality we used the fact that F* g_{Ω_2} = g_{Ω_1}.
Thus if we can find ψ : D → D − {0} such that the above quantity is unbounded, then Ω_ψ does not have Property (∗).

Remark 12.5. One can make the above argument more concrete by directly using the explicit covering map D → D − {0} given by
$$\psi(z) = \exp\left( -\frac{1+z}{1-z} \right).$$
With this choice, it is possible to explicitly compute the Bergman kernel on Ω_ψ and then verify directly that Ω_ψ does not have Property (∗). | 2020-08-14T01:00:49.690Z | 2020-08-13T00:00:00.000 | {
"year": 2020,
"sha1": "0b8c1be543beaa879d1752b7a5ba0bbeb7559ff9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "82d4c8e6eaa4a367e3d59ec182261b154f995a22",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
18268129 | pes2o/s2orc | v3-fos-license | Interacting clusters and their environment
Central regions of superclusters are the ideal places in which to study cluster merging phenomena: there the accretion activity is enhanced, as predicted by cosmological simulations. In this paper I review the case study of the Shapley Concentration, aimed at understanding the effect of major mergers on the intracluster medium and the galaxy population of the clusters involved.
Introduction
Cluster mergers are known to be among the most energetic phenomena in the Universe, but until now studies at all wavelengths have not been extensively carried out: therefore it is still unclear how the collision energy is dissipated and what effect merging has on the emission properties of the galaxies and on the physics of the intracluster medium.
In cosmological N-body simulations the cluster accretion happens along specific directions defined by the density caustics, and richer clusters form preferentially where the environmental density is higher. Superclusters can be considered the observational counterparts of these caustics, and it is expected that at their centers the cluster accretion is still strongly active. Therefore, superclusters are the ideal places in which to study merging phenomena, because the cross-section for cluster collisions is enhanced.
The best place for these studies is the central region of the Shapley Concentration supercluster, a huge concentration of clusters at z ∼ 0.05 (Raychaudhury 1989, Plionis & Valdarnini 1991, Raychaudhury et al. 1991, Zucca et al. 1993). This region is anomalously rich in clusters, considering that it has 25 members while the Great Attractor (with a similar mass overdensity) contains only 6 clusters (see Table 3 of Zucca et al. 1993): for some reason, in this region the cluster formation efficiency has been enhanced even with respect to similar regions. Moreover, as noted by Raychaudhury et al. (1991), the fraction of clusters with substructures in this supercluster is higher than elsewhere, meaning that the process of cluster formation is still strongly active and suggesting that this region could be considered a "nursery" of rich clusters.
Large scale structure
Various redshift surveys have been conducted in order to study the large scale distribution of galaxies in the Shapley Concentration (Drinkwater et al. 2004) and its relation with the distribution of clusters. Up to now, a few thousand galaxy redshifts are available in the supercluster, allowing not only its geometry but also its overdensity and mass to be determined (see Figure 1). The distribution of inter-cluster galaxies is well described by a plane tilted with respect to the line of sight: the distribution of galaxies around this plane is a Gaussian with a dispersion of 3.8 h −1 Mpc. The huge overdensity in number of galaxies (∼ 11 on a scale of ∼ 10 h −1 Mpc) found for this supercluster is consistent only with ΛCDM or open CDM cosmological scenarios (for a more recent determination on larger scales see Drinkwater et al. 2004). The determination of the mass of the supercluster is more difficult because, as indicated by the overdensity, the region is far from virialization. Various methods have been applied (Ettori et al. 1997), leading to an estimate of ∼ 10 16 M ⊙.

[Figure caption residue: Ettori et al. (1997). Lower panel: ASCA hardness ratio map of the A3558 complex from Akimoto et al. (2003). Note the "hot spot" between A3562 and SC1329-313 (reproduced with the permission of the American Astronomical Society).]
All these properties are extreme even for ΛCDM models, and for this reason it is quite difficult to find a supercluster like the Shapley Concentration in numerical simulations.
As can be seen in Figure 2, two main groups of clusters ("cluster complexes") dominate the central region of this supercluster. The A3558 complex (A3558 is the richest cluster of the region) is a structure elongated for ∼ 7 h −1 Mpc in the East-West direction, comprising also A3562, A3556, SC1329-313 and SC1327-312. The A3528 complex extends for ∼ 7 h −1 Mpc along the North-South direction and is formed by two pairs of interacting clusters (A3528 itself is double), including also A3530 and A3532.
Dynamical studies by Bardelli et al. (2000) and Reisenegger et al. (2000) concluded that these structures (with masses of a few 10 15 M ⊙ each) are in the collapse phase, while the entire central supercluster region has already reached its turn-around radius and has started to collapse. In fact, the complexes represent a major merger at an advanced stage (the A3558 complex) and at an early stage (the A3528 complex). These two structures are connected by a "bridge" of galaxies, resembling the Great Wall: within this wall the overdensity in number of galaxies is ∼ 4, consistent with the overall overdensity of 3.3 obtained by Drinkwater et al. (2004) after having eliminated the cluster regions.
Formally, there is also another complex, dominated by A3571 (the cluster visible in the South-East in the lower panel of Figure 2), which is connected with A3572 and A3575 (two poor concentrations) in the optical image. However, in the X-ray this cluster appears well relaxed, and this could indicate that the merger is "old", in the sense that the gas has had time to reach equilibrium, while the galaxies are still at the end of the relaxation process.
The A3558 complex
Clusters belonging to this structure are embedded in a continuous envelope of both hot gas (Kull & Böhringer 1999) and galaxies (Bardelli et al. 1998a) on a scale of ∼ 7 h −1 Mpc, i.e. surrounding the entire structure (see Figure 3). The hot gas did not originate from the cosmological filament (or "wall") seen in the redshift survey, but is probably intracluster gas expelled from the clusters by the merging (see the spatial analysis of A3562 in Ettori et al. 2000). The galaxy envelope has a similar origin, being formed by the less bound cluster objects, shared by the whole structure after the merger.

[Figure caption residue: Note the clumpy distribution. Right panel: optical isodensity contours; different symbols correspond to galaxies belonging to different substructures found in the α, δ and velocity space (see Bardelli et al. 1998b).]
X-ray studies of this region with ROSAT, ASCA and Beppo-SAX (Bardelli et al. 2002, Hanami et al. 1999, Akimoto et al. 2004, Ettori et al. 2000) did not detect shocks, although the gas distribution shows clear signs of disturbance. Only between A3562 and SC1329-313, in the Eastern part of the structure, is a hotter region detected (see Figure 3). Furthermore, SC1329-313 has a particularly disturbed, comet-shaped gas distribution (Bardelli et al. 2002): an ASCA analysis of its X-ray spectrum led Hanami et al. (1999) to claim the existence of significant turbulent motions or of a multiphase gas. This means that the merging is still at work and the clusters are far from equilibrium. As can be seen from Figure 4, most of the features in the gas distribution of the A3558 complex can be related to substructures in the galaxy distribution.
This spectacular major merger represents a unique opportunity to study the effect of merging at radio wavelengths (see also the contributions of Zucca and Giacintucci, this conference). In particular, a peculiar radio feature has been detected: it is formed by a radio halo and by a diffuse radiosource (see Figure 5); the only other known case is in the Coma cluster. The radio halo, detected at the center of A3562, lies near a head-tail radiogalaxy: we verified (Venturi et al. 2003) that this radiogalaxy furnished the electrons which, after reacceleration by a merger 0.4 Gyr ago, are responsible for the halo emission. The radio spectrum of the halo is also consistent with the last electron acceleration having happened 0.4 Gyr ago. Moreover, we found that there is a significant lack of radiosources in this structure (Venturi et al. 2000): this signal comes mainly from the cluster A3558 and could indicate that merging switches off, at least for a period, the radiosource activity. Moreover, a relic radiosource has been found in the Westernmost part of the A3558 complex: a geometrical and dynamical reconstruction of this part of the structure leads us to speculate that this relic originated on the shock front (up to now undetected in the X-ray) caused by a small group infalling onto the A3556 cluster (Venturi et al. 1998). A3556 itself presents peculiar characteristics, because of its very low X-ray surface brightness with respect to its optical richness: moreover, its optical luminosity function has an unusual shape, with a pronounced excess of bright galaxies (Bardelli et al. 1998a).
The A3528 complex
A3528, the dominant cluster of the complex, is a double cluster formed by two twin subclumps separated by 0.9 h −1 Mpc, and the other two clusters of the complex (A3530 and A3532) are a close pair, separated by ∼ 1 h −1 Mpc. Gastaldello et al. (2003) studied A3528 with XMM-Newton observations, obtaining surface brightness, temperature and abundance maps. Although a bridge of hot gas connecting the two clumps has been found, no shock is detected (see Figure 6): this fact is unexpected, given the estimated masses of the clumps (∼ 8 × 10 13 M ⊙ each) and their relative distance. The most reasonable explanation is that the merger was not head-on but off-axis. After having subtracted a β model from the surface brightness of the two subclumps, we found emission excesses which can be used to determine the infall direction (see right panel of Figure 6). The conclusion is that this system is in an off-axis post-merger phase, with the closest core encounter having happened ∼ 1 − 2 Gyr ago. The interesting point is that the optical blue luminosities of the two subclumps, which are twins as regards their X-ray properties, differ by an order of magnitude. This could indicate that one of the two clumps suffered more than the other from the galaxy "peeling" process, probably induced by a longer path through the large scale environment. XMM-Newton data on the pair A3530/A3532 are presently in the reduction phase.
Our general conclusion is that, although the two single pairs of clusters (the two clumps of A3528 and A3530/A3532) are mergers at an advanced stage, the A3528 complex as a whole is at an earlier stage of collapse than the A3558 complex, and the masses involved here are probably lower.
Summary
Central regions of superclusters offer the possibility of finding cluster mergers at different phases and strengths and of studying the consequences of cluster collisions on the intracluster medium and the galaxy population. The case study of the Shapley Concentration which I presented here shows that the multiwavelength approach is the best way to analyse merging clusters. | 2014-10-01T00:00:00.000Z | 2004-03-01T00:00:00.000 | {
"year": 2004,
"sha1": "4e7390434aa7c634f0d0a37bb0e501ddfe92020c",
"oa_license": null,
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7B154C06441F06BF35812526137C69C7/S1743921304000158a.pdf/div-class-title-interacting-clusters-and-their-environment-div.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4e7390434aa7c634f0d0a37bb0e501ddfe92020c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247770209 | pes2o/s2orc | v3-fos-license | A comparative analysis of weight-based machine learning methods for landslide susceptibility mapping in Ha Giang area
ABSTRACT Landslide susceptibility maps (LSMs) are crucial for planning policies in hazardous areas. However, the accuracy and reliability of LSMs depend on the available data and the selection of suitable methods. This study is conducted to produce LSMs by combining machine learning methods and weighting techniques for Ha Giang province, Vietnam, which has limited data. In the study area, we gather 11 landslide conditioning factors and establish a landslide inventory map. Computing the weights of classes (or factors) is very important for preparing data for machine learning methods to generate LSMs. We first use frequency ratio (FR) and analytic hierarchy process (AHP) techniques to generate the weights. Then, random forest (RF), support vector machine (SVM), logistic regression (LR), and AHP methods are combined with the FR and AHP weights to yield accurate and reliable LSMs. Finally, the performance of these methods is evaluated by five statistical metrics, ROC and R-index. The empirical results have shown that RF is the best method in terms of R-index and the five metrics, i.e. TP rate (0.9661), FP rate (0.0), ACC (0.9835), MAE (0.0046), and RMSE (0.0350), for this study area. This study opens the perspective of weight-based machine learning methods for landslide susceptibility mapping.
Introduction
Landslides are considered one of the most devastating geological hazards in mountainous regions across the world (Achour et al., 2017; Pham et al., 2021; Tien Bui, Pradhan, Lofman, Revhaug, & Dick, 2012; Yalcin, Reis, Aydinoglu, & Yomralioglu, 2011; Zhou et al., 2020). Much of mainland Vietnam consists of hilly regions where landslides affect people through damage to property and loss of life (Hung, Batelaan, San, & Van, 2005; Long & De Smedt, 2012; Trinh et al., 2016). Ha Giang province is one of the mountainous areas where landslides occur every year (Hung et al., 2016; Khien, Hung, & Long, 2012; Le et al., 2018). Hence, landslide susceptibility mapping is very crucial for the development planning of this province (Ha et al., 2020; Hung et al., 2005, 2017). Landslide susceptibility can be defined as the probability of a given terrain to yield slope failures (Yalcin, 2008). In other words, landslide susceptibility is expressed as the probability of landslide occurrence caused by a combination of different conditioning parameters in a specific zone (Chalkias, Ferentinou, & Polykretis, 2014; Hong, Pradhan, Xu, & Tien Bui, 2015).
Several landslide susceptibility mapping methods have been proposed (He, Hu, Sun, Zhu, & Liu, 2019; Jaafari, 2018; Mahalingam, Olsen, & O'Banion, 2016; Nguyen & Liu, 2019). These methods can be classified into two types, i.e. qualitative and quantitative methods (van Westen, Rengers, Terlien, & Soeters, 1997). Analytic hierarchy process (AHP) (Saaty, 1980), a famous example of qualitative methods, is used and developed by many researchers (Gudiyangada Nachappa, Kienberger, Meena, Hölbling, & Blaschke, 2020; He et al., 2019; Kayastha, Dhital, & De Smedt, 2013; Nguyen & Liu, 2019; Vojteková & Vojtek, 2020). The AHP method works based on field experience and the physical process with factors relevant to landslide occurrences. Experts assign a suitable weight to each corresponding class (or factor) based on their views about the degree of impact of the class (factor). Quantitative methods are taken to build the relationship between observed landslides and relevant factors (Jaafari, 2018). The frequency ratio (FR) method (Lee & Pradhan, 2006; Rasyid, Bhandary, & Yatabe, 2016; Yalcin et al., 2011) is well known among the quantitative methods. The likelihood of a given area to landslide is estimated from the fraction of the number of landslides within the given area over the total landslides in the whole study area, and the ratio between the given area and the whole area. AHP and FR are referred to as weighting methods (He et al., 2019). However, the performance of these weighting methods depends on the quality and availability of data to yield accurate and reliable results. The limitation of recorded landslides and the shortage of relevant conditioning factors are unavoidable in any study area.
In this paper, we carry out a study with the objective of utilizing the strengths of weighting methods and statistical machine learning methods to generate the best landslide susceptibility maps (LSMs) for Ha Giang province. Three well-known methods, i.e. random forest (RF), support vector machine (SVM), and logistic regression (LR), are selected to accomplish LSMs with highly accurate and reliable results for the study area, where landslides occur frequently.
Landslide locations are verified in fieldwork, and the landslide inventory map is established. Eleven relevant factors are selected, provided by the Vietnam Institute of Geosciences and Mineral Resources (VIGMR). We first select the AHP and FR methods to compute the weights of classes. Then, we combine the RF, SVM, and LR methods with the AHP and FR weights, respectively, to produce LSMs.
We call the combined methods weight-based machine learning methods. The AHP method alone is also used to generate LSMs. Finally, the performance of AHP and the weight-based methods is first evaluated by five statistical metrics as well as the ROC curve. The R-index is then used to evaluate the correlation between landslide occurrences and landslide susceptibility levels in the different LSMs. The empirical results have shown that the FR weights and AHP weights determine the performance of the methods, among which RF is the best one, while the other methods with AHP weights, i.e. LR and AHP, produce the worst results. Hence, weight-based machine learning methods are very effective for landslide susceptibility mapping and efficient for landslide risk reduction.
The remainder of this paper is organized as follows. Section 2 briefly gives an overview of the study area and the conditioning factors. The methodology of this work is presented in Section 3. The results and analysis are shown in Section 4. Discussion of this work is given in Section 5. Finally, Section 6 gives the conclusions.
Study area
Ha Giang province is located in the northern region of Vietnam and covers a mountainous area of approximately 7,900 km 2. The study area is geographically bounded between latitudes 22°10′00″N and 23°25′00″N and between longitudes 104°20′00″E and 105°35′00″E, and lies at altitudes of about 37 m to 2427 m above sea level. The province has annual rainfall ranging from 2300 mm to 2400 mm and humidity of around 85%. The annual temperature fluctuates from 18°C to 23°C. Ha Giang province has three main rivers and a highly dense stream network. The Lo river runs from the northwest to the southeast and supplies water to the central part of the study area. The Chay river starts from Tay Con Linh mountain and provides water for the west of the study area. The east part of the study area is supplied by the Gam river. Sedimentary rocks with principal sedimentary-carbonate components are distributed along the southeastern edge and in the northern part of the study area. This part is famous for the Dong Van Karst Plateau Geopark, including 90% carbonate areas and sharp reefs. Intrusive rocks are distributed in the west and southwest, and metamorphic rocks rich in aluminosilicate appear in the central and southeastern parts of the study area.
Due to the climatic and geo-topographical characteristics, there are two main types of natural hazards in this study area: flash floods and landslides. According to reports of the disaster management office, dozens of landslide occurrences of different volumes have been recorded every year. This work is carried out to select the best method to yield highly accurate and reliable LSMs that can be used for landslide mitigation and prevention, as well as for the development planning of Ha Giang province.
Data used
This section describes the preparation of the landslide inventory map and a list of landslide conditioning factors.
Landslide inventory map
The preparation of the landslide inventory map is the fundamental step of the process of establishing LSMs in any study area.
We first used Google Earth digitization to detect landslide locations in the study area; then a total of 324 landslide polygons were verified by fieldwork. The polygon sizes range from 90 m 2 to 218,651 m 2. We also collected 894 recorded landslide points in fieldwork conducted by VIGMR. The majority of these landslide points resulted from human modification of slopes (cut slopes) and occurred along roads. The field survey indicated that most landslides were of shallow, fall, and complex types. The landslide inventory map was generated from a combination of the recorded landslide points and the verified landslide polygons. This map is illustrated in Figure 1.
Figure 1a presents the landslide inventory and the study area; it contains only landslide locations (points and polygons). For generating non-landslide locations, we randomly select locations in the study area with slope angles of less than 5°. Such locations are very unlikely to slide, so we consider them non-landslides.
Figures 1b and 1c illustrate the detection of one landslide polygon through Google Earth; this landslide was verified by fieldwork, and we can see that several houses were damaged by it. Figure 1d describes the location of one recorded landslide that occurred along a road.
The recorded landslide locations can provide valuable information on the distribution of landslide occurrences in a specific area, and they help evaluate the degree of impact of the different conditioning factors. Hence, the landslide locations are used to establish a correlation between landslide occurrences and conditioning factors. In this study, we randomly divided the landslide locations into two parts: 70% for training (625 points and 226 polygons) and 30% for validation (269 points and 98 polygons). We first created a number of non-landslides equal to the total number of landslide locations; then, 70% of the non-landslides were used for the training part, and the rest were used for the validation part.
Landslide conditioning factors
According to many previous works (Gudiyangada Nachappa et al., 2020; He et al., 2019; Park & Kim, 2019), the selection of conditioning factors depends on data availability and experts' knowledge. Hence, we selected 11 factors for landslide susceptibility mapping in the study area: elevation, slope, aspect, curvature, geology, weathering crust, land-cover, distance to road, road density, distance to fault, and fault density. These 11 maps were provided by VIGMR.
The slope, aspect, and curvature factors were derived from a digital elevation model (DEM) with 20 × 20 m cell size. The geology and fault factors were at the scale of 1:200,000. The weathering crust and land-cover factors were at the scale of 1:50,000. These factors and the road factors were converted into raster maps with a 20 m resolution. The 11 factors are shown in Figure 2.
Slope.
Slope angle (slope) is one of the most crucial features of slope stability analysis. In other words, slope is engaged with the occurrence of landslides (Yalcin, 2008). Slope is adopted as an essential factor in creating landslide susceptibility prediction models. The slope factor of this study area was first derived from the DEM, and then the natural breaks method was used to classify this factor into five classes: 0–11.87, 11.87–22.75, 22.75–32.31, 32.31–43.20, and 43.20–84.09 degrees (Figure 2b).
Aspect.
The aspect factor is another important parameter of slope features that influences the occurrence of landslides. The factor is directly associated with the process of evapotranspiration and moisture in hilly areas (Meten, PrakashBhandary, & Yatabe, 2015). Thus, aspect is also treated as an important conditioning factor. We classified this factor into nine classes: flat (−1°) and the eight compass directions (Figure 2c).
Curvature.
The curvature factor is also included in slope analysis and is often used in studies of landslide susceptibility. Streams and rains erode the curvature over time; hence, this factor relates to the divergence and convergence of landslide materials as well as displacement (Carson & Kirkby, 1972; Kaur et al., 2019). We classified the curvature factor into three classes: concave (<−0.5), flat (−0.5 − 0.5) and convex (>0.5) (Figure 2d).
Land-cover.
Land-cover is another conditioning factor for landslide occurrences. The absence or presence of vegetation layers affects the stability of slopes (Barlow, Martin, & Franklin, 2003). This factor was also selected in existing works (Nguyen & Liu, 2019; Wang et al., 2020). The land-cover factor was classified into seven classes: rivers and lakes, barren lands, populated lands, natural forests, planted forests, cultivated lands, and bushes and shrubs (shown in Figure 2e).
Road density and Distance to road.
The majority of the recorded landslide inventory points occurred along roads. Most of the study area consists of mountainous terrain and hilly reefs; hence, the road factors are treated as conditioning factors. Distance to road is considered a causative factor in several works (Wang et al., 2020; Yalcin, 2008; Yalcin et al., 2011) and is therefore taken as a contributing factor here. This factor was categorized into five classes, i.e. 0 − 20, 20 − 40, 40 − 60, 60 − 80, and >80 m, and is illustrated in Figure 2h. The road density factor was used to establish the correlation between landslide points and roads. This factor is demonstrated in Figure 2i with five classes: very low, low, moderate, high, and very high.
Fault density and distance to fault.
Faults are also considered a factor that causes landslide occurrences (Hung et al., 2016; Nampak, Pradhan, & Manap, 2014). In this study, we obtained two factors, distance to fault and fault density, from the faults map. The distance to fault factor was generated with five classes: 0–100, 100–200, 200–300, 300–400, and >400 m from fault lines. The fault density factor was established with five classes: very low, low, moderate, high, and very high. These two factors are shown in Figures 2j and 2k.
Correlation between factors
We use Pearson coefficient to compute correlations between conditioning factors.
Methodology
In this section, we describe the methodology of our study. Figure 4 illustrates the flow chart of our methodology; it consists of the following five steps.
Step 1: Data preparation.We prepare the landslide inventory map and select available conditioning factors.This step is described in Section 2.
Step 2: Weighting. Frequency ratio (FR) and analytic hierarchy process (AHP) methods are used to compute the weights of classes. Step 2 is presented in more detail in Section 3.1.
Step 3: Machine learning methods. Three well-known machine learning methods and one weighting method are selected to generate landslide susceptibility maps (LSMs) with high accuracy and reliability. These methods are random forest (RF), support vector machine (SVM), logistic regression (LR), and AHP. The four methods are combined with the AHP and FR weights (from Step 2) to produce LSMs. This step is detailed in Section 3.2.
Step 4: Validation. In this step, both training and validation parts are taken into account to evaluate the performance of the comparison methods. Section 3.3 describes the details of the evaluation metrics.
Step 5: Landslide susceptibility maps. This step is discussed in Section 4.
Frequency Ratio (FR)
The FR method is a simple weighting method that is generally applied in landslide susceptibility analysis (Mahalingam et al., 2016; Yalcin et al., 2011). This method is used to compute the ratio of the probability of landslide occurrence to that of landslide non-occurrence for each conditioning factor (Wang & Li, 2017). For each conditioning factor j, we calculate the weight of class i in factor j with respect to the occurrence of landslides, denoted by $FR_i^j$, as in Equation 1:

$$FR_i^j = \frac{n_i^j / N}{S_i^j / S} \qquad (1)$$

where $n_i^j$ is the number of landslide pixels of class i in factor j, N is the total number of landslide pixels, $S_i^j$ is the number of pixels of class i, and S is the total number of pixels in the study area. If the value of $FR_i^j$ is larger than 1, the class i is more susceptible to landslide occurrences. Otherwise, a lower value indicates that this class is less relevant to landslide occurrences.
In this study, FR is used to obtain the weights of the classes of each factor from the training part; the FR weights are shown in Table 3.
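As an illustration of Equation 1, the following minimal R sketch computes FR weights for one factor. It is a sketch under the assumption that the rasters have been flattened into per-pixel vectors; the function and variable names are hypothetical and not taken from the study's implementation.

```r
# FR weights for one conditioning factor (Equation 1).
# cls:       factor vector giving the class of each pixel for factor j
# landslide: logical vector, TRUE where the pixel is a recorded landslide
fr_weights <- function(cls, landslide) {
  N    <- sum(landslide)                 # total landslide pixels (N)
  S    <- length(cls)                    # total pixels in the study area (S)
  n_ij <- tapply(landslide, cls, sum)    # landslide pixels per class (n_i^j)
  S_ij <- as.numeric(table(cls))         # pixels per class (S_i^j)
  (n_ij / N) / (S_ij / S)                # FR > 1 means more susceptible
}
```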
Analytic hierarchy process (AHP)
The AHP method was developed and improved by Saaty (Saaty, 1980; Saaty & Vargas, 1984). This method is described as a decision-making process to address complicated issues (Saaty, 1980). In other words, AHP is a semi-quantitative method based on the knowledge of experts who are involved in solving complicated problems. The AHP method has been used in many studies on landslide susceptibility mapping (Kayastha et al., 2013; Nguyen & Liu, 2019; Trinh et al., 2016). To yield LSMs, the AHP process is divided into the following five steps:

Step 1: Conditioning factors are included in the discussion process, which is conducted by experts.
Step 2: Constructing a hierarchical model.
Step 3: Establishing a pairwise matrix between factors (or classes) based on the comparison of factors. The pairwise comparisons are described in Table 1.
Step 4: The eigenvector of the pairwise matrix is first computed; then the weights of the factors are obtained.

Step 5: In the AHP process, the consistency ratio (CR) is used to validate the adjustment of the pairwise matrix. First, the consistency index (CI) is computed as in Equation 2; then CR is calculated as in Equation 3. If CR > 0.1, the process returns to Step 2 to reconstruct the model and readjust the pairwise ranking between factors. Otherwise, the weights from Step 4 are assigned to the corresponding factors.
$$CI = \frac{\lambda_{max} - d}{d - 1} \qquad (2)$$

where $\lambda_{max}$ is the largest eigenvalue, and d is the order of the comparison matrix in Step 3.
$$CR = \frac{CI}{RI} \qquad (3)$$

where RI is the value of the random index corresponding to the order of the matrix, obtained from Table 2.
In this study, we use the AHP method to compute both the weights of classes and the weights of factors; these AHP weights are shown in Table 3.
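The weight computation and consistency check in Steps 4–5 (Equations 2–3) can be sketched in R as below. This is illustrative only: the 3 × 3 pairwise matrix is a made-up example rather than one of the matrices used in the study, and Saaty's random-index values are hard-coded for matrix orders up to 9.

```r
# AHP priority weights and consistency ratio (Equations 2-3).
ahp_check <- function(P) {
  e <- eigen(P)
  lambda_max <- Re(e$values[1])             # largest (Perron) eigenvalue
  w <- Re(e$vectors[, 1]); w <- w / sum(w)  # normalized priority weights
  d  <- nrow(P)
  RI <- c(0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45)[d]  # Saaty's RI
  CI <- (lambda_max - d) / (d - 1)          # Equation 2
  CR <- CI / RI                             # Equation 3; accept if CR <= 0.1
  list(weights = w, CR = CR)
}

P <- matrix(c(1,   3,   5,
              1/3, 1,   3,
              1/5, 1/3, 1), nrow = 3, byrow = TRUE)
ahp_check(P)
```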
Machine learning methods
FR weights and AHP weights of classes are used in three machine learning methods, which are described in the following sections.
Logistic regression
The logistic regression (LR) method is used to establish a multivariate regression relationship between a dependent variable and a set of independent variables (Atkinson & Massari, 1998). This method is very effective for predicting the presence or absence of an object based on the values of the independent variables (Bai et al., 2010); hence, it is applicable when the dependent variable is binary or dichotomous. Logistic regression has been widely used in landslide susceptibility mapping (Bai et al., 2010; Rasyid et al., 2016; Vojteková & Vojtek, 2020). The dependent variable (y) is the absence (0) or the presence (1) of a landslide occurrence. Given a set of independent variables x, the conditional probability that a landslide occurs is denoted by P(y = 1|x). The logit of the logistic regression is expressed as Equation 4:

$$\mathrm{logit}(P) = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_n x_n \qquad (4)$$

where $b_0$ is the intercept of the equation, and $b_1, \ldots, b_n$ are the coefficients of the independent variables $x_1, x_2, \ldots, x_n$. The probability P(y = 1|x) is computed in the LR method as follows:

$$P(y = 1|x) = \frac{1}{1 + e^{-(b_0 + b_1 x_1 + \cdots + b_n x_n)}} \qquad (5)$$

where e is the base of the natural logarithm, approximately 2.718.
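In R, this model can be fitted with the built-in glm() function. The sketch below is a minimal illustration: 'train' and 'test' are hypothetical data frames whose columns hold the FR or AHP class weights of the 11 factors together with a 0/1 landslide label y.

```r
# Logistic regression (Equations 4-5) on weighted factor values.
lr_model <- glm(y ~ ., data = train, family = binomial(link = "logit"))
summary(lr_model)   # fitted intercept b0 and coefficients b1..bn
p_hat <- predict(lr_model, newdata = test, type = "response")  # P(y = 1 | x)
```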
Random forest
The random forest (RF) method proposed by Breiman (Breiman, 2001) is an ensemble supervised learning method used to address problems of classification, regression, and high-dimensional data (Trinh et al., 2016). RF has been used to predict the occurrence of landslides in many studies (Kaur et al., 2019; Park & Kim, 2019). The RF method can be briefly explained as follows. Given a training data set T = {(x_i, y_i)}_{i=1}^M, where M is the number of samples in T, x_i is the set of n features (factors), and y_i ∈ Y = {0, 1} encodes the absence (0) or presence (1) of a landslide occurrence, the RF model is generated in the following steps:

Step 1: The bagging method (Breiman, 1996) is used to produce K bootstrap subsets B.
Step 2: For each B, a corresponding decision tree is formed. At each node of the decision tree, the method randomly samples mtry features to partition B and selects the best split based on the Gini measure to generate child nodes. This process continues until all leaf nodes are obtained and the decision tree is constructed. mtry is given, and is often set to √n. The whole set of K trees forms the random forest model.
Step 3: When a new object comes to the RF model, its value is computed as the average of the values from the K individual trees.
In the empirical analysis, we set K = 100 and mtry = 4.
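A minimal R sketch with the randomForest package, using the settings reported above (K = 100 trees, mtry = 4); 'train' and 'test' are again hypothetical weighted data frames.

```r
library(randomForest)

# Random forest with K = 100 trees and mtry = 4 randomly sampled factors
# per split, as stated in the text.
rf_model <- randomForest(as.factor(y) ~ ., data = train,
                         ntree = 100, mtry = 4)
p_hat <- predict(rf_model, newdata = test, type = "prob")[, "1"]  # landslide probability
```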
Support vector machine
Support vector machine (SVM) was proposed and developed by Vapnik (Cortes & Vapnik, 1995; Vapnik, 1995). SVM is a well-known supervised learning method that works by identifying an optimal separating hyperplane. The SVM method has been applied to produce landslide susceptibility maps in many works (Hong et al., 2015; Nhu et al., 2020).
Given a data set T = {(x_i, y_i)}_{i=1}^M, the SVM method maps the data into a high-dimensional feature space, where the optimal hyperplane is determined to classify y as landslide or non-landslide. This hyperplane is formed by a set of support vectors. In this paper, we use the radial basis function (RBF) kernel in the SVM model, which can be described by the following equation:

$$K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2) \qquad (6)$$

where γ is the gamma parameter.
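A corresponding R sketch uses the e1071 package's svm() with the RBF kernel of Equation 6. The gamma value below is an assumed placeholder, since the text does not report the parameter values used.

```r
library(e1071)

# SVM with the radial basis function kernel; gamma = 0.1 is an assumed
# placeholder value, not taken from the study.
svm_model <- svm(as.factor(y) ~ ., data = train, kernel = "radial",
                 gamma = 0.1, probability = TRUE)
pred  <- predict(svm_model, newdata = test, probability = TRUE)
p_hat <- attr(pred, "probabilities")[, "1"]  # landslide probability
```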
Comparison methods
We first use two different weighting methods, FR and AHP, to compute the weights of the classes of each factor. Then, four methods (AHP, RF, SVM, LR) are combined with the FR and AHP weights, respectively. Hence, we obtain eight different methods, which are used to generate eight different landslide susceptibility maps. The eight methods are listed below.

The four methods using FR weights:
RF-FR: Random forest method with FR weights
LR-FR: Logistic regression method with FR weights
SVM-FR: Support vector machine method with FR weights
AHP-FR: AHP method with FR weights

The four methods using AHP weights:
RF-AHP: Random forest method with AHP weights
LR-AHP: Logistic regression method with AHP weights
SVM-AHP: Support vector machine method with AHP weights
AHP-AHP: AHP method with AHP weights

Platform. ArcGIS 10.3 and RStudio software were used in the experiments. All methods were implemented in R and executed on a Windows 10 machine with a 3.4 GHz dual-core CPU and 16 GB memory.
Evaluation metrics
Statistical metrics. Five statistical metrics are used to compare the performance of the eight different methods: true positive rate (TP rate), false positive rate (FP rate), accuracy (ACC), mean absolute error (MAE), and root mean squared error (RMSE). The TP rate, FP rate, and ACC metrics are computed from the confusion matrix (shown in Table 4) and are defined as follows:

$$\mathrm{TP\ rate} = \frac{TP}{TP + FN} \qquad (7)$$

$$\mathrm{FP\ rate} = \frac{FP}{FP + TN} \qquad (8)$$

$$ACC = \frac{TP + TN}{P + N} \qquad (9)$$
where TP, TN, FP, and FN are obtained from Table 4; P and N are the numbers of landslides and non-landslides, respectively. TP is the number of occurred landslides that are predicted correctly. FN is the number of occurred landslides that are predicted incorrectly. FP is the number of non-landslides that are predicted incorrectly. TN is the number of non-landslides that are predicted correctly.
The other two metrics, MAE and RMSE, are widely used to measure the accuracy of regression models; they evaluate the differences between the real values and the predicted values. These two metrics are defined in Equation 10 and Equation 11, respectively:

$$MAE = \frac{1}{M} \sum_{i=1}^{M} |y_i - y_i^{predicted}| \qquad (10)$$

$$RMSE = \sqrt{\frac{1}{M} \sum_{i=1}^{M} (y_i - y_i^{predicted})^2} \qquad (11)$$

where $y_i$ and $y_i^{predicted}$ are the actual values and the predicted values, and M is the number of samples.
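The five metrics in Equations 7–11 are straightforward to compute directly in R; the sketch below assumes hypothetical vectors y (0/1 labels), y_hat (0/1 predicted labels), and p_hat (predicted probabilities).

```r
# Confusion-matrix counts (Table 4) and the five metrics (Equations 7-11).
TP <- sum(y == 1 & y_hat == 1); FN <- sum(y == 1 & y_hat == 0)
FP <- sum(y == 0 & y_hat == 1); TN <- sum(y == 0 & y_hat == 0)

tp_rate <- TP / (TP + FN)                   # Equation 7
fp_rate <- FP / (FP + TN)                   # Equation 8
acc     <- (TP + TN) / (TP + TN + FP + FN)  # Equation 9
mae     <- mean(abs(y - p_hat))             # Equation 10
rmse    <- sqrt(mean((y - p_hat)^2))        # Equation 11
```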
Receiver operating characteristic and AUC. The receiver operating characteristic (ROC) curve is another measure that is widely used to validate the performance of machine learning models. The ROC curve plots the true positive rate against the false positive rate. The area under the ROC curve (AUC) is used as a measure to compare the performance of classifiers (Bradley, 1997).
Relative landslide density (R-index). To verify the reliability of the susceptibility levels with respect to landslide occurrences, we use the relative landslide density index, denoted R-index. The R-index is defined by the ratio between the density of landslide occurrences in a given susceptibility level and the area of that level (Baeza & Corominas, 2001). This index is defined as Equation 12:
$$R_i = \frac{l_i / L_i}{\sum_i (l_i / L_i)} \times 100 \qquad (12)$$

where $l_i$ is the percentage of occurred landslides within a given susceptibility level i and $L_i$ is the percentage of the area occupied by level i. A higher value of the R-index indicates that level i is more susceptible to landslide occurrences.
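A one-line R sketch of Equation 12, assuming hypothetical vectors l and L that hold the landslide percentage and the area percentage for the five susceptibility levels.

```r
# R-index per susceptibility level (Equation 12).
r_index <- (l / L) / sum(l / L) * 100
```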
Results and comparative analysis
Experiments were conducted on both the training and the validation parts.
Comparison of five statistical metrics
The performance of the models was evaluated on the training and validation parts. The five statistical metrics are shown in Table 5. The performance of a model is good if the values of TP rate and ACC are close to 1 and the values of FP rate, MAE and RMSE tend towards 0.
In the training part, the FP rate values produced by all eight methods are very precise and close to 0. The RF-FR and RF-AHP methods produce the best values of the TP rate, ACC, MAE, and RMSE metrics, and these four statistical results of the two methods are not much different. RF-FR takes the best values of TP rate (0.9661) and ACC (0.9831), and the best values of MAE (0.0045) and RMSE (0.0347) are occupied by RF-AHP. The other methods take the next places, among which LR-AHP and AHP-AHP produce the worst performance; in particular, the TP rate values are 0.3535 and 0.3839 for LR-AHP and AHP-AHP, respectively.

In the validation part, all eight methods still yield the best values of the FP rate metric. RF-FR and RF-AHP are still the best methods in terms of the TP rate, ACC, MAE, and RMSE statistical metrics. However, the results of these metrics produced by RF-AHP and RF-FR are quite different: RF-AHP takes first place in these four statistical metrics, followed by RF-FR. The LR-FR and SVM-AHP methods occupy the next places. Similar to the training part, LR-AHP and AHP-AHP are still the worst methods in terms of the four metrics for the validation part.
Comparison of ROC and AUC, t-test
The ROC curves and AUC of the eight methods are illustrated in Figure 5. The performance of these methods is assessed by the AUC values, which range from 0.5 to 1.0; values closer to 1 indicate a more accurate method.
We can observe that the RF-AHP and RF-FR methods are the best for both the training and validation parts. In contrast, LR-AHP and AHP-AHP result in the worst performance.
We use the t-test with a 0.05 significance level to evaluate the statistical significance of differences between methods. Table 6 shows the p-value of the t-test for each pairwise comparison. From this table, we can see that all p-values are less than the critical value of 0.05, except for the pair RF-AHP and RF-FR. Hence, there are statistically significant differences between all methods except the pair RF-AHP and RF-FR.
Comparison of R-index
The R-index is used to evaluate the relative landslide density between the landslide susceptibility maps (LSMs) and the number of observed landslides; higher values of the R-index indicate that LSMs are more accurate and reliable. The natural breaks classification method in ArcGIS is selected to classify the landslide susceptibility maps into five susceptibility levels: very low, low, moderate, high, and very high. The LSMs with these five levels are demonstrated in Figure 6. Figure 7 demonstrates the percentage of area occupied by each of the five susceptibility levels. Figure 8 illustrates the percentage of occurred landslide locations within the five levels for both the training part and the validation part. The results of the R-index are shown in Figure 9.
We can observe that the RF-FR and RF-AHP methods produce the highest R-index value for the very high susceptibility level compared to the other methods, for both the training and validation parts. The R-index value indicates that the very high level of LR-AHP is also relevant to landslide occurrences. However, SVM-FR yields the lowest value of the R-index for the very high level. This can be explained as follows: the percentage of area occupied by the very high level is the highest in SVM-FR (illustrated in Figure 7), and the percentage of landslide locations in the very high level is not much higher than that of the high level (shown in Figure 8).
We can conclude that the higher susceptibility levels of the LSMs generated by the RF method are very relevant to the occurred landslide locations. The R-index results of the RF method are more accurate and reliable than those of the other methods for both the training and validation parts.
Discussion
Identifying areas susceptible to landslide occurrences is one of the most critical issues for land management and plan makers in civil protection. The objectives of many works are to model landslide susceptibility and evaluate the performance of the models. Moreover, the method selected needs to be based on the specific scientific objectives of the study (Elith & Leathwick, 2009), such as the accuracy and the reliability of the LSMs. Many single and hybrid methods have been developed to model landslide susceptibility (Catani et al., 2013; Chen et al., 2018; Hong et al., 2015; Nhu et al., 2020; Pham et al., 2019; Tien Bui et al., 2019); however, there is room for improvement by utilizing the strengths of weighting methods, particularly experts' knowledge, and the effectiveness of machine learning techniques.
This paper aims to make a comparative study of four methods combined with weighting methods for landslide susceptibility mapping in Ha Giang province, Vietnam. The landslides in the study area are mostly shallow. Furthermore, the landslide locations were recorded and verified along and near roads. Thus, the weathering crust, geology, and slope factors have the highest AHP weights according to the experts' opinion. The comparisons of the different methods in terms of TP rate, FP rate, ACC, MAE, RMSE, and AUC have shown that the RF and SVM methods are very efficient and effective for landslide susceptibility mapping in this study area. The RF method produces the best performance for landslide susceptibility mapping. RF is an ensemble method and computes the value of each object (landslide or non-landslide) by averaging the results from many trees in the forest; therefore, this model can enhance the accuracy of the predicted values. This conclusion is also in agreement with the findings of the works (Catani et al., 2013; Chen et al., 2018; Kaur et al., 2019; Park & Kim, 2019).
The performance of SVM depends on the different values of the features (or classes in each factor). If the values differ substantially, SVM works very effectively. The values of the AHP weights range between 0 and 1, while the values of the FR weights range from 0 to 5 (see Table 3). That is why the results of SVM-AHP are higher than those of SVM-FR. The results of the SVM model are reasonable for modeling landslide susceptibility (Goetz et al., 2015; Tien Bui, Tuan, Klempe, Pradhan, & Revhaug, 2016).
The performance of the LR method also depends on the features. However, LR tries to predict the value of the target classes (0 for non-landslide and 1 for landslide) by an equation, i.e. Equation 5. Thus, if the values of the features are more similar, LR predicts more accurately. Hence, LR-FR performs better than LR-AHP. Similar findings are reported in Trigila et al. (2015).
The AHP method results in worse outcomes because it works based on the opinions of experts; therefore, its results are biased in favour of factors that are assigned higher weights. In contrast, RF, LR and SVM overcome this issue, except for LR with AHP weights.
In our study, the strengths of experts' knowledge are utilized to compute the weights of the classes of each conditioning factor, because the distribution of collected landslides is often not equal and balanced across all classes of the factors.
Moreover, the size of the study area and the scale of the factor maps are also common problems. These facts lead to a bias in the results of the models. Hence, valuable expert knowledge is critical to adjust the degree of importance of the classes of each factor. In our work, we address these problems by combining the knowledge of experts with the effectiveness of machine learning methods (Trinh et al., 2016, 2020) in handling noise and data over-fitting. Therefore, applying machine learning methods (for example, RF, SVM and LR) is necessary to obtain better results and to reduce the bias in favor of experts' opinions about the factors. Moreover, combinations of machine learning methods and weighting methods (FR and AHP) should be used to gain the best results. In addition, the landslide inventory map is the vital key for landslide susceptibility mapping. First, the landslide locations that were collected and verified in the field trip are correct and relevant to the highly susceptible areas, as illustrated in Figure 9. Second, investigating and collecting information on landslide locations in fieldwork are the most critical steps for landslide susceptibility mapping.
This study has some limitations. First, the landslide inventory map contains landslide locations that were verified only in accessible areas; hence, these landslides cannot represent all characteristics of all classes of each conditioning factor. Second, the total verified landslide area (or number of cells) is very small compared to the whole study area of 7,900 km 2.
Moreover, the available datasets and the choice of landslide conditioning factors are limited and based on experts' knowledge. Although the results of the R-index have shown that areas of higher susceptibility levels are more relevant to landslide occurrences, we still need to collect more historic landslides with different shapes across the study area to confirm the accuracy and reliability of the LSMs.
Conclusion
In this paper, we presented a comparative study of machine learning methods with weighting techniques for landslide susceptibility mapping over the entire Ha Giang province. We used landslide locations collected and verified in fieldwork as the landslide inventory map, of which 70% were used for the training part and 30% for the validation part. Eleven conditioning factors were selected for this study. The AHP and FR methods were first used to compute the weights of the classes of each factor. Then, the RF, SVM, LR, and AHP methods were integrated with the AHP and FR weights to generate landslide susceptibility maps. The performance of these methods was evaluated by several metrics: TP rate, FP rate, ACC, MAE, RMSE, AUC, as well as the R-index. The results for the training and validation parts have shown that RF is very effective for landslide susceptibility mapping in this study area, while AHP-AHP and LR-AHP produce the worst performance. As a result, we recommend the RF method for generating LSMs to reduce and prevent the impact of landslides in this study area and in other areas with similar contexts.
Thi Hai Van Nguyen received the MSc degree in ecological marine management from the Free University of Brussels, Belgium. She is currently a principal researcher with the Center for Remote Sensing and Geohazards, Vietnam Institute of Geosciences and Mineral Resources. She has been a leader of many projects at the institute and ministry levels.
Figure 1. Location of the study area and the landslide inventory map.
Figure 3 illustrates the Pearson correlations between these factors. A Pearson value of 1 would mean that two factors fully represent each other. We can observe that the maximum Pearson value in Figure 3 is approximately 0.70; hence, the factors are sufficiently independent of one another, and none can substitute for another.
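A minimal sketch of this collinearity screen follows; the input file and column names are hypothetical, and the 0.70 threshold mirrors the maximum value observed in Figure 3.

```python
# Compute pairwise Pearson correlations between conditioning factors and
# flag any pair at or above the chosen redundancy threshold.
import pandas as pd

factors = pd.read_csv("factor_values.csv")  # one column per conditioning factor
corr = factors.corr(method="pearson")

threshold = 0.70
cols = list(corr.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        if abs(corr.loc[a, b]) >= threshold:
            print(f"{a} vs {b}: r = {corr.loc[a, b]:.2f} -- near-redundant pair")
```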
Figure 4. The flow chart of the methodology for landslide susceptibility mapping.
Figure 5. Statistics for the methods used for landslide susceptibility mapping. (a) Training. (b) Validation.
Figure 8. Distribution of landslide percentages across the five susceptibility levels for the eight methods.
Figure 9. R-index across the five susceptibility levels for the training and validation parts.
Figure 7. The percentages of areas occupied by the five susceptibility levels.
She has published several papers in her areas of interest, which include climate change, natural disaster management, geohazards, and landslides. Khanh Quoc Nguyen received the PhD degree in remote sensing and geomatics from the University of Greifswald, Germany. He is currently Director of the Center for Remote Sensing and Geohazards, Vietnam Institute of Geosciences and Mineral Resources. His research includes remote sensing, GIS, climate change, natural disaster management, and geohazards. Lien Thi Nguyen received the MSc degree in disaster management from Thuyloi University, Vietnam. She is currently a researcher with the Center for Remote Sensing and Geohazards, Vietnam Institute of Geosciences and Mineral Resources. Her research includes disaster forecasting and climate change.
Table 3. FR weights and AHP weights.
Table 5. Performance of the compared methods for the training and validation parts. The best values of MAE (0.0045) and RMSE (0.0347) are achieved by RF-AHP. The other methods take the next places; LR-AHP and AHP-AHP produce the worst performance, with TP rates of 0.3535 and 0.3839, respectively.
Table 6. Pairwise comparison of methods in terms of the t-test. | 2022-03-29T15:08:51.794Z | 2022-03-27T00:00:00.000 | {
"year": 2023,
"sha1": "9feca1e0a79c80e8350a955030de30364b2ed74f",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20964471.2022.2043520?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "5d9cbc38c1083c05a094b0c6d4d422cc61ae9c93",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Geology"
],
"extfieldsofstudy": []
} |
258901318 | pes2o/s2orc | v3-fos-license | Treasury Management Training For Bank of China Employees
Over the last several decades, corporate treasury management methods have evolved significantly, and treasury management's duties continue to expand. The Bank of China, which has been commercially active in Indonesia since 1938, is one of the international banks that has begun to recognize the importance of treasury management. As a management organization, Smart Indonesia Academy offers treasury management training to the Bank of China team as a means of empowering the community. The goal of this research was to analyze the Smart Indonesia Academy program, which delivered treasury management training to the Bank of China team in the context of community empowerment. This is a qualitative study using a descriptive-analytical strategy. The findings indicate that the community service held on March 30, 2022, was conducted efficiently and effectively. Additionally, the Bank of China team believes that similar community service initiatives may be sustained with a variety of other subjects or issues.
INTRODUCTION
Over the last several decades, corporate treasury management methods have evolved significantly, and treasury management's duties continue to expand. Treasury tasks were long viewed almost exclusively through the lens of cash management or liquidity management. Recent developments (new information and communication technologies, the emergence and use of new financial instruments, and an organizational philosophy centered on increasing organizational value in all areas) have facilitated the development of new treasury management functions and increased the importance of treasury departments within businesses (San-Jose et al., 2011).
Treasury management's duties have evolved beyond monitoring monetary flows and holdings to include a broader variety of tasks. Centralization of treasury processes enables businesses to improve efficiency, increase transparency, and gain real-time access to information across wide geographic regions, numerous time zones, and different organizations. There are many stages involved in centralizing treasury management, ranging from decentralized treasury management to completely centralized cash and treasury administration. Numerous businesses begin with centralized exchange-rate and interest-rate risk management as a first step toward centralizing treasury activities, progressing via cash and liquidity management to a completely centralized treasury (Petr Polak, 2010). This is also the case for banking institutions, which have begun to treat the management of exchange-rate and interest-rate risk as the first step toward centralizing treasury activities by implementing treasury management. Bank of China (Hong Kong) Limited Jakarta Branch (formerly Bank of China Limited Jakarta Branch; hereinafter referred to as "BOCHK Jakarta" or the "Bank") is a commercial branch of BOCHK that has been in operation since 1938, and it is one of the few foreign banks in Indonesia that began its operations with treasury management. The bank ceased operations in 1964; thirty-nine years later, on January 13, 2003, it resumed operations under the Governor of Bank Indonesia's Decree No. 5/1/KEP.GBI/2003. After resuming operations in Jakarta, the Bank of China Jakarta Branch changed its name to Bank of China Limited Jakarta Branch, as approved by Bank Indonesia (Bank of China, 2022).
As a management company, Smart Indonesia Academy provides treasury management training to the Bank of China team as a means of empowering the community. This method of community empowerment seeks to strengthen the skills and potential of the Bank of China team in terms of risk management strategies and overall corporate efficiency. As a result, the Bank of China staff require extensive treasury management training from the Smart Indonesia Academy. The goal of this study is to examine the Smart Indonesia Academy program, which delivers treasury management training to the Bank of China team in the context of community empowerment.
LITERATURE REVIEW
Community empowerment
Empowerment is a process of maturation, independence, and strengthening the lower class's bargaining position against oppressive forces in all spheres and sectors of life (Alwi et al., 2016). "Empowerment of communities" is a process that tries to maximize potential and resolve a variety of issues that arise in a society (Y Winoto, 2019). Training therefore becomes a tool for empowering the community, particularly in terms of increasing team awareness and being a vehicle for achieving business objectives (Sadu, 1998).
Treasury Management
Local banks need fresh management techniques in today's economic context. Bank management mechanisms have undergone substantial modifications in recent years, but the approach for evaluating their success has remained the same (Kostyuchenko et al., 2020). Treasury management is one such approach: a dynamic, continually growing discipline that strives for optimum efficiency and streamlines all treasury functions in banking (P. Polak et al., 2018). Treasury management can also refer to the person or group within a corporation charged with the duty and tasks of ensuring the firm's liquidity (Afriani, 2020).
METHOD
In this qualitative research, an analytical descriptive method is used to collect, analyze, and interpret the data (Sugiyono, 2016). The researchers used primary data, namely interviews with the Bank of China team and Smart Indonesia Academy, and secondary data such as journal articles, books, and news articles. The object of this research is the training provided by Smart Indonesia Academy to the Bank of China, and the Bank of China team serves as the main instrument of this qualitative research. The researchers used a three-stage technique for data collection: orientation, selection, and identification. The data were analyzed using the Miles and Huberman model, in which qualitative data analysis consists of data reduction, data display, and conclusion drawing (Miles & Saldana, 2014).
RESULTS AND DISCUSSION
This community service activity was carried out online by the Smart Indonesia Academy on March 30, 2022, and was attended by the Bank of China team. The activity began with opening remarks from the executive and director of Smart Indonesia Academy (SIA) and continued with the main activity, the presentation of the material.
The material, titled "An Introduction to Treasury Management", was presented to 15 Bank of China staff and was written in compliance with the CIPFA TM Code of Practice for Treasury Management. Treasury management covers an organization's investments, cash flow management, banking, money market, and capital market activities. The Director of SIA said that good risk management is important to ensure that the company can perform at its best in the face of these risks. The presentation covered the Local Government (Scotland) Act 1975, from which the Bank of China team learned about the authority to borrow, permitted sources of borrowing, the ability to lend to other authorities, the ability to loan funds, and the authority to establish funds. Additionally, the SIA Director summarized the Financial Circular to Scotland as follows (Smart Indonesia Academy, 2022a):
- Scottish Ministerial approval is required for local authorities to invest money.
- Authorities must comply with the provisions set out in the circular.
- Investment properties are included in the local authority's investment portfolio.
- Any loan to a third party is an investment, except for loans to other authorities that are part of the Common Good under s.40 of the 2003 Act.
- Pay attention to the TM Code of Practice and the Prudential Code.
- Only make investments that are defined as permitted investments.
- Identify which investments are allowed in the coming financial year.
- Limit the amount that can be invested in each type of permitted investment.
- State the purpose of each type of investment.
The SIA Director explained why the CIPFA Treasury Management Code should be understood by the Bank of China team, citing, among other reasons: high-profile losses from investing with failed banks in the 1990s; declining trust between municipal financial institutions and local authorities; increasing inappropriate risk exposure; the need to maintain high and consistent standards in safeguarding public funds and debt; increasing cash balances held by local authorities; and new investment instruments. In addition, the CIPFA Treasury Management Code has three main principles: formal and comprehensive objectives, policies, practices, strategies, and reporting arrangements for the effective management and control of TM activities; risk control covering security, liquidity, and yield; and value for money in the context of effective risk management (Smart Indonesia Academy, 2022b).
The Director of SIA explained that the Prudential Code indicators are (Smart Indonesia Academy, 2022b):
- reviewed at the end of the year;
- revised as needed, following the proper process;
- set for the coming year and the next two years;
- approved through the same process as the budget.
The Director of SIA then explained the factors that affect financial market results. Gap management is frequently emphasized as a vital component of financial institution management; it is defined as the endeavor to manage and control the difference between assets and liabilities over a certain time period, where the difference may be measured in terms of available cash, interest rates, or maturity. The objectives of gap management are (Smart Indonesia Academy, 2022b):
- to minimize the possibility of losses from changes in interest rates (the repricing structure);
- to strive for maximum income at a given level of risk;
- to support liquidity management;
- to develop a balance-sheet structure that can improve performance.
The presentation discussed why it is critical for the Bank of China team to understand and practice gap management. The Bank of China is also expected to focus on several dimensions of gap management (Smart Indonesia Academy, 2022b):
- Period (maturity): differences in the timing of each asset and liability component.
- Repricing: the length of time for which interest rates on asset/loan components and liability components are fixed, both before and after maturity.
- Interest rates: the level of the interest rate or price set for the asset and liability sides.
- Speed of adjustment: how quickly assets and liabilities can be adjusted when interest rates change, so that the position remains profitable.
Alongside gap management, the Director of SIA discussed in the presentation how management sets the interest rates on the bank's products, on both the asset and liability sides; this is often referred to as "pricing management". One policy that the Bank of China may consider is setting the lending rate, i.e., the interest rate on loans supplied to clients, so that it covers all of the costs of the loan and yields an adequate return.
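As a simple numerical illustration of the repricing-gap logic discussed above (not taken from the training material, and with purely hypothetical balances), the first-order effect of a parallel interest-rate shift on net interest income in each maturity bucket is the bucket's gap multiplied by the rate change:

```python
# Illustrative repricing-gap arithmetic; all balances are hypothetical.
# gap = rate-sensitive assets - rate-sensitive liabilities per bucket;
# change in net interest income from a parallel shift ~ gap * delta_r.
buckets = ["0-3m", "3-6m", "6-12m"]
rsa = {"0-3m": 120.0, "3-6m": 80.0, "6-12m": 60.0}  # rate-sensitive assets
rsl = {"0-3m": 150.0, "3-6m": 70.0, "6-12m": 40.0}  # rate-sensitive liabilities
delta_r = 0.01  # +100 basis point parallel shift

cumulative = 0.0
for b in buckets:
    gap = rsa[b] - rsl[b]
    cumulative += gap
    print(f"{b}: gap = {gap:+.1f}, cumulative gap = {cumulative:+.1f}, "
          f"dNII ~ {gap * delta_r:+.3f}")
```

A negative gap in the shortest bucket, as in this example, means a rate rise would compress near-term net interest income, which is exactly the exposure that gap management seeks to monitor and control.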
Community service combined with this training may aid in developing knowledge of treasury management through its definition, a complete end-to-end analysis, and the design and implementation of a treasury management plan by the Bank of China team. The Smart Indonesia Academy thus serves the community through comprehensive treasury management training and supports the Bank of China team with practical management ideas. The Bank of China team has also agreed that community service via SIA's treasury management training is efficient, effective, and beneficial for increasing the team's knowledge and capabilities. Additionally, the team believes that similar community service initiatives may be sustained with a variety of other themes or issues.
CONCLUSION
A community service activity, which included this training, was organized on March 30, 2022, with a total of 15 participants. According to the trainees, the training helped develop their knowledge of treasury management through its definition, a comprehensive end-to-end analysis, and guidance on how a treasury management plan should be established, as well as the goals that the Bank of China must fulfill. Smart Indonesia Academy thus empowers the community via comprehensive treasury management training and provides practical management solutions to the Bank of China team. Additionally, the Bank of China team acknowledged that community service via SIA's treasury management training is efficient, effective, and helpful in expanding the team's knowledge and skills, and the team believes that these kinds of community service activities may be sustained by including a variety of other themes or subjects. | 2023-05-26T15:06:51.334Z | 2022-05-11T00:00:00.000 | {
"year": 2022,
"sha1": "8077cd475f0b37bb756e76bbd22822684cbe7dce",
"oa_license": "CCBY",
"oa_url": "https://journal-nusantara.com/index.php/Joong-Ki/article/download/379/305",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0689759ee85d6e9716532766a2df0dbe51810fa2",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
255067433 | pes2o/s2orc | v3-fos-license | Deconstructing Gender Differences in Experienced Well-Being Among Older Adults in the Developing World: The Roles of Time Use and Activity-Specific Affective Experiences
Due to declining fertility rates and increasing longevity, the world is growing older. Improving the quality of life of older adults, and not merely preventing deaths, is thus becoming an important objective of public policies. It is, therefore, urgent to understand the key dimensions of older adults’ subjective well-being as well as their main drivers. Women represent a large proportion of the older population, and existing evidence suggests that they may be particularly vulnerable, especially in the developing world. Analyzing potential gender differences in experienced well-being in older adults is hence crucial. We exploit information on time use and activity-specific emotional experiences from the abbreviated version of the day reconstruction method contained in the WHO Study on Global Ageing and Adult Health (SAGE), focusing on five developing countries. We first quantify gender differences in experienced well-being among older adults, which we then deconstruct into corresponding differences in time use and activity-specific net affects. Adjusting for age only, our results indicate a gender gap in experienced well-being in favor of men. Yet, adjusting for additional individual characteristics and life circumstances beyond age weakens this association. Illustrative counterfactual analyses further suggest that gender differences in activity-specific net affects appear more important than differences in time use for explaining the disadvantage of older women. Our results suggest that women’s lower affect in most activities is linked to the conditions under which these activities are performed, and in particular to the higher level of disability of older women compared to men of the same age.
Introduction
Subjective well-being is increasingly recognized as an indispensable complement to traditional indicators of economic performance and human development, such as Gross Domestic Product (GDP) or Human Development Indices (HDI), to comprehensively assess and track the welfare of societies as a whole as well as the well-being of different population groups (Stiglitz et al. 2009; Dolan et al. 2008). Besides striving to increase performance in terms of health, economic outcomes, and education, governments should therefore also take into consideration the impact of institutions and policies on individuals' subjective well-being. Arguably, policy-makers should at least in part be guided by the priorities of citizens themselves, and optimizing the subjective well-being of the population should thus constitute a meaningful policy objective in itself. Moreover, several studies suggest that subjective well-being may also influence more objectively measurable life circumstances, such as productivity and social behavior, as well as individuals' health and longevity (e.g., De Neve et al. 2013; Diener et al. 2017), which further highlights the importance of subjective well-being as a key goal of health, social, and economic policies. Support for using subjective well-being assessments as policy indicators is based on growing evidence that self-reports of subjective well-being are a valid way to measure individual welfare and happiness. For example, several neuroimaging studies have shown that subjective well-being reports are closely related to multiple cognitive-emotional brain regions (e.g., Luo et al. 2014; Sato et al. 2015; Ren et al. 2019).
The age structure of the world population is changing due to declining fertility rates and increasing longevity. Worldwide, the share of the population over the age of 65 years has increased from 6 to 9 percent between 1990 and 2019 and is projected to increase to 16 percent by 2050, reaching a total of 2 billion individuals in this age group (United Nations 2019a). Understanding the drivers of subjective well-being in older adults, and thus the potential impacts that global aging will have on the overall well-being of society, is therefore essential (National Research Council 2013). In line with this new demographic reality, global institutions are increasingly acknowledging well-being in old age as an important issue: the 2030 Agenda for Sustainable Development states the promotion of well-being at all ages as one of its goals (United Nations 2019b), and the World Health Organization (WHO) defines Healthy Ageing as "the process of developing and maintaining the functional ability that enables well-being in older age" (World Health Organization 2015). Yet, much of the academic discussion regarding the subjective well-being of older adults focuses on what is known from experiences in high-income countries, while little is known about the situation in low- and middle-income countries, where 80 percent of the world's elderly will be living by 2050 (Shetty 2012).
Further examination of the current demographic situation reveals that the older population is predominantly female and is likely to remain so in the foreseeable future. On average, during the period 2010-2015, women outlived men by about 4.5 years. In 2017, women thus represented 54 percent of those aged 60 years and above and 61 percent of those aged over 80 (United Nations 2019c). However, although women may generally expect to live longer than men, there is evidence that they experience higher morbidity than men of the same age (e.g., Verbrugge 1985; Denton et al. 2004; Case and Paxson 2005). In addition, women's lower participation in the paid workforce throughout their lives [1] bears negative consequences in older age, such as lower access to pensions and other economic resources resulting in greater poverty, lower access to healthcare and social care services, and higher risk of abuse (World Health Organization 2015). Finally, compared to men, women of all ages tend to spend more time on non-leisure activities such as unpaid housework (Miranda 2011). As highlighted in the 2012 World Development Report on Gender Equality and Development (World Bank 2012), in most countries, women allocate between one and three more hours per day to housework as compared to men, spend two to ten times more hours on care-related activities, and up to four hours less on market activities. Several studies further analyze how discrepancies in the gender division of labor evolve over the life course. For example, the World Development Report (World Bank 2012) notes that the above patterns are often accentuated after marriage and childbearing but diminish with older age. While there are clear gender differences in time use, there is little evidence to date regarding the impact of these differences in time use on subjective well-being, especially on how men and women feel as they live their daily lives (experienced well-being).
Subjective well-being is a multifaceted concept, which is commonly divided into two constituent components: evaluative and emotional well-being (National Research Council 2013; OECD 2013) [2]. Measures of evaluative well-being on the one hand are more commonly available for analysis and are based on respondents' cognitive evaluations of their own life, often using questions such as "how would you rate your life overall these days". Measures of emotional well-being, on the other hand, aim to capture respondents' affective experiences as they live their lives, such as feeling calm, relaxed, worried, stressed or angry. Boarini et al. (2012), among others, argue that measures of evaluative and emotional well-being are both conceptually and empirically distinct: while life satisfaction seems to be more closely related to cognitive judgements of how individuals evaluate their own life and how they compare it to that of others, affective experiences seem to be strongly influenced by time use. In addition, Kahneman and Krueger (2006) claim that measures of emotional well-being may be less subject to individual reporting biases compared to measures of evaluative well-being. Similarly, Kahneman and Riis (2005) argue that measures of emotional well-being may be less influenced by cultural disposition, self-conceptualization, memory and introspection. Moreover, they emphasize that emotional well-being may be a more important determinant of future health due, for example, to the cumulative effects of stress. Consistent with this idea that the quality of people's daily experiences is linked to health outcomes, several authors show that emotional well-being is a strong predictor of mortality (e.g., Carstensen et al. 2011; Steptoe and Wardle 2012). Specifically, experienced well-being is a duration-weighted measure of emotional experiences as people live their everyday lives. It thus records and aggregates emotional experiences through time to obtain a measure of individual emotional well-being. This approach to measuring well-being also has a long-standing tradition in economics: in 1881, Edgeworth proposed to record utility as the quality of experiences at every instant in order to create what he called a "hedonimeter" (Edgeworth 1881). As formulated by McFadden (2005), Edgeworth "envisioned the level of happiness associated with an experience as the integral of the intensity of pleasure over the duration of the event" (p. 3).
[1] Women represented 86 percent of individuals out of the labor force and 60 percent of unpaid workers in 2018 (World Bank Group 2018).
[2] Occasionally, researchers further explicitly distinguish a third dimension of subjective well-being, eudaimonic well-being, which, however, shares many characteristics with evaluative well-being and is, therefore, also often subsumed in the broader category of evaluative well-being (OECD 2013).
Our study investigates gender differences in experienced well-being as a conceptualization of emotional well-being. Most of the literature on gender differences in subjective well-being focuses on evaluative well-being, while studies examining gender differences in emotional well-being are scarcer (see, for example, Batz and Tay (2018) for a review). As a consequence, little is known regarding gender differences in overall experienced well-being, especially in the developing world, and our paper therefore attempts to bring new evidence to this literature. Specifically, we explore the roles of time use and activity-specific affective experiences for these gender differences in experienced well-being using data on older adults from five developing countries. Our analysis exploits detailed data from an abbreviated version of the Day Reconstruction Method (DRM) that was administered in the first (and currently only available) wave (2007-2010) of the World Health Organization's (WHO) Study on Global Aging and Adult Health (SAGE). The inclusion of DRM data in SAGE offers a unique opportunity to deconstruct experienced well-being into the two components already put forward in the context of Edgeworth's hedonimeter: time use and affective experiences during each activity. To the best of our knowledge, no other survey provides harmonized DRM data in multiple countries, especially no aging survey in low- or middle-income countries. While we do not aim to conduct detailed cross-country comparisons in this paper, we use the multiple analyses of data from different study sites as an opportunity to validate our findings across different cultural and geographic settings from around the world.
To further deconstruct any gender differences in experienced well-being, we separately analyze potential gender differences in time-use on the one hand, and activity-specific net affects on the other. These analyses allow us to evaluate the relative importance of each of the two constituent parts of experienced well-being. We then isolate the relative contributions of gender differences in time-use ("time composition effects") and gender differences in activity-specific affective experiences ("saddening effects"), by comparing hypothetical levels of experienced well-being based on counterfactual thought experiments which eliminate existing gender differences in activity-specific net affects and gender differences in time use, respectively. These thought experiments are analogous to those reported in Flores et al. (2015) for analyzing differences in experienced well-being between older adults with and without disabilities, and are similar in spirit to analyses by Knabe et al. (2010), who compare the experienced well-being of employed and unemployed individuals in Germany.
Our study contributes to the existing evidence in several ways. Building on previous research on gender differences in various dimensions of subjective well-being among older adults in low- and middle-income countries (Kieny et al. 2020b), we provide a detailed analysis of gender differences in experienced well-being that isolates the relative roles of gender differences in time use vs. gender differences in activity-specific affective experiences to account for any differences in experienced well-being between men and women. Discussions on gender differences in subjective well-being in academia and policy often highlight gender differences in time use and "time poverty" (e.g., Wodon and Blackden 2006; Walker 2013; Sweet and Kanaroglou 2016), especially related to the generally female double burden of performing both professional work and housework. Furthermore, we use two different statistical models to assess any gender differences in experienced well-being and deconstruct these differences into corresponding gender differences in time use and activity-specific net affects. Specifically, we perform our analyses adjusting for age only (age-adjusted models) before moving to richer statistical models that also adjust for a larger set of covariates related to respondents' socio-demographic characteristics, health status and community environment beyond age (fully-adjusted models). Comparing and contrasting gender differences based on age-adjusted and fully-adjusted models allows us to first assess age-adjusted sub-population differences in experienced well-being between men and women along with their sources in terms of corresponding gender differences in time use and activity-specific net affects, which may be most relevant for an overall assessment of gender inequalities in experienced well-being. Moving to the fully-adjusted models, in turn, allows us to be more specific about the potential roles of gender per se vs. gender differences in general life circumstances for resulting gender differences in experienced well-being, time use and activity-specific net affects. These analyses may provide important insights on the mechanisms underlying these gender inequalities and suggest avenues for potential policy levers and interventions aimed at closing the subjective well-being gap. Our approach is thereby inspired by mediation-type analyses commonly used to explore different mechanisms linking a specific outcome of interest (here: experienced well-being) with an independent variable of special interest (here: gender) and allows us to side-step the long-standing debate regarding the use of control variables in research on subjective well-being by performing our analyses in both ways, i.e., without and with a comprehensive set of control variables (Blanchflower and Oswald 2008; Glenn 2009; Blanchflower and Oswald 2009).
Data
We use data from the first wave of the World Health Organization's Study on Global Ageing and Adult Health (SAGE), collected between 2007 and 2010. SAGE is an internationally harmonized survey on aging in low- and middle-income settings, whose data collection activities are mostly focused on adults aged 50 and older. As SAGE only collected data from relatively small comparison samples of younger adults aged 18 to 49 years old, we focus our analysis on the relationship between gender and experienced well-being among adults aged 50 and older, which represented the main target population of SAGE. While SAGE data is collected in six low- and middle-income countries (China, Ghana, India, Mexico, the Russian Federation and South Africa), we exclude Mexico (2070 observations) from our analysis because close to 50 percent of the Mexican sample has missing information on our outcomes of interest from the well-being module due to incomplete interviews. Using SAGE data enables us to perform parallel analyses for countries in different regions of the world and across different cultural contexts based on fully harmonized data. Such parallel analyses allow us to determine whether our results are robust across multiple settings and, therefore, whether they represent a general pattern rather than some country-specific idiosyncratic associations due to specific cultural contexts or location. SAGE contains individual- and household-level data, including information about respondents' socio-demographic characteristics, their social environment, health and healthcare use, and well-being. A key asset of SAGE consists in the inclusion of comprehensive assessments of emotional well-being. Notably, the administration of an abbreviated DRM instrument to measure experienced well-being generates a combination of time-use data with corresponding reports of individuals' affective experiences during the reported activities. In the abbreviated DRM instrument of SAGE, individuals are randomly allocated into one of four groups, which are in turn asked to report on their time use and affective experiences over the course of the previous morning, afternoon, evening or entire day, respectively. We drop from our sample the 7649 individuals that were randomly assigned to the full-day group, as those respondents do not report the detailed time diary data and corresponding activity-specific affects needed to construct our measure of experienced well-being. Finally, we eliminate 1660 additional observations with missing information on any of the covariates used in the analysis. Following the above sample selection procedures, our final sample consists of 21,488 respondents, including 9106 observations from China, 3031 from Ghana, 4833 from India, 2513 from Russia, and 2005 from South Africa.
Experienced Well-Being
We use experienced well-being as a measure of emotional well-being. In order to construct this measure, we combine the data on time use and activity-specific affects provided by SAGE's abbreviated DRM instrument. Individuals from each of the three randomly assigned DRM groups (the "morning", "afternoon", and "evening" groups) are asked to report their time use during their respective parts of the previous day, i.e., starting after waking up for the morning group, at noon for the afternoon group, or at 6 pm for the evening group. Respondents report what they have been doing based on a list of 22 different activities. They then report how much time they spent on each specific activity and with whom they interacted during the activity. The abbreviated DRM module thereby elicits information for up to ten successive activities or until the interview time reaches 15 min for the DRM section. For each activity, respondents provide further information on the prevalence and intensity of two positive emotions (feeling calm or relaxed, and feeling enjoyment) and five negative emotions (feeling worried, rushed, irritated or angry, depressed, tense or stressed). The intensity of each positive and negative affect during an activity is measured on a three-point scale. We aggregate these reported intensities of positive and negative affective experiences into a single measure of "net affect". We simultaneously use the data from all three DRM groups in our estimations in order to ensure that our estimates of time use and corresponding affective experiences represent those of the entire day in the target population of individuals aged 50 years and older (Miret et al. 2012).
The large number of activities included in the activity list implies that some of the 22 activities are reported with rather low frequencies. To address the issue of infrequent activities and to facilitate statistical estimation, we follow previous research (Flores et al. 2015; Kieny et al. 2020a, b) and aggregate the 22 activities into five broader activity groups: work, housework, travel, leisure, and self-care. This reclassification of activities aims at striking a balance between grouping activities into relatively intuitive and easily interpretable categories while avoiding small prevalence rates for very specific and infrequent activity groups that would be challenging to integrate into our econometric framework. Note that we refer to "work" exclusively as paid work and subsistence farming, excluding any (unpaid) household-related tasks, which are part of the separate category "housework".
We define experienced well-being based on Kahneman's definition as the "integral of the stream of pleasures and pains associated with events over time" (Kahneman et al. 2004). Formally, experienced well-being can be represented as the duration-weighted sum of net affects for all activities performed during the day, that is:

$$U_i = \sum_{a} \theta_{ia}\, u_{ia} \quad (1)$$

where $u_{ia}$ represents individual $i$'s net affect during activity $a$, and $\theta_{ia} = T_{ia}/T_i$ represents the share of non-sleeping time that individual $i$ spends on activity $a$: $T_{ia}$, the duration of activity $a$, over $T_i$, the total time covered by the up to 10 successive activities that the respondent reported during her assigned time window.
Following Kahneman and Krueger's definition of net affect (Kahneman and Krueger 2006), individual $i$'s net affect during activity $a$ is defined as:

$$u_{ia} = \sum_{s} h_{is} \left( \frac{1}{L} \sum_{l=1}^{L} PA_{lis} - \frac{1}{K} \sum_{k=1}^{K} NA_{kis} \right) \quad (2)$$

where $PA_{lis}$ is the $l$'th positive affect and $NA_{kis}$ is the $k$'th negative affect that person $i$ reports for each spell $s$ of possibly multiple reports of activity $a$. We control for multiple occurrences of the same activity by taking the time-weighted average of positive and negative affect scores. The weight $h_{is}$ is defined as:

$$h_{is} = \frac{t_{is}}{T_{ia}} \quad (3)$$

where $t_{is}$ is the duration of one specific occurrence of activity $a$ and $T_{ia}$ is the total time spent on activity $a$ during the assigned period of time. In other words, the net affect of activity $a$ is the weighted sum of positive and negative affects experienced during the different occurrences of activity $a$ over the assigned time period, whereby the weights correspond to the relative time share of each specific occurrence of activity $a$ in the total time spent on that activity during the reporting period.
To be able to more clearly highlight activities that generally result in above- or below-average affective experiences, and to facilitate the comparative interpretation of our estimated coefficients across countries as multiples of the standard deviations of the country-specific distribution of unstandardized experienced well-being, we standardize our measure of activity-specific net affect as follows:

$$\tilde{u}_{ia} = \frac{u_{ia} - \mu_U}{\sigma_U} \quad (4)$$

where $\mu_U$ represents the mean and $\sigma_U$ the standard deviation of the country-specific distribution of $u_{ia}$. $\tilde{u}_{ia}$ therefore represents the utility that individual $i$ derives from activity $a$ over the randomly assigned time period, relative to the overall experienced well-being of all individuals across all activities in that country. This standardization allows a more straightforward interpretation, as the sign of $\tilde{u}_{ia}$ indicates whether the net affect associated with activity $a$ is above or below the mean net affect across all activities in the respective country under consideration. Finally, we construct a standardized version of overall experienced well-being as follows:

$$\tilde{U}_i = \sum_{a} \theta_{ia}\, \tilde{u}_{ia} \quad (5)$$

which measures the average experienced well-being of individual $i$ over her assigned time period, standardized based on the overall distribution of experienced well-being of all individuals from the same country. This standardized measure represents the main outcome of interest for our overall analyses of gender differences in experienced well-being. This final standardization ensures that our estimates of the gender (and other) coefficients can be interpreted in standard-deviation units, i.e., relative to the country-specific distribution of unstandardized experienced well-being, which may enhance the comparability of our estimates across countries, especially if we suspect the unstandardized distributions of experienced well-being across countries to differ for reasons that are unrelated to actual experienced well-being, such as issues of survey design or country-specific response scales.
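For concreteness, a minimal sketch of this construction is given below, assuming a tidy spell-level DRM table. The file and column names are hypothetical, and the population weights used in the paper are omitted for brevity.

```python
# Sketch of Eqs. (1)-(5): one input row per activity spell, with the spell's
# duration and its average positive and negative affect scores.
import pandas as pd

spells = pd.read_csv("drm_spells.csv")
spells["net_affect"] = spells["pos_affect"] - spells["neg_affect"]

keys = ["country", "id", "activity"]

# u_ia: duration-weighted net affect per respondent and activity (Eqs. 2-3)
u = spells.groupby(keys).apply(
    lambda g: (g["net_affect"] * g["duration"]).sum() / g["duration"].sum()
).rename("u_ia")

# theta_ia: share of total reported time spent on each activity
t = spells.groupby(keys)["duration"].sum()
theta = (t / t.groupby(["country", "id"]).transform("sum")).rename("theta_ia")

df = pd.concat([u, theta], axis=1).reset_index()

# standardize u_ia within country (Eq. 4), then aggregate to Eq. (5)
g = df.groupby("country")["u_ia"]
df["u_tilde"] = (df["u_ia"] - g.transform("mean")) / g.transform("std")
U_tilde = df.groupby(["country", "id"]).apply(
    lambda grp: (grp["theta_ia"] * grp["u_tilde"]).sum()).rename("U_tilde")
```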
Explanatory Variables
The age-adjusted analysis controls only for age in addition to gender, while the fully-adjusted analysis also includes control variables related to respondents' socio-demographic and economic status, as well as health status and social cohesion, which could be correlated with both gender and experienced well-being. The inclusion of control variables allows us to identify and quantify potential channels underlying any potential subpopulation differences in experienced well-being between men and women. The socio-demographic and economic control variables include age, marital status, household composition (number of adults and children living in the household), whether respondents live in an urban or rural area, years of education and employment status. These variables are significantly correlated with gender in at least a subset of study countries (Table 1), and we hypothesize that they may also influence experienced well-being and thus represent potential mechanisms or confounders in the relationship between gender and experienced well-being. Although we cannot control directly for individual resources, we use household income quartiles based on SAGE's permanent income variable as a proxy for the living standards of individual household members. Moreover, in order to account for potential differences in the within-household income distribution, we also include an individual-level explanatory variable indicating whether respondents report having enough money to meet their own needs. The health variables include a WHO disability index and a measure of self-assessed pain. Specifically, we use the 12-item version of the WHO Disability Assessment Schedule (WHODAS 2.0), an index which captures different aspects of disability, following the definition of the International Classification of Functioning, Disability and Health (World Health Organization 2001). The WHODAS 2.0 concentrates on six life domains: cognition, mobility, self-care, getting along, life activities, and participation. Self-assessed pain measures the degree of pain or bodily discomfort that the respondent reported experiencing during the previous month, and whether this pain induced difficulties in everyday life. Finally, we use community involvement, trust in others, perceived safety in the neighborhood and having been a victim of a violent crime in the last 12 months as measures of social cohesion. Community involvement measures the level of participation in social activities, while trust in others measures how much the individual has confidence in different groups of people, such as co-workers, neighbors or strangers.
Experienced Well-Being
To assess whether there is an age-adjusted gender gap in experienced well-being, we first regress our standardized measure of experienced well-being $\tilde{U}_i$ on a dummy for gender, including only age as an additional control variable. We thus estimate the age-adjusted overall experienced well-being gap as follows:

$$\tilde{U}_i = \alpha + \beta\, Female_i + \gamma\, Age_i + \varepsilon_i \quad (6)$$

In a second step, we explore how the partial association between gender and overall experienced well-being changes once we control for additional measures of individuals' life circumstances. We therefore perform the same regression, but this time including an expanded set of control variables $X_i$ (which includes age) into the model, estimating the fully-adjusted gender gap in experienced well-being as follows:

$$\tilde{U}_i = \alpha + \beta\, Female_i + X_i'\delta + \varepsilon_i \quad (7)$$

We estimate these two regressions using OLS with sample weights to ensure the correct estimation of the corresponding conditional means of experienced well-being across population groups (Solon et al. 2013).
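A sketch of these two regressions as weighted least squares in statsmodels might look as follows; all variable names (including the weight variable and the controls) are hypothetical stand-ins for the SAGE variables.

```python
# Weighted least squares versions of Eqs. (6) and (7), run per country.
import statsmodels.formula.api as smf

age_adj = smf.wls("U_tilde ~ female + age", data=df,
                  weights=df["pweight"]).fit()

controls = ("+ married + n_adults + n_children + urban + educ_years + working"
            " + C(income_q) + enough_money + whodas + pain + trust + safety"
            " + community_involvement")
fully_adj = smf.wls("U_tilde ~ female + age " + controls, data=df,
                    weights=df["pweight"]).fit()

print(age_adj.params["female"], fully_adj.params["female"])
```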
In order to further deconstruct any gender differences in experienced well-being, we also analyze the two components of experienced well-being, time use and activity-specific net affect, separately.
Time Use
We estimate the partial association between gender and time use using weighted multivariate fractional logit models, which impose that the estimated time shares fall in the 0-1 interval ($\theta_{ia} \in [0, 1]$) and sum to one ($\sum_{a=1}^{5} \theta_{ia} = 1$). We start by evaluating whether there are any differences in the way men and women spend their time, adjusting for age alone (age-adjusted models). Following Mullahy (2015), we use a multinomial logit functional form such that:

$$E[\theta_{ia} \mid Female_i, Age_i] = \frac{\exp(\alpha_a + \beta_a\, Female_i + \gamma_a\, Age_i)}{\sum_{a'=1}^{5} \exp(\alpha_{a'} + \beta_{a'}\, Female_i + \gamma_{a'}\, Age_i)} \quad (8)$$

where we impose $\alpha_5 = \beta_5 = \gamma_5 = 0$ as a normalization for identification (Cameron and Trivedi 2005).
We then repeat this analysis, this time incorporating the whole set of control variables into the model, to assess how men's and women's time use would differ if they had otherwise comparable life circumstances.
We estimate the above equations using quasi-maximum likelihood. However, the empirical distribution of a vector of shares conditional on a set of control variables may suffer from underdispersion (Mullahy 2015). Consequently, the quasi-maximum likelihood procedure may not yield consistent estimates of the covariance matrix. To address these issues, we use a bootstrapping procedure with 250 repetitions to estimate our standard errors.
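Because no packaged fractional multinomial logit estimator is available in common Python libraries, a minimal quasi-maximum-likelihood sketch is shown below; the design matrix, share matrix, and weights are assumed inputs, and the paper's 250 bootstrap replications would simply refit this estimator on resampled data.

```python
# Quasi-ML for the multinomial fractional logit (Mullahy 2015). X: n x k
# design matrix (constant, female, age, ...); S: n x 5 matrix of time shares
# with rows summing to 1; w: sample weights as a numpy array.
import numpy as np
from scipy.optimize import minimize

def neg_qll(beta_flat, X, S, w):
    n, k = X.shape
    A = S.shape[1]
    B = np.column_stack([beta_flat.reshape(k, A - 1), np.zeros(k)])
    eta = X @ B                            # base category coefficients fixed at 0
    eta -= eta.max(axis=1, keepdims=True)  # for numerical stability
    P = np.exp(eta)
    P /= P.sum(axis=1, keepdims=True)      # predicted shares sum to 1 by design
    return -(w[:, None] * S * np.log(P)).sum()

def fit_fmlogit(X, S, w):
    k, A = X.shape[1], S.shape[1]
    res = minimize(neg_qll, np.zeros(k * (A - 1)), args=(X, S, w), method="BFGS")
    return np.column_stack([res.x.reshape(k, A - 1), np.zeros(k)])
```

Average partial effects of female on each time share can then be obtained by contrasting the predicted shares with the female indicator set to one versus zero and averaging over the sample.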
Activity-Specific Net Affect
In order to assess whether men and women experience activities differently, we first estimate the age-adjusted partial associations between gender and activity-specific net affect. We use a weighted linear regression of the form:

$$\tilde{u}_{ia} = \alpha_a + \beta_a\, Female_i + \gamma_a\, Age_i + \varepsilon_{ia} \quad (12)$$

estimated on the sample that reported activity $a$, using sample weights as described earlier.
We then evaluate gender differences in activity-specific affective experiences conditional on life circumstances in a similar fashion, within the context of a fully-adjusted model that incorporates our expanded set of covariates $X_i$, that is:

$$\tilde{u}_{ia} = \alpha_a + \beta_a\, Female_i + X_i'\delta_a + \varepsilon_{ia} \quad (13)$$

It is worth emphasizing that we do not account for potential selection into activities, which precludes a causal interpretation.
Time Use vs. Activity-Specific Affective Experiences
Finally, we combine the results from the two separate analyses of gender differences in time use and gender differences in affective experiences in order to assess the relative importance of these differences in time use vs. activity-specific affective experiences to account for the overall gender differences in experienced well-being. Our thought experiments for deconstructing the gender differences in experienced well-being are similar to those of Flores et al. (2015), who assessed the role of disability for experienced well-being. As with the above regression analyses, we perform these counterfactual thought experiments twice, once controlling only for age and a second time using our full set of control variables. These analyses help to deconstruct the raw gender differences in experienced well-being (adjusting for age only) as well as to assess the relative contributions of gender differences in time use vs. activity-specific net affects for gender differences in experienced well-being once other differences in life circumstances are also accounted for.
To isolate the contribution of differences in time use, we estimate a so-called time composition effect as:

$$TCE = \sum_{a=1}^{5} \bar{\tilde{u}}_a\, \Delta\theta_a \quad (14)$$

where $\bar{\tilde{u}}_a$ represents the average net affect during activity $a$ and $\Delta\theta_a = \partial\theta_a / \partial Female$ are the partial effects of female on the proportion of time spent in activity $a$, as calculated in Eqs. (8) and (9) (for the age-adjusted model) or (10) and (11) (for the fully-adjusted model). The time composition effect describes how men's and women's experienced well-being would differ if both genders had the same activity-specific affective experiences (activity-specific net affect being set at the overall country average, irrespective of gender) while their time use continued to differ by gender. In other words, would men or women have higher experienced well-being if everyone experienced all activities in the same way, but gender differences in time use remained as observed in the data?
To isolate the contribution of gender differences in affective experiences, we estimate the so-called saddening effect as:

$$SE = \sum_{a=1}^{5} \bar{\theta}_a\, \Delta\tilde{u}_a \quad (15)$$

where $\Delta\tilde{u}_a = \partial\tilde{u}_a / \partial Female$ represent the partial effects of female on the activity-specific net affect of activity $a$, as calculated in Eqs. (12) and (13) for the age-adjusted and fully-adjusted regressions, respectively. The saddening effect describes how men's and women's experienced well-being would differ if both genders did not differ in their activity patterns (time use being set at the overall country average, irrespective of gender), but their activity-specific net affect remained gender-specific. In other words, would men or women have higher experienced well-being should both genders spend their day in exactly the same way while still having gender-specific affective experiences associated with these activities?
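Once the partial effects have been estimated, the two counterfactuals reduce to simple weighted sums; the sketch below illustrates the arithmetic with purely hypothetical inputs for a single country and model.

```python
# Counterfactual deconstruction of Eqs. (14)-(15). theta_bar and u_bar are
# country-average time shares and standardized net affects; d_theta and d_u
# are the estimated partial effects of female. All values are hypothetical.
import numpy as np

activities = ["work", "housework", "travel", "leisure", "self-care"]
theta_bar = np.array([0.20, 0.25, 0.05, 0.35, 0.15])     # average time shares
u_bar     = np.array([-0.9, -0.5, -0.4, 0.5, 0.4])       # average net affects
d_theta   = np.array([-0.10, 0.12, -0.03, -0.01, 0.02])  # female effect on shares
d_u       = np.array([-0.05, -0.20, -0.04, -0.10, -0.02])  # female effect on affects

time_composition = (u_bar * d_theta).sum()  # Eq. (14): common affects, gendered time
saddening        = (theta_bar * d_u).sum()  # Eq. (15): common time, gendered affects
print(f"time composition effect: {time_composition:+.3f}")
print(f"saddening effect:        {saddening:+.3f}")
```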
Although our analysis is broadly comparable to the decomposition analysis performed by Knabe et al. (2010) to study well-being differences between employed and unemployed individuals in Germany, our approach differs in two important ways. Firstly, their estimations of the saddening and time composition effects are based on unconditional group differences, while we control either for age alone or for a large set of control variables. Secondly, Knabe et al. (2010) define the time composition effect as a residual effect obtained by subtracting the saddening effect from the overall differences in experienced well-being between the two groups under consideration, while we define the two effects symmetrically, even if this implies that the two effects do not add up to the overall group differences in experienced well-being due to an omitted interaction term.
Finally, while our deconstruction of the gender differences in experienced well-being bears some similarities with other econometric decomposition techniques, such as the Oaxaca-Blinder decomposition, our aim is fundamentally different. Oaxaca-Blinder decompositions generally examine how unconditional mean differences in an outcome across groups may be attributed to group differences in explanatory variables on the one hand and their group-specific associations with the outcome of interest on the other. By contrast, our analyses aim to isolate and quantify the respective contributions of each constituent part of experienced well-being, time use and activity-specific net affect, for differences in the overall experienced well-being of older men and women. We therefore need to apply alternative techniques to construct meaningful counterfactuals for obtaining our saddening and time composition effects, as outlined in Flores et al. (2015).
Descriptive Statistics
Table 1 presents key characteristics of our analytical sample, i.e., the country- and gender-specific averages of all explanatory variables used in our analyses. As highlighted in the table, men and women in our sample have substantially different characteristics and life circumstances. In all countries, women are less likely to be married than men, a likely reflection of gender differences in life expectancy and a correspondingly higher prevalence of widowhood among women than men. Women also tend to live in smaller households in all but one country (South Africa). In addition, women work less often than men, with differences ranging from 6 to 43 percentage points across the five countries. Except for Russia, where a majority of individuals have relatively high levels of education irrespective of gender, women generally report a substantially lower education level than men. In particular, a much larger share of women than men have not completed primary school education. This gap is as high as 40 percentage points in India and 23 percentage points in Ghana and China. Women also tend to live in poorer households than men. Furthermore, in Ghana, India and South Africa, women are significantly less likely than men to report having enough money to meet their personal needs. In all countries, women report a higher level of self-assessed pain and suffer from higher levels of disability, which highlights poorer health status among older women than men. Finally, women generally report feeling less safe and tend to be less often involved in community activities. Table 2 shows country- and gender-specific descriptive statistics of standardized experienced well-being, time use and activity-specific net affect.
Panel A presents weighted averages of standardized experienced well-being for men and women in each country. While the overall weighted average of standardized experienced well-being across both genders is zero by construction, experienced well-being is significantly lower for women than for men in all countries. Since experienced well-being is standardized at the country level using population-weighted means and standard deviations of the country-specific distribution, the absolute magnitudes of the differences cannot be compared between countries. Panel B presents the unadjusted country- and gender-specific average time shares spent on each activity group. We observe the usual patterns of traditional gender roles. In addition, even when adding up the time spent working and traveling, the overall amount of time spent on work and housework combined is larger among women than men. Panel C shows country- and gender-specific estimates of activity-specific net affects for all activity groups. For both genders, the three activities associated with the worst affective experiences in all countries are work, travel and housework. In addition, work is nearly always rated as the activity leading to the lowest levels of net affect. While housework, work and travel yield strictly negative (i.e., below-average) affective experiences for women in all countries, the situation is more nuanced for men. Indeed, men tend to have below-average affective experiences when working, but this is not always the case for housework or travel. Self-care and leisure are always associated with positive (i.e., above-average) net affect. If we consider work, travel and housework as part of a wider category of work-related activities, and self-care and leisure as part of more leisurely activities, the ranking of activities in all countries is consistent with a neoclassical utility function that assumes that individuals prefer leisure over work. Finally, the pairwise comparisons of net affect by gender show that, while there are no significant differences in terms of how much men and women "dislike" work, women have significantly worse affective experiences doing housework than men in all but one country. Compared to men, women also report significantly lower levels of net affect associated with leisure in three out of five countries.

Table 1. Life circumstances of men and women. The entries in each column are country-specific averages using population weights. The average under Women is bold whenever there is a significant difference between genders in a pairwise comparison (p < 0.10). Permanent income quartiles are country-specific and derived from an asset index. Trust is a score based on questions about perceived trust in neighbors, colleagues and strangers. Safety is a score based on information about perceived safety in the neighborhood and whether the respondent has been a victim of a violent crime. Community involvement measures the degree of participation in social activities such as attending clubs or public meetings, or socializing with co-workers.

Table 3 presents country-specific population-weighted estimates of the partial associations of gender and experienced well-being in both the age-adjusted and fully-adjusted models. When controlling for age only, women are at a significant disadvantage compared to men in all countries, with corresponding gender gaps in experienced well-being ranging from 0.05 standard deviations in China to 0.2 standard deviations in Russia. However, when incorporating the larger set of individual characteristics and life circumstances into our model, any remaining gender differences in experienced well-being become statistically insignificant, in spite of remaining negative in all countries but China.
Analysis
Looking at the coefficients of our control variables (Appendix 1), we observe that the most important factors associated with experienced well-being are disability and access to income (especially being part of the top quartile of the household income distribution). We therefore hypothesize that the experienced well-being gap observed between men and women is mostly due to women's individual characteristics and life circumstances, and in particular to the fact that their health status is often worse than that of men (higher WHO disability score) and that they generally live in poorer households. Table 4 presents population-weighted estimates of the partial associations between gender and time use, based on country-specific multivariate fractional logit models. By construction, all country-specific partial associations must sum up to zero as the activities considered are both exhaustive and mutually exclusive.
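As a hedged illustration of this type of model, the share of time spent on a single activity can be fitted with a fractional logit (a quasi-binomial GLM in R) and the average partial effect of gender recovered by comparing counterfactual predictions. The data frame d and the columns time_share, female, and age are hypothetical, and the authors' actual estimator is a jointly estimated multivariate fractional logit rather than this one-activity approximation:

# Fractional logit for one activity's time share (values in [0, 1]).
fit <- glm(time_share ~ female + age,
           family = quasibinomial(link = "logit"), data = d)

# Average partial effect of being a woman on the predicted time share.
d1 <- transform(d, female = 1)
d0 <- transform(d, female = 0)
ape <- mean(predict(fit, newdata = d1, type = "response") -
            predict(fit, newdata = d0, type = "response"))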
Panel A shows the results from models that control for age only, while Panel B refers to models that control for a wide range of individual characteristics and life circumstances (see Appendix 2 for the detailed coefficients of the covariates). The results show a pattern that is roughly comparable to the descriptive statistics presented above: women spend significantly less time than men on work and travel, and more time on housework. This finding is consistent across all countries and changes relatively little between the age-adjusted and fully-adjusted models. The gender differences are largest for time spent doing housework, which indicates that women tend to spend more time on work and housework combined and less time on leisure activities compared to men.
[Table 3: Partial association of gender with experienced well-being U for individuals aged 50+. Standard errors are computed using 250 bootstrap replications. Sample weights are applied. The entries in each column are country-specific average partial effects of gender on experienced well-being. Average partial effects are based on a linear regression (Eq. (6) for the age-adjusted difference and Eq. (7) for the fully-adjusted). The age-adjusted difference controls only for age, while the fully-adjusted difference controls for a large set of control variables included in Table 1. *p < 0.10, **p < 0.05, ***p < 0.01]
[Table 4 notes: Average partial effects are based on the fractional logit models (Panel A: Equations (8) and (9); Panel B: Equations (10) and (11)). Panel A controls only for age, while Panel B controls for a large set of control variables included in Table 1. *p < 0.10, **p < 0.05, ***p < 0.01]
Table 5 presents country-specific population-weighted estimates of the partial associations between gender and activity-specific net affects, controlling first for age only (Panel A) and then for our entire set of individual control variables (Panel B).
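The bootstrap standard errors mentioned in the table notes could be reproduced along these lines (again a sketch using the hypothetical objects from the previous snippet, resampling individuals 250 times):

# Nonparametric bootstrap (250 replications) of the average partial effect.
apes <- replicate(250, {
  b <- d[sample(nrow(d), replace = TRUE), ]
  f <- glm(time_share ~ female + age,
           family = quasibinomial(link = "logit"), data = b)
  mean(predict(f, newdata = transform(b, female = 1), type = "response") -
       predict(f, newdata = transform(b, female = 0), type = "response"))
})
se <- sd(apes)  # bootstrap standard error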
All but two of the estimated coefficients in Panel A are negative, although many are not statistically different from zero at conventional levels of significance. In all countries but Ghana, women report significantly worse net affects associated with housework than men, with corresponding differences ranging from about 0.06 standard deviations in China to about 0.32 in India. In addition, in three out of five countries, women also report significantly worse levels of net affect during leisure activities than men. In general, when controlling for age only, it appears that women report worse net affects for all activities, even if the difference is not always statistically significant. Controlling for additional individual characteristics and life circumstances (Panel B) reduces the statistical significance of any gender differences in activity-specific net affect even further. More importantly though, many of the estimated coefficients change their sign in the fully-adjusted models. In China, all the estimated partial associations between being a woman and the activity-specific net affects are positive in the fully-adjusted models, and these associations are also statistically significant in the cases of work, leisure and self-care. By contrast, no clear pattern for gender differences in activity-specific net affects emerges across the other countries once additional control variables are incorporated into the models. The fact that incorporating further controls for individual characteristics and life circumstances into our model substantially attenuates the association between gender and activity-specific net affect suggests that other factors like health status and economic position may be able to largely explain the worse activity-specific affective experiences of women compared to men (Appendix 3).
We now combine the results from the above analyses within the framework of a hypothetical thought experiment aimed at assessing the relative importance of gender differences in time use (time composition effect) and activity-specific net affects (saddening effect) for gender differences in experienced well-being. As in our earlier analyses, we estimate both age-adjusted and fully-adjusted versions of this decomposition.
[Table 6: Counterfactual partial association of gender with experienced well-being and its time composition and saddening effects for individuals aged 50+. Standard errors are computed using 250 bootstrap replications. Sample weights are applied. The entries in each column are country-specific differences in standardized experienced well-being between men and women. The time composition effect is computed as in Eq. (14) and the saddening effect is computed as in Eq. (15). Panel A controls only for age, while Panel B controls for a large set of variables included in Table 1. *p < 0.10, **p < 0.05, ***p < 0.01]
Table 6 first presents the overall gender differences in experienced well-being and deconstructs these into two components, the time composition and saddening effects, while Tables 7 and 8 provide the results of further disaggregated analyses at the level of individual activities. The time composition effect isolates gender differences in experienced well-being attributable to gender differences in time use by fixing activity-specific net affect at the country-specific averages (irrespective of gender) and computing hypothetical gender differences in experienced well-being if men and women differed only in terms of their activity patterns. The saddening effect, on the other hand, highlights gender differences in experienced well-being attributable to gender differences in activity-specific net affects by fixing time use at the overall country-specific averages (irrespective of gender) and computing hypothetical gender differences in experienced well-being if men and women differed only in terms of their activity-specific affective experiences.
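A minimal sketch of a plausible form for this decomposition, assuming experienced well-being is the duration-weighted sum of activity-specific net affects (h for time shares, n for net affects, bars for weighted means, W/M for women/men); the exact expressions in Eqs. (14) and (15) may differ in their weighting and covariate adjustment:

U_i = \sum_a h_{ia} \, n_{ia}, \qquad
\Delta_{\text{time}} = \sum_a \left(\bar{h}^{W}_{a} - \bar{h}^{M}_{a}\right) \bar{n}_{a}, \qquad
\Delta_{\text{sad}} = \sum_a \bar{h}_{a} \left(\bar{n}^{W}_{a} - \bar{n}^{M}_{a}\right)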
Panel A of Table 6 shows that, when controlling only for age, gender differences in experienced well-being are mainly driven by the saddening effects, that is, by the fact that women have worse affective experiences than men when performing most activities. Indeed, in all countries but China, if both genders had the same (country-specific average) time use patterns, but differed in their activity-specific net affects, women would have statistically significantly lower levels of experienced well-being than men. The corresponding time composition effects, on the other hand, seem relatively small. Panel B shows that when considering additional individual characteristics and life circumstances, gender differences in experienced well-being lose statistical significance. The generally negative, although small, time composition effects, on the other hand, become statistically significantly different from zero in three of our study countries. That is, if women and men had exactly the same activity-specific affective experiences (set at the overall country-specific average), the remaining gender differences in time use alone would generally result in lower levels of experienced well-being among women than men. Hence, holding other characteristics and life circumstances fixed, women tend to engage in more unpleasant activities overall than men. Meanwhile, the saddening effects (which were negative and statistically significant everywhere but in China in the age-adjusted models) become insignificant in four countries when we include the whole set of control variables, and even turn significantly positive in China.
Tables 7 and 8 present additional details for this hypothetical thought experiment by showing how each activity group contributes to the estimated time composition and saddening effects, respectively. Table 7 shows that in both the age- and fully-adjusted models, the (lower) amount of time spent working contributes to a relatively higher level of experienced well-being among women compared to men, while the (higher) amount of time spent doing housework contributes to a female disadvantage in terms of experienced well-being. In addition, the lower amount of time spent in "more pleasant" activities such as leisure and self-care further contributes to the lower level of experienced well-being among women relative to men, especially when life circumstances are taken into account. Table 8 shows the decomposition of the saddening effect by activity. In Panel A, we see that, when only age is controlled for, the negative saddening effect observed everywhere but in China is mainly driven by the fact that women have lower affective experiences than men during leisure or when performing housework. We see no consistent pattern across countries when controlling for the whole set of covariates (Panel B). Meanwhile, the positive saddening effect observed in China is driven by the fact that, compared to men, women have higher affective experiences during leisure and self-care activities as well as when working once individual characteristics and life circumstances are incorporated into the models. These findings suggest that specific characteristics and life circumstances of women, more than any intrinsic gender differences in activity-specific affective experiences, may be at the heart of the estimated saddening effects.
Conclusions
Our study highlights an age-adjusted gender gap in experienced well-being in favor of men, but also shows that these gender differences weaken considerably once further individual characteristics and life circumstances are incorporated into our models. These findings suggest that at least part of the experienced well-being gap between men and women might stem from broader disadvantages of women compared to men rather than from any intrinsic "gender effect". In particular, we find that the gender gap is largely driven by poorer average health (higher disability and self-assessed pain) and lower average economic status (permanent income quartiles) among older women when compared to older men.
We then deconstruct potential gender differences in experienced well-being into contributions of the two components of experienced well-being: time use and activity-specific net affect. Our results show that women spend more time performing housework than men, while men spend more time working and traveling. Moreover, gender differences observed for housework are generally larger than those observed for work, implying that women spend more time on work and housework combined than men. These partial associations between gender and time use are strongly statistically significant in both the age-adjusted and the fully-adjusted models. Consistent with traditional gender roles, this finding suggests that gender per se, rather than differences in individual characteristics or life circumstances between men and women, plays an important role in the large observed gender differences in time use.
Women also tend to have lower affective experiences than men across most activities when adjusting for age only. However, the inclusion of a larger set of covariates controlling for individual characteristics and life circumstances attenuates this association. This attenuation in the association between gender and activity-specific net affects supports the hypothesis that factors other than gender per se are likely to be responsible for the higher activity-specific net affects of men compared to women. In particular, we find that two factors are consistently associated with net affective experiences for all activities: disability (which is negatively associated with net affect) and belonging to the highest income quartile group (which is positively associated with net affect). Our descriptive statistics show that women suffer more disability than men and belong less often to the top income quartile. These two factors thus appear to be the main drivers of the gender gap in net affective experience in favor of men.
Finally, we perform a thought experiment to disentangle the respective roles of potential time composition and saddening effects for the observed gender differences in experienced well-being. These results show that the lower experienced well-being of women compared to men of the same age is linked to their lower activity-specific net affect for all activities, and in particular for housework and leisure, irrespective of the time spent performing each activity. Perhaps somewhat surprisingly, time composition effects contribute only marginally to the overall age-adjusted gender gap in experienced well-being, due to a compensation between the two activities considered most unpleasant: work (performed mostly by men) and housework (performed mostly by women). However, ceteris paribus, fully-adjusted gender differences in time use contribute to lower levels of experienced well-being of women compared to men, as the time spent in unpleasant activities by women (both work and housework) exceeds that of men with similar characteristics (in terms of disability and income in particular). Moreover, at equal levels of disability and income (among other factors), women do not appear to systematically dislike certain activities more than men (saddening effects). Women's lower activity-specific net affect for all activities may thus be linked, as described above, to the conditions under which these activities are performed, and in particular to the higher levels of disability in women compared to men, rather than to any intrinsic gender differences in net affects.
Empirical Contributions
Our study provides new insights into gender differences in experienced well-being among older adults from different geographic and cultural settings in the developing world. Experienced well-being is an important but still relatively rarely explored dimension of emotional well-being as well as of subjective well-being more generally (National Research Council 2013). While most of the literature on subjective well-being focuses on evaluative well-being, the scarcer literature looking at emotional well-being typically considers positive and negative affective experiences separately without assessing the overall welfare implications of these different emotional experiences in terms of net affect. Moreover, given the importance of experienced well-being for the evaluation of welfare (Kahneman and Krueger 2006; Krueger and Schkade 2008), there is a surprising paucity of applied empirical work employing this measure of well-being, possibly due to the relatively low availability of DRM data, which are expensive and time-consuming to collect. In fact, to the best of our knowledge, our paper is the first study to fully explore the relationship between gender and experienced well-being in developing countries and deconstruct this relationship into its two component parts based on detailed data on both time use and activity-specific affective experiences. Given this scarcity of evidence, there are few studies using the same measure of subjective well-being against which we can compare our results. One notable exception is the study by Miret et al. (2012), who analyze the impact of sociodemographic characteristics on net affect using the original Day Reconstruction Method (DRM) as well as SAGE's abbreviated version of the DRM on a sample of 1560 adults from Jodhpur (India), but without any particular focus on gender differences. These authors find that being male, living in an urban area and having a high income are factors associated with a higher net affect, which is consistent with our own results.
We can, however, put our findings into context by comparing our results to evidence from the 2015 World Happiness Report (Fortin et al. 2015), which is one of the few studies researching the gender-specific evolution of several positive and negative emotions through the life course. Although the authors do not combine emotions into an overall net affect score, their results show generally lower levels of positive as well as higher levels of negative emotions in older women as compared to men of the same age. These data thus suggest that assessing gender differences in net affect in their context would likely yield results similar to our age-adjusted regressions, i.e. a disadvantage of older women in terms of emotional well-being. However, the 2015 World Happiness Report does not provide any analyses comparable to our fully-adjusted models, and it is thus not possible for us to assess whether the advantage for men in their assessment would disappear when individual characteristics and life circumstances are controlled for.
Our results for time use are broadly in line with the literature in the field. The evidence that women tend to perform more housework while men tend to spend more time working is remarkably similar across geographical settings and levels of wealth, and highlights the remaining importance of traditional gender roles worldwide, at least among older adults. Indeed, such gender differences in time use were found for example in Guinea by Wodon and Blackden (2006), in Ethiopia by Arbache et al. (2010), and in France, Italy, Sweden, and the USA by Anxo et al. (2011). Yet, to our knowledge, we are the first to assess the relationship between gender and activity-specific net affect, and to disentangle the respective roles of potential time composition and saddening effects for the observed gender differences in experienced well-being.
Finally, our study contributes to the methodological debate regarding the use of control variables in well-being research. On one side of the debate, Glenn (2009) claims that scholars should not control for other factors when studying the association between age and well-being. He argues that excluding control variables makes it possible to identify the "total effects" of age on well-being, i.e., the sum of direct and any indirect effects through other variables. These total effects are, he believes, of greater importance than any potential direct effect of age holding fixed individual characteristics and life circumstances that may change with age. On the other side, some researchers argue that focusing solely on bivariate relationships is not sufficient for understanding the complex relationship between age and well-being, which may be mediated or confounded by other age-related differences in individual characteristics or life circumstances that may affect well-being (Blanchflower and Oswald 2008; Blanchflower and Oswald 2009). In this context, we perform both age-adjusted comparisons of subjective well-being between men and women as well as fully-adjusted regressions of subjective well-being on gender that also account for gender differences in health status, socio-demographic characteristics and community participation in the same dataset. Our age-adjusted models, on the one hand, can provide evidence regarding potential advantages or disadvantages of women in terms of their experienced well-being compared to their male counterparts. These analyses are especially important for a descriptive assessment of overall gender inequalities in experienced well-being, as well as for the targeting of potential policies and interventions aimed at mitigating them. The fully-adjusted analyses, on the other hand, account for potential gender differences in other individual characteristics and life circumstances. These may at least in part mediate the age-adjusted relationship between gender and experienced well-being; the fully-adjusted models thus isolate the partial association of gender with experienced well-being ceteris paribus. In addition, these analyses highlight potential policy levers related to gender differences in individual characteristics and life circumstances that may be helpful in alleviating gender differences in experienced well-being. Our analyses confirm our working hypothesis that the results obtained through the two approaches provide different but equally important views on the association between gender and experienced well-being and should therefore be seen as complementary.
Practical Implications
We are facing a situation without precedent: by 2050, it is estimated that there will be more than twice as many persons over the age of 65 as under the age of 5 (United Nations 2019a). This rapid demographic transition is raising important issues not just in industrialized countries but worldwide, as an increasingly large proportion of the older population is living in low- and middle-income countries. In order to meet the post-2015 sustainable development agenda goal of ensuring healthy lives and promoting well-being for everyone at all ages (United Nations 2019b), as well as to enable healthy aging for everybody (World Health Organization 2015), social and health systems worldwide must find effective ways to respond to the needs of older adults. In the near future, increasing the healthspan (i.e., the time that an individual is able to live in good health) as well as the quality of life of older adults, and not merely preventing deaths, will be a key objective of health and social interventions. The scarcity of knowledge regarding the drivers of older persons' experienced well-being, especially in low- and middle-income countries, must therefore urgently be addressed in order to construct effective responses to global population aging using evidence-based policies. Due to women's higher longevity, a majority of older adults are female, especially at very advanced ages. Moreover, while women generally constitute a vulnerable group, they may be especially at risk in older age. For example, older women tend to suffer from more chronic health conditions than men of the same age, be poorer, and have lower access to health care services (World Health Organization 2015). However, compared to men, women may benefit from stronger family support from adult children, and may suffer in lower numbers from the negative impacts of role disruption at retirement (Knodel and Ofstedal 2003). It is thus crucial to understand whether and how these objective circumstances translate into subjective well-being differences in old age in order to design policies to address them.
Our paper yields information that is potentially useful to policy makers. First, we document that the gender gap in emotional well-being is pervasive. In particular, we show that much of the gender gap in experienced well-being that disadvantages women relative to men can be linked to gender differences in activity-specific affective experiences. This finding suggests that merely moving toward a more equitable division of time within households may not be sufficient to close the existing gender gaps in experienced well-being. Moreover, we show that gender differences in individual characteristics and life circumstances, notably disability and income, are key factors underlying the experienced well-being gap in favor of men. These findings suggest that policies such as female-targeted campaigns for the early prevention of disability and increased entitlement programs for older women may prove useful in improving women's experienced well-being at older ages. In addition, the empowerment of older women can be encouraged by life-long interventions, such as the promotion of equitable workforce participation, the implementation of compulsory social contributions, and the distribution of non-contributory social pensions at all ages. Finally, health promotion and disease prevention interventions targeted not only towards older populations but also towards younger individuals have the potential of keeping older adults in good health for much longer in the future. These policies should complement more general efforts to improve well-being in older age by improving health and social support systems as well as addressing the social determinants of health.
Limitations and Future Research Directions
To the best of our knowledge, we are the first to assess and deconstruct gender differences in experienced well-being using DRM data from a large-scale multi-country survey effort in the developing world. Performing our analyses on harmonized data from different countries allows us to document robust associations across different cultural contexts and geographic regions. In addition, our study shows the added value of using DRM-based data to explore the respective roles of time use and activity-specific affective experiences in explaining gender differences in experienced well-being. Nevertheless, our estimated partial associations cannot be interpreted as causal due to potential issues of confounding, reverse causation and selection into activities. Estimating average activity-specific net affects using only data from individuals who actually perform these activities may be particularly problematic in this regard. More research is thus needed to address these limitations, in order to allow inference of a causal relationship, perhaps in the context of a structural model of time use. Finally, while our study focuses exclusively on older adults, it would be worthwhile to evaluate how the relationship between gender and experienced well-being evolves over the life course. In spite of these limitations, our study makes a valuable contribution by documenting and deconstructing gender differences in experienced well-being among older adults from different developing country settings and highlighting key individual characteristics and life circumstances beyond gender itself that may help explain gender differences in experienced well-being.
Appendix 2
See Table 10. Table 9 Partial association of gender with experienced well-being U for individuals aged 50+ Standard errors are computed using 250 bootstrap replications. Sample weights are applied. The entries in each column are country-specific average partial effects of gender on experienced well-being. Average partial effects are based on a linear regression (Eq. (7) | 2022-12-25T14:05:46.260Z | 2020-07-24T00:00:00.000 | {
"year": 2020,
"sha1": "6544513c6e1e337245e95f327357b1d306391574",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11205-020-02435-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "6544513c6e1e337245e95f327357b1d306391574",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
258345893 | pes2o/s2orc | v3-fos-license | Long-term changes in unionid community in Kentucky Lake: Implications for understanding the effects of impoundment on river systems
Abstract Freshwater mussels are both critically important in their ecosystems and rapidly declining around the world. Damming is a key reason for this decline in many locations because it affects the flow and turbidity of river systems, leading to numerous detrimental effects on benthic communities. Although the ecological effects of impoundment have been well studied on timescales ranging from years to decades, the ecological effects of impoundment on longer (50–100 years) timescales are less well understood, with a key question being: how long after the building of dams and impoundments do we expect community structure to continue changing? In this study, we explore historical changes in the freshwater mussel assemblages in Kentucky Lake (dammed in 1944) using decades-long collections housed at Murray State University in combination with other historical records. After digitizing these collections and applying a robust rarefaction protocol to account for uneven sampling, we quantify changes in unionid assemblage structure alongside coeval water quality data collected through the Kentucky Lake Long-Term Monitoring Program. We find that subsampled richness exhibited declines after dam construction with losses among opportunistic taxa, channelization-tolerant taxa, and impoundment-intolerant taxa. We also find increases in the proportions of equilibrium taxa throughout the dataset. Overall, the assemblage composition reached an equilibrium by the year 2000 (50 years after impoundment). In concert, river water quality data show a decline in turbidity and increase in light penetration in the period 1988–2020. Although the geohistorical records treated in this study are patchy in time, we argue that they are nonetheless valuable and illustrate that freshwater ecosystems may serve as potential sites of restoration decades after anthropogenic disturbance. In turn, this emphasizes the importance of geohistorical collections to studying long-term changes in community structure and developing strategies for conservation and environmental remediation.
Introduction
Freshwater mussels belonging to the family Unionidae are both diverse (Lydeard and Mayden 1995) and critically endangered (Vaughn and Taylor 1999; Haag and Williams 2014). Like most bivalved mollusks, unionids provide numerous ecosystem services to their communities, including water filtration, nutrient cycling, and serving as a food source for larger predators (Kryger and Riisgård 1988; Wotton et al. 2003; McMahon et al. 1991; Barbour et al. 1999; Vaughn and Hakenkamp 2001; Prins et al. 1996; Welker and Walz 1998; Hakenkamp and Palmer 1999; Haag 2012). Additionally, unionids are sensitive to a wide range of abiotic changes, most notably changes in flow, turbidity, depth, and siltation (McMahon et al. 1991). Consequently, unionids are subject to numerous (and often anthropogenic) sources of ecological stress, including eutrophication, impoundments, sediment input alteration, temperature changes, and diversions (Jackson et al. 2001; Giller 2005; Søndergaard and Jeppesen 2007; Haag 2012). In particular, anthropogenic impoundments, of which there are over 80,000 in the United States alone (Ho et al. 2017), have led to habitat fragmentation, silt accumulation, declines in the species richness of unionid mussels, and changes in water quality (Watters 1999; Søndergaard and Jeppesen 2007; Pilger and Gido 2012). Damming has also hindered the ability of fish to move, reproduce, and thrive within rivers, which, given that most unionids have a larval stage that relies on parasitism of specific fish species (Kat 1984; McMahon et al. 1991; Keller and Ruessler 1997; World Commission on Dams 2000; Lydeard et al. 2004; Haag 2012), can be detrimental to mussel reproduction (Søndergaard and Jeppesen 2007).
Consequently, unionids are experiencing some of the highest current extinction rates of any group of freshwater organisms (Vaughn and Taylor 1999; Haag and Williams 2014). Both as a consequence of their sensitivity to environmental changes and the crucial ecological roles they play in freshwater communities, unionid mussels are an effective biological group for monitoring the health of freshwater ecosystems, many of which are currently undergoing dramatic changes across the United States (Lydeard and Mayden 1995; Quinlan et al. 2015). Despite this, more work needs to be done to completely understand which factors are most detrimental to various mussel species and what steps can be taken to preserve these communities (Lydeard and Mayden 1995; Haag and Williams 2014; Quinlan et al. 2015; Lopes-Lima et al. 2018). Moreover, given the near ubiquity of the anthropogenic impacts listed above, there is a premium on identifying sites that are suitable for restoration efforts.
In this context, a key question with wide relevance to the monitoring and restoration of freshwater unionid ecosystems is: how long after the building of dams and impoundments do we expect assemblage structure to reach an equilibrium? Although impoundments have been shown to have dramatic effects on benthic community structure on the scale of months to years (Armitage 1978; Voshell and Simmons 1984; World Commission on Dams 2000), there is a relative paucity of high-resolution data surrounding the responses of freshwater communities to environmental change on the scale of decades to centuries (Strayer and Dudgeon 2010; White 2014; White et al. 2020). Previous work has assumed that unionid communities equilibrate in the first few years after a river is impounded, but the lack of decades-long mussel assemblage data has made it difficult to test that assumption (Rickett and Watson 1994; White 2014). Understanding ecological responses over longer timescales is critical to informing conservation efforts, both in helping to identify the signatures of perturbed communities and in understanding the long-term ecological impacts of damming (Peacock and Mistak 2008; White et al. 2020).
In this study, we combine historical collections of unionid mussels (spanning the years 1977-2001) collected from Kentucky Lake with historical assemblage data compiled by Sickel et al. (2007) (spanning the years 1930-2001), which together record the long-term response of unionid communities to the construction of the dam in 1944. We analyze community data alongside long-term physiochemical water quality records in order to understand the long-term environmental effects of impoundment and how they may have impacted (or are still impacting) benthic ecosystems. These datasets allow us to address specific questions surrounding the long-term effects of dam construction in Kentucky Lake, principally: 1) on what timescales do unionid mussel communities continue to change post-impoundment? And 2) what are the notable temporal trends in richness, assemblage structure, and water quality in Kentucky Lake? Quantifying changes in richness and assemblage structure will reveal whether unionid communities are still changing 40-60 years after dam construction, or, alternatively, whether communities have reached a new ecological equilibrium. Alongside this, our long-term dataset recording water quality will aid in determining whether any observed community changes are the result of impoundment or other anthropogenic factors, and potentially whether Kentucky Lake is suitable for further restoration efforts.
Locality and historical context
Located in southwest Kentucky and northwest Tennessee (Figure 1), Kentucky Lake is an ideal locality to study unionid bivalves because of the Southeast's high endemism and diversity of the group (Benz and Collins 1997). With over 300 freshwater bivalve mollusk species, the Southeastern United States is the richest locality for Unionoida in the world (Lydeard and Mayden 1995; Master et al. 1998; Haag and Williams 2014). Kentucky Lake is a mainstem reservoir that was created when the Tennessee River was dammed in 1944 (Tennessee Valley Authority (TVA) 1985; Yurista et al. 2004; Foresta 2013; White 2014). Kentucky Lake has a surface area of 651 km², a mean discharge of 1,812 m³ sec⁻¹, and a mean retention time of 16.7 days (White 2014). Water velocities, discharge, and retention time are highly variable, changing from lake-like to river-like depending on the location within the reservoir and anthropogenic manipulations in response to flood or drought conditions (Yurista et al. 2004; White 2014). The substrate of this river was gravel before damming, and this area of Kentucky Lake now has fine silt coverage with areas of oxygen depletion during low flow (Sickel et al. 2007). Most macroinvertebrates are restricted to the shallow, marginal areas of the lake, which represent the now-submerged former floodplain. The former floodplain has a mean water depth of 6 m in the summer, with patches ranging between erosional and depositional settings (White 2014). Along with water quality data monitored since 1988, freshwater mussels have been collected at the Hancock Biological Station (HBS) and stored in the Murray State University ('MSU') collections (located in nearby Murray, Kentucky) since 1950. Historical accounts of freshwater mussel species and counts in the Kentucky Lake area have been recorded since 1930 (Sickel et al. 2007).
Historical mussel collections
Local-scale collections, such as those hosted at MSU, provide an excellent opportunity to study historical changes in assemblage composition because of the amount, detail, and quality of information typically available within them that is frequently not duplicated in larger, regional collections (Snow 2005). The MSU collections were largely collected and organized by Dr. James Sickel from several localities around the United States, although collections mainly occurred in Kentucky and Tennessee. Almost all specimens were collected through brailing, meaning that mussels were overwhelmingly likely to have been collected as live samples, rather than representing dead (and thus potentially older) material (Miller and Nelson 1983). The database resulting from the digitization of the collections comprises 3471 specimens with information inclusive of specimen collection date, location, collector, and species identification. Species identifications have been previously verified by the state malacologist from the Kentucky Department of Fish and Wildlife and are updated here following the Integrated Taxonomic Information System (ITIS) (http://www.itis.gov).
For this study, we restricted records to only those collected from Tennessee River Mile 22.4 to 66, which is the stretch of Kentucky Lake immediately upstream of the Kentucky Dam; this geographically restricted dataset comprises 270 specimens. We integrate this new dataset with that of Sickel et al. (2007), which provides a pre-dam ecological baseline from 1930 and additional Kentucky Lake samples. The resulting dataset comprises 5307 individual records spanning the years 1930 and 1977 to 2001 and includes 36 unique species (Appendix A).
Geohistorical (similar to paleontological) community data frequently suffer from biases resulting from sampling effort: time intervals with more intense sampling (i.e., more specimens) will tend to have artificially inflated species richness compared to those with less intense sampling (e.g. Foote 2001). As the sampling intensity in any year increases (producing greater numbers of specimens), the observed species richness of that time bin will tend to increase (e.g. Alroy 2010). As an illustration of this, rarefaction curves constructed for specimens from individual years in our dataset clearly show a high proportion of under-sampled years (Appendix B). To combat this issue, a process of rarefaction (also known as subsampling) is commonly applied, where consistently sized subsamples are iteratively drawn from species pools in order to generate a vector of sample-standardized community data (e.g. Darroch and Wagner 2015). In our dataset, sampling intensity varied greatly among the collection years, with a maximum of 1645 samples in 2001 and several years during which no specimens were collected at all. First, we applied a 'cutoff', whereby we only analyze years during which a minimum number of specimens was collected. Many of the years with intermediate to better sampling possess between 10 and 30 specimens; we therefore randomly subsample five specimens from years during which a minimum of 10 specimens was collected. There are no robust statistical guidelines for choosing the size of the subsample relative to the size of the cutoff value, although a range of between 50-75% is typical (e.g. Darroch and Wagner 2015). As an additional sensitivity test, therefore, we also experimented with different cutoff values and subsampling numbers, allowing us to examine whether rarefied species richness trends are sensitive to the cutoff threshold or subsampling intensity (Appendix C). We iterate (×100,000) this subsampling routine, counting the number of unique species in each iteration. This analysis produced a vector of sample-standardized richness estimates for each year sampled, which reflects sample-standardized temporal trends in overall richness (hereafter referred to as 'rarefied richness') within Kentucky Lake over the studied interval (Figure 2).
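A minimal R sketch of this routine; the data frame mussels with columns year and species is an assumed layout for the digitized collection, not the actual database:

# Iteratively subsample 5 specimens (x100,000) from each year with >= 10
# specimens, recording the mean number of unique species per year.
set.seed(1)
rarefied <- do.call(rbind, lapply(split(mussels, mussels$year), function(yr) {
  if (nrow(yr) < 10) return(NULL)  # apply the 10-specimen cutoff
  rich <- replicate(1e5, length(unique(sample(yr$species, 5))))
  data.frame(year = yr$year[1], mean_richness = mean(rich))
}))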
To better understand changes in the ecological composition of unionid assemblages, all species in the dataset were classified using information provided in Haag (2012) in terms of their channelization tolerance, impoundment tolerance, and life histories (Figure 3). Only years during which at least 15 specimens were collected were analyzed and plotted for this analysis, to reduce skewing by low-collection years. Life history categories are defined by Haag (2012) and include equilibrium strategists, opportunist strategists, and periodic strategists. Equilibrium strategist species recruit constantly at low rates and have long life spans with late maturity. Opportunist strategist species recruit at high and variable rates with early maturity and short life spans. Periodic strategist species also have high and variable recruitment rates with intermediate life spans (Haag 2012). Proportional abundance, a commonly used metric in paleontological studies, is a robust metric of community structure regardless of sample size and preserves with high fidelity (Kidwell 2001; Kidwell 2002a; Kidwell 2002b; Kidwell 2008; Olszewski and Kidwell 2007; Kidwell 2013). Evaluating relative mussel abundance in this study allows us to understand which types of species dominate the freshwater mussel communities of Kentucky Lake post-impoundment.
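Computing these proportional abundances is straightforward; a hedged sketch using the same assumed mussels data frame, with an additional (hypothetical) life_history column:

# Keep only years with >= 15 specimens, then compute the proportion of
# each life-history category within each year.
keep <- names(which(table(mussels$year) >= 15))
sub  <- mussels[mussels$year %in% keep, ]
props <- prop.table(table(sub$year, sub$life_history), margin = 1)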
Water quality data
The Kentucky Lake Long-Term Monitoring Program, maintained by the MSU Watershed Studies Institute and HBS, has been collecting physiochemical water quality data every 16 days from 12 sites in Kentucky Lake since July 1988 (Figure 1; Kentucky Lake Long-Term Monitoring Program Database, Hancock Biological Station, Murray State University, Murray, KY 42071). All water parameters were measured at the lake's bottom, which reflects the water conditions most affecting the mussels. The main water quality variable of interest, turbidity, quantifies the amount of light scattering (Davies-Colley and Nagels 2008) and is inversely related to water clarity, which impacts light penetration (Davies-Colley and Smith 2001). Moreover, turbidity here is taken to reflect how much fine sediment is suspended in the water column (Davies-Colley and Smith 2001). Additional water quality factors of interest are chlorophyll a (a proxy for primary productivity), total dissolved phosphorus and dissolved organic nitrogen (measures of eutrophication), pH, dissolved oxygen, chlorine, temperature, 1-m light, and 5-m light. Turbidity measurements are shown in Appendix D, along with the other water quality measurements.
Data analysis
Mean rarefied species richness values for each year were plotted and a linear regression applied to illustrate richness trends through time (Figure 2). Rarefied species richness trends were analyzed using Spearman's rank correlation coefficients ('Spearman's rho'; Figure 2; Appendix C). Spearman's rho is a non-parametric rank-based test that is commonly used with historical and paleontological datasets; it assesses how well the relationship between two variables can be described using a monotonic function and does not assume normality of the data. Environmental water quality and chemistry data from the Kentucky Lake Long-Term Monitoring Program were plotted against time and temporal trends were illustrated using locally estimated scatterplot smoothing ('LOESS') lines (Appendix D). Analysis of community structure was performed by first filtering out all years during which fewer than 15 specimens were collected. Proportions of species belonging to the different categories (Figure 3; impoundment tolerance, channelization tolerance, and life history) were calculated for each of the years and plotted. A Generalized Additive Model (GAM) was fitted for each group identity using the 'mgcv' package in R (Wood 2023). GAMs are nonparametric extensions of linear regression modeling that allow for nonlinear predictors, which is ideal for this dataset, which contains noticeable trends and fluctuations that would not be well represented by linear regression modeling. All analyses were performed in R version 4.2.2 (R Core Team 2022) in RStudio version 2023.03.0+386.
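These analysis steps map onto standard R calls; the following sketch assumes the rarefied and props objects from the snippets above, with props reshaped into a data frame prop_df holding columns year and proportion (all names illustrative, and the basis dimension k = 5 is an assumption reflecting the small number of sampled years, not the authors' setting):

library(mgcv)

# Spearman's rank correlation between year and rarefied richness.
cor.test(rarefied$year, rarefied$mean_richness, method = "spearman")

# LOESS smoother for a water quality series (hypothetical 'water' data frame):
# lo <- loess(turbidity ~ as.numeric(date), data = water)

# GAM of a category's proportion against year, as fitted with 'mgcv'.
fit <- gam(proportion ~ s(year, k = 5), data = prop_df)
summary(fit)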
Species richness and community structure
Using the 1930 data from Sickel et al. (2007), we can compare the species found in Kentucky Lake before and after dam construction in 1944 (Appendix A). 21 distinct species were collected in 1930 before the dam was built, 9 of which were not present in the collections from Kentucky Lake after dam construction and appear to have been extirpated from the lake. The remaining 12 species that were recorded in 1930 were present throughout the dataset, with only one species (Cyclonaias tuberculata) being extirpated before the end of the dataset in 2001. 14 new species were recorded in 1977-1981, and only one of those species, Tritogonia verrucosa, was collected in just a single year. Five species were last recorded between 1984 and 1987, and one species was last recorded in 1993. The remaining species in the dataset were found in Kentucky Lake until the end of the dataset in 2001.
Linear regression reveals that rarefied richness decreased between 1930 and 2001 (Figure 2; corr = −0.355, p < 2.2e-16). This trend was robust to changes in the intensity of subsampling in our rarefaction analyses, maintaining a significant decrease in rarefied richness towards the modern when anywhere between 2 and 10 specimens are iteratively subsampled (Appendix C). The trend was also robust to changing the threshold of specimens required to be collected during a year in order for that year of data to be included in the rarefaction protocol (and hence inclusion in the final dataset; see Appendix C). Spearman's rho statistics illustrate the consistency of the decreasing richness trends (consistently negative correlation values, rho range = −0.08 to −0.31), with all p-values less than 2.2e-16.
In terms of community composition, the generalized additive model results indicated that the proportion of channelization tolerant species significantly decreased over the study interval (Figure 3(a); p = 0.003). In Figure 3(b), it is apparent that impoundment intolerant species were extirpated from the dataset after 1930. One impoundment intolerant specimen was collected in 1987, but because only one specimen was collected that year (below the 15-specimen minimum used in this analysis), that year of collections was excluded and is not represented in Figure 3(b). The proportion of highly impoundment tolerant species increased over the study interval (p = 0.090) and the proportion of mildly impoundment tolerant species decreased over the study interval, although this is a weaker relationship (p = 0.136). Figure 3(c) shows that the proportion of opportunistic species decreased over the study interval (p = 0.037), the proportion of periodic species remained constant (p = 0.098), and the proportion of equilibrium species increased towards the present (p = 0.057).
Water quality
Indicators of water quality revealed declines in turbidity, dissolved total phosphorus, and dissolved organic nitrogen between 1988 and 2020 (Appendix D). Primary productivity, pH, and dissolved oxygen were stable within the same period, while increases are seen in chlorine, temperature, 5-m light, and 1-m light (Appendix D). Seasonal variations corresponded with what previous studies have found in Kentucky Lake (e.g. Yurista et al. 2004).
Long-term temporal trends in rarefied richness and community structure
Quantitative analysis of historical unionid collections reveals changes in species richness and assemblage structure in Kentucky Lake between the pre-dam sampling completed in 1930 and the period 1977-2001, potentially providing an answer to our first question ('on what timescales do unionid mussel communities change post-impoundment?'). The data show changes in richness and community structure from 1930 (i.e., pre-dam) to 2001, involving a marked drop in rarefied richness and increases in the proportions of channelization intolerant and impoundment tolerant species. Although we do not have water quality data from this same period, the observed changes make intuitive sense in the context of impoundment: reduced current velocities changed habitats from lotic to lentic, resulting in substantial deposition of silt. This also matches observations from other lakes across the southeastern US that have been affected by dam construction (see e.g. Bates 1962; Blalock and Sickel 1996; Sickel et al. 2007). These records paint a detailed picture of changing community structure in Kentucky Lake in response to impoundment and indicate that many species were unable to adapt to the more lentic, lacustrine conditions of the reservoir, or that their fish hosts have become absent (from the lake as a whole or from the depths at which mussels now live). These data thus indicate that mollusk communities reorganized relatively rapidly (<40 years) in response to impoundment.
The question as to whether communities are still changing (and thus whether the effects of dam construction are ongoing) is harder to answer. When rarefied species richness data are plotted at annual resolution (Figure 2), there is a significant downward trend, which suggests that the dam was still having an impact on communities 40-60 years after construction. When assessing community structure since 1980, it seems that assemblages experienced large changes from before damming (1930) to 1980 but have largely stabilized since then in terms of impoundment tolerance (Figure 3). Our dataset combining records from the MSU collections and culled from the literature thus indicates that communities changed extensively after construction of Kentucky Dam, but that the rate of change likely slowed in the 20 years spanning ~1980-2000.
More recent environmental changes
Some additional context can be gleaned from examining records of water quality collected from the Hancock Biological Station. While our water quality data and mussel data possess limited overlap in time (and thus prevent a complete comparison between the two), the water data show only minor shifts, principally a decrease in turbidity and an increase in 5-m light penetration depth in the period 1990-2020, providing an intuitive correlation with stable mussel community structure. The observed changes in water quality are, however, unusual in the context of an impounded river system, where lower water velocities are expected to lead to an increase in fine sediment deposition (siltation).
We hypothesize that the observed decrease in turbidity (and associated increase in light penetration) is the result of a reduction in suspended sediment. Although a decrease in the concentration of suspended plant matter could also be a plausible driver of increasing light penetration (Gallegos and Jordan 2002), our data on chlorophyll concentration (a proxy for phytoplankton density) are relatively stable through the studied interval (Appendix D), indicating that reduced sediment load is the most likely culprit. The reasons for declining levels of suspended sediment are, however, unclear. One plausible explanation is that Kentucky Lake is a 'mainstem' reservoir, meaning that human manipulation and rain events can change the flow regime from lotic to lentic (Yurista et al. 2004; White 2014). Because the lake has maintained a strong flow relative to when it was a wild river (Yurista et al. 2004), it is possible that the majority of the silt in the river has remained entrained in the flow and has not settled out as much as expected with the addition of the impoundment. In addition, there has been a relatively recent effort to reduce soil erosion in the Kentucky Lake area (United States Department of Agriculture (USDA) and Forest Service 2004), which may contribute to declining turbidity in the river. While it is unclear to what extent these efforts have been implemented, they have included promoting the use of natural vegetation buffers along waterways and installing fencing to prevent livestock from entering waterways, which lowers the potential for cattle to trample the mussels and introduce waste into the water (Windsor 2000). The eastern side of Kentucky Lake is bordered by the Land Between the Lakes National Recreation Area, which has been uninhabited since its creation in 1963 and is ~95% forested (Yurista et al. 2004). Thus, strategies designed to prevent soil erosion could be especially important for the western edge of the lake, where land use is predominantly agricultural with several small towns and less than 50% forest cover (Yurista et al. 2004). The rural counties on the western side of Kentucky Lake have small populations (Calloway County, KY: 37,103 people; Marshall County, KY: 31,163 people; and Henry County, TN: 32,056 people in 2020) and slow population growth compared to more urban areas (none of these counties has doubled in population since 1940, whereas Nashville's Davidson County has nearly tripled in population during the same interval) (https://www.census.gov/programs-surveys/decennial-census.html). This limited population increase has prevented the mass urbanization of the landscape seen near other impoundments and may have allowed soil preservation measures to have a substantial impact on sediment input to the river (Tennessee Valley Authority (TVA) 1985; United States Department of Agriculture (USDA) and Forest Service 2004).
Another possible driver of the decline in suspended sediment is damming upstream of this area; Kentucky Dam is the last of nine dams on the Tennessee River (White 2014), which could be responsible for reducing the suspended load of fine sediment entering Kentucky Lake. However, a problem with this hypothesis is that most of the dams upstream were built between 1930 and 1950, so it is unclear why turbidity has been steadily declining into the present. Future work will focus on sampling the modern mussel community and assessing to what extent the ecosystem has continued to change in the most recent two decades.
In North America, the biggest threats to freshwater mussel biodiversity (in addition to river impoundment) include land-use changes (often with associated eutrophication-induced impacts), pollution, exploitation, and the introduction of invasive species (see e.g. Vaughn and Taylor 2000). In terms of land use, the only notable change in this area since impoundment in 1944 has been the establishment of the Land Between the Lakes National Recreation Area in the 1960s (Nickell 2007). Since then, the eastern shore of the lake has remained forested, while the western shore has consistently comprised a mixture of small towns and farmland (Yurista et al. 2004). Moreover, there is little evidence for eutrophication-induced impacts such as harmful algal blooms (given that dissolved total phosphorus and dissolved organic nitrogen have both decreased through time), or hypoxia/anoxia (given that levels of dissolved oxygen have stayed broadly consistent) (Appendix D). There is similarly little evidence for pollution, with no evidence of toxic waste or unusual pollutants found in previous studies (e.g. White 2014) and little reason to believe that unionid populations in this area are being over-harvested. Lastly, in terms of invasive species, zebra mussels (which have posed significant threats to freshwater ecosystems elsewhere in the United States; see e.g. Nalepa and Schloesser 1992; Strayer 2009) have not yet become established or prevalent within Kentucky Lake (White 2014). While zebra mussels were introduced to Kentucky Lake in the early 1990s, the lack of established colonies and low mussel densities within the lake (White 2014; Meystedt 2023) exclude these invasives as likely drivers of the reduced turbidity trends discussed above.
Implications for remediation efforts
Our combined community and environmental data allow us to make some rudimentary predictions about the current state of mussel communities in Kentucky Lake, as well as the potential for ongoing monitoring and remediation efforts. High turbidity, or the prevalence of fine sediment entrained in the water flow, harms freshwater mussels by increasing recruitment failure, inhibiting their ability to effectively filter out nutrients from the water column, and disrupting reproduction (Box and Mossa 1999; Gascho Landis et al. 2013; Gascho Landis and Stoeckel 2016; Goldsmith et al. 2020). The surprising decrease in turbidity and increase in light penetration over the last 30 years has been coupled with an increase in the proportion of equilibrium species between the 1980s and 2000. If the lake ecosystem has reached equilibrium, the combination of improving water conditions and the absence of strong indications of eutrophication, pollution, or established invasive species may identify Kentucky Lake as an area suitable for the reintroduction of historically extirpated species and other remediation efforts.
One major limitation in the restoration of freshwater mussels is the availability of fish hosts. Most larval freshwater mussels parasitize fish, and consequently mussel reproduction is highly dependent on healthy populations of those fish species (Kat 1984; McMahon et al. 1991; Keller and Ruessler 1997; Lydeard et al. 2004). Illustrating this point, both Watters (1992) and Vaughn and Taylor (2000) found that the health of fish populations in freshwater systems acts as an effective indicator of the richness of mussels in the same system. Fish movement and reproduction are often greatly hindered by the damming of rivers (World Commission on Dams 2000), so the negative effects of impoundment on fish populations have a downstream negative effect on mussel populations. A detailed understanding of the preferred fish hosts and their ecology will therefore be crucial to ensure successful remediation attempts.
Conclusions
In summary, a quantitative analysis of geohistorical collections housed at MSU, in combination with more recent surveys, shows that mussel richness declined after dam construction, with losses among channelization-tolerant and impoundment-intolerant or mildly tolerant species. However, a combination of subsampling and generalized additive models suggests that community structure has since stabilized, in concert with a decline in turbidity and an increase in light penetration over the period ~1980-2020. These findings suggest that Kentucky Lake may be a site that would benefit from dedicated restoration efforts, particularly if unionid-specific fish host species are also present.
Lastly, the results of this work emphasize the utility and importance of local-scale geohistorical collections such as that housed at MSU and highlight how these data can be brought to bear on pressing issues of ecological monitoring and conservation. Given that both digitization and analysis of geohistorical collections represent relatively inexpensive ways of examining long-term trends in community health, studies along these lines may present an affordable way forward for identifying freshwater river systems in need of monitoring and remediation. As part of this ongoing effort, we have made the digitized MSU collection data freely available upon request.
Figure 1.
Figure 1. Map of Land Between the Lakes National Recreation Area in southwestern Kentucky, bordering Tennessee. Water quality sites (yellow circles and associated site abbreviations) are monitored by Hancock Biological Station (HBS), and mussel sampling localities (black dots) are sites recorded in the Murray State University (MSU) freshwater mussel collections. The green shaded land represents the Land Between the Lakes National Recreation Area.
Figure 2.
Figure 2. Yearly mean values of rarefied species richness in Kentucky Lake through time, based on 100,000 iteratively subsampled pools of 5 specimens for years during which >= 10 specimens were collected. The blue line represents a linear model line of best fit (correlation estimate = -0.355, p < 2.2e-16) and the grey shaded region indicates a 99% confidence interval. Because the confidence level is so high, the grey shaded region is present but extremely close to the line of best fit. The black points represent mean values for each year, but the linear regression was calculated with all values.
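To make the subsampling procedure behind Figure 2 concrete, the following is a minimal sketch in Python of rarefied richness estimation; the species list and cutoff handling are illustrative assumptions, not the authors' original code.

```python
import random

def rarefied_richness(specimen_species, pool_size=5, iterations=100_000,
                      min_specimens=10):
    """Mean number of species in repeated random draws of `pool_size` specimens."""
    if len(specimen_species) < min_specimens:  # the >= 10 specimens per year cutoff
        return None
    total = 0
    for _ in range(iterations):
        draw = random.sample(specimen_species, pool_size)
        total += len(set(draw))                # species richness of this draw
    return total / iterations

# Hypothetical one-year sample: each entry is the species of one specimen.
year_sample = ["A"] * 3 + ["B"] * 4 + ["C"] * 2 + ["D"] * 5
print(rarefied_richness(year_sample))
```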
Figure 3.
Figure 3. Temporal changes in the proportion of the assemblage belonging to categories provided and described by Haag (2012). Years during which at least 15 specimens were collected are represented in these plots. Lines on all plots represent Generalized Additive Models (GAM) of best fit for the data. (a) Proportion of channelization-tolerant species. GAM results: the proportion of channelization-tolerant species decreases significantly over the study interval (p = 0.003, with 93.7% of the deviance in proportion explained by the time series). (b) Proportion of impoundment-tolerant species. Intolerant species are only present in 1930 (pre-dam) in this analysis. GAM results: the proportion of mildly impoundment-tolerant species decreases over the study interval (p = 0.136, with 46.5% of the deviance in proportion explained by the time series); the proportion of highly impoundment-tolerant species increases over the span of the dataset (p = 0.090, with 55.2% of the deviance in proportion explained by the time series). (c) Proportion of different life histories. The proportion of opportunistic species significantly decreases over the study interval (p = 0.037, with 81.3% of the deviance explained by the time series); the proportion of periodic species remains constant (p = 0.098, with 53.6% of the deviance explained by the time series); the proportion of equilibrium species increased significantly near the end of the study interval (p = 0.057, with 76.3% of the deviance explained by the time series). | 2023-04-27T15:17:23.152Z | 2023-04-17T00:00:00.000 | {
"year": 2023,
"sha1": "f60de9e3c87e29998544ebddf24df62a55af168e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/02705060.2023.2203712",
"oa_status": "CLOSED",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "22dbee922e400a444fe6536c28d0f35c0cf04bc3",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
11400490 | pes2o/s2orc | v3-fos-license | Betulinic acid protects against N-nitrosodimethylamine-induced redox imbalance in testes of rats
ABSTRACT Objectives: N-nitrosodimethylamine (NDMA) is known to elicit carcinogenic activity in the liver and kidney of animals. There is a dearth of information on its effects in the testis. This study evaluated the protective role of betulinic acid (BA) against NDMA-induced redox imbalance in the testes of rats. Methodology: Twenty-four male rats were assigned into four groups and treated with normal saline, BA, NDMA and [BA+NDMA]. BA (25 mg/kg) was given for 14 days, while NDMA (5 mg/kg) was given on days 7 and 12. Results: Administration of NDMA significantly increased the weight and relative weight of testes by 51 and 71%, respectively, while treatment with BA attenuated the weight gain. Furthermore, NDMA decreased the sperm count, motility and live-dead ratio by 57, 36 and 37%, respectively, and increased total sperm abnormality by 56%. However, BA attenuated the changes in the spermiogram of NDMA-treated rats. NDMA significantly decreased the activities of antioxidative enzymes and the levels of follicle-stimulating and luteinizing hormones, while testicular levels of thiobarbituric acid reactive substances and total cholesterol were increased. Also, NDMA increased the activities of aniline hydroxylase and aminopyrine-N-demethylase. Supplementation with BA attenuated the NDMA-induced alteration in these biochemical indices. Conclusion: BA protects against NDMA-induced redox imbalance via activation of the antioxidative pathway.
Introduction
Humans and animals are exposed to N-nitrosamines through chemical industries, processed foods, ground water and polluted environments, giving rise to toxicological implications [1]. The toxicity of NDMA has been associated with free radicals, reactive metabolites and inflammatory substances [2]. N-nitrosamines are activated by N-nitrosodimethylamine-N-demethylases I and II via α-hydroxylation [3]. The α-hydroxy nitrosamine formed is finally converted to an alkyl nitrenium or carbonium ion, which can alkylate DNA and other macromolecules [4]. It is also known that the carcinogenicity of nitrosamines is directly proportional to the activities of the activating enzymes [5]. Recently, our studies on the hepatorenal toxicity of NDMA linked its toxicity to increases in DNA fragmentation and the expression of Bcl-2, p53, Ki-67 and CD15 in rats [6,7]. However, there is a dearth of information on the effects of NDMA on the male reproductive system.
Materials and methods
Chemicals NDMA and BA were purchased from Sigma Chemical Co. (Saint Louis, MO, U.S.A). Glutathione, hydrogen peroxide, 5,5 ′ -dithios-bis-2-nitrobenzoic acid (DTNB) and epinephrine were purchased from Sigma Chemical Co., Saint Louis, MO, U.S.A. Trichloroacetic acid and thiobarbituric acid (TBA) were purchased from British Drug House (BDH) Chemical Ltd, Poole, UK. Other chemicals and reagents were of analytical grade and purest quality available.
Experimental animals
Male Wistar rats (135.7 ± 6.6 g; 128-142 g) were purchased from the Animal House of the Faculty of Basic Medical Sciences of the University of Ibadan, Nigeria. They were housed in plastic cages, fed on rat pellets and given drinking water ad libitum. The rats were acclimatized for 7 days before the experiment and subjected to a 12-hour light/dark cycle and a temperature of 29 ± 2°C. The study was approved by the Faculty of Basic Medical Sciences, University of Ibadan Animal Ethics Committee.
Study design
Twenty-four male Wistar rats were randomly assigned into four groups of six rats each. The first group (control) was given normal saline, second group (BA) was given BA alone (25 mg/kg body weight), third group (NDMA) was given NDMA alone (5 mg/kg body weight), while the fourth group (BA + NDMA) was given BA (25 mg/kg body weight) and NDMA (5 mg/kg body weight). The BA and NDMA were separately dissolved in normal saline. The choice of dosage and vehicle was based on a previous study [7]. BA was given by gavage for 14 consecutive days, while NDMA was given intraperitoneally on days 7 and 12 of the study. Administration of NDMA (days 7 and 12) was staggered purposively to produce the desired toxicity in the rats without mortality.
Collection of semen, blood and testes
The rats were fasted overnight after the last dose of BA on day 14. Blood was collected by ocular bleeding, allowed to clot and then centrifuged at 3000g for 10 minutes to obtain serum. Animals were euthanized by exsanguination, and semen was collected from epididymis immediately for analysis. The testes were excised, washed in ice-cold 1.15% potassium chloride solution to remove blood stains, dried and weighed. One portion of testicular tissue was homogenized using a Teflon homogenizer and centrifuged using a high speed refrigerated centrifuge (HITACHI) at 10,000g for 10 minutes to obtain postmitochondrial fraction. The other portion of the tissue was fixed in Bouin solution for histological analysis.
Sperm analysis
Sperm count and sperm motility were determined according to the method described by Oehninger et al. [20]. Sperm abnormalities were determined by assessing the morphological features, including sperm head, mid-piece and tail, as described by Duran et al. [21].
Protein determination
Testicular protein levels were determined according to the method of Lowry et al. [22] using bovine serum albumin as the standard.
Hormonal assays
Serum testosterone was assayed by the enzyme-linked immunoabsorbent assay (ELISA) as described by Tietz [23] using Serozyme I Serono (Diagnostics, Freiburg, Germany). The testosterone concentration was obtained by correlating the absorbance of the test sample at 550 nm with the corresponding absorbance on the standard curve. The FSH, prolactin and LH concentrations were determined based on a solid-phase enzyme-linked immunoabsorbent assay, as described by Uotila et al. [24].
Determination of aniline hydroxylase activity
Aniline hydroxylase activity was determined based on the formation of p-aminophenol during the hydroxylation of aniline hydrochloride, as described by Ko et al. [25].
Determination of aminopyrine-N-demethylase activity
Activity of aminopyrine-N-demethylase was determined according to the method of Tu and Yang [26]. The assay is based on the N-demethylation of aminopyrine (4-dimethyl aminoantipyrine) to 4-aminoantipyrine with a stepwise formation of formaldehyde. The amount of formaldehyde formed was estimated by the method of Nash [27].
Determination of uridyl diphosphoglucuronsyl transferase activity
The uridyl diphosphoglucuronsyl transferase (UDPGT) activity was determined according to the method of Letelier et al. [28]. The activity is proportional to the rate of conjugation of pnitrophenol with UDP-glucuronic acid.
Determination of total cholesterol, triglyceride and phospholipids levels
Testicular total cholesterol (TC), phospholipids (PL) and triglyceride (TG) levels were estimated according to the methods of Naito [29] and Buccolo and David [30].
Determination of thiobarbituric acid reactive substances level
Testicular lipid peroxidation (LPO) was estimated by determining the concentration of thiobarbituric acid reactive substances (TBARS), as described by Ohkawa et al. [31]. The method is based on the reactivity of an end product of LPO, malondialdehyde (MDA), with TBA to produce a pink adduct. The absorbance of the clear supernatant was read in a spectrophotometer against a reference blank at 532 nm. LPO was expressed in micromoles of MDA formed/mg protein using a molar extinction coefficient of 1.56 × 10^5/M/cm.
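As a worked example of the Beer-Lambert calculation implied by the stated extinction coefficient (a sketch; the absorbance reading and path length are hypothetical, not measurements from this study):

```python
# MDA concentration from absorbance via Beer-Lambert: A = epsilon * c * l
epsilon = 1.56e5       # molar extinction coefficient of the MDA-TBA adduct (1/M/cm)
path_length_cm = 1.0   # standard cuvette path length (assumed)
absorbance = 0.234     # hypothetical reading at 532 nm

c_molar = absorbance / (epsilon * path_length_cm)   # mol/L
c_micromolar = c_molar * 1e6
print(f"MDA = {c_micromolar:.3f} uM")               # ~1.5 uM for A = 0.234
```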
Determination of superoxide dismutase activity
Testicular superoxide dismutase (SOD) activity was determined by the method of Misra and Fridovich [32]. The method is based on the ability of SOD to inhibit the autoxidation of epinephrine (pH 10.2, 30°C). The increase in absorbance of the assay reaction at 480 nm was monitored spectrophotometrically at 30-second intervals for 150 seconds. The specific activity of SOD was expressed in units/mg protein.
Assay of GPx
The assay is based on the reduction of an organic peroxide in a reaction mixture and the oxidation of reduced glutathione (GSH) to form glutathione disulfide (GSSG). The GSSG is then reduced by glutathione reductase and NADPH in the reaction mixture, forming NADP+, resulting in decreased absorbance at 412 nm. The decrease in absorbance at 412 nm is directly proportional to the activity of GPx. The GPx activity was expressed as mmol/mg protein.
Assay of CAT
The method is based on the ability of CAT to promote the decomposition of H 2 O 2 in a reaction mixture. The change in absorbance during 3 minutes at 240 nm is a measure of CAT activity. CAT activity was expressed as units/mg protein.
Assay of GST
The method is based on the ability of GST to catalyze the conjugation of L-glutathione and CDNB to form a conjugate (GS-DNB) with an absorbance at 340 nm. Hence, the rate of increase in the absorbance at 340 nm is directly proportional to the GST activity in the sample. One unit of GST activity is defined as the amount of enzyme producing 1 mmol of GS-DNB conjugate per minute under the conditions of the assay. The specific activity of GST was expressed as micromoles of GS-DNB conjugate formed per minute per mg protein using an extinction coefficient of 9.61/mM/cm.
Determination of GSH level
Testicular GSH level was determined using the method described by Mitchell et al. [36]. Briefly, the assay involves the oxidation of GSH by the sulfhydryl reagent DTNB to form a yellow derivative, 5′-thio-2-nitrobenzoic acid, with an absorbance at 412 nm. The GSH level is proportional to the absorbance at 412 nm. Values were expressed as µmol/g tissue.
Histology
A section of the testicular tissue fixed in Bouin solution was dehydrated in 95% ethanol and then cleared in xylene before being embedded in paraffin. Microsections (3 μm) were prepared, stained with hematoxylin and eosin (H&E) dye, and examined under a light microscope by a histopathologist who was blinded to the treatment groups.
Statistical analysis
All values were expressed as the mean ± standard deviation of six animals per group. Data were analyzed using one-way analysis of variance followed by the post hoc Duncan multiple range test for the analysis of biochemical data, using SPSS (10.0). Values were considered statistically significant at P < 0.05.
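A minimal sketch of this analysis in Python, using hypothetical group values: SciPy provides the one-way ANOVA, while the post hoc Duncan multiple range test is not available in SciPy and would be run separately (e.g., in SPSS, as the authors did).

```python
# One-way ANOVA across the four treatment groups (hypothetical values).
from scipy.stats import f_oneway

control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
ba      = [5.0, 5.2, 4.9, 5.1, 5.3, 5.0]
ndma    = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8]
ba_ndma = [4.2, 4.0, 4.5, 4.1, 4.3, 4.4]

f_stat, p_value = f_oneway(control, ba, ndma, ba_ndma)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # significant if p < 0.05
```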
Results
Effects of BA on body-weight gain and reproductive indices in NDMA-treated rats

As shown in Table 1, NDMA caused a 44% decrease in the body-weight gain of rats relative to controls. In the group treated with BA alone, body-weight gain increased insignificantly (P > 0.05) when compared with controls. However, in rats co-treated with NDMA and BA, the body-weight gain was found to increase by 52% relative to NDMA-treated animals. Furthermore, NDMA administration significantly (P < 0.05) increased the weight and relative weight of testes by 51 and 71%, respectively, when compared with controls. Upon supplementation with BA, both the weight and relative weight of testes were attenuated (Table 1). In addition, NDMA administration significantly (P < 0.05) decreased the sperm count, motility and live-dead ratio of rats by 36, 31 and 29%, respectively, and increased total sperm abnormality by 62%, relative to controls. However, supplementation with BA significantly (P < 0.05) attenuated the NDMA-induced alteration in the spermiogram. Importantly, the sperm volume was insignificantly affected by NDMA intoxication (Table 2). Table 3 depicts the effect of BA on the levels of reproductive hormones in NDMA-treated rats. Administration of NDMA significantly (P < 0.05) decreased the serum concentrations of LH, FSH and testosterone by 30, 43 and 37%, respectively, when compared with controls. Supplementation with BA significantly (P < 0.05) increased the serum levels of LH, FSH and testosterone by 21, 57 and 39%, respectively, relative to NDMA-treated rats. The level of serum prolactin was insignificantly (P > 0.05) affected by treatment with NDMA and BA.
Effects of BA on drug-metabolizing enzymes and histomorphometry of testes in NDMA-treated rats
Effects of NDMA administration on the phase I enzymes aniline-4-hydroxylase (AnH) and aminopyrine-N-demethylase (AmD) and the phase II enzyme UDPGT are given in Table 5. NDMA administration significantly (P < 0.05) increased the activities of AnH and AmD in the testes of rats by 326 and 59%, respectively, while UDPGT activity was lowered by 42% relative to controls. However, supplementation with BA reduced the alteration caused by NDMA in these biochemical parameters. Histological examination of the testicular tissues revealed that NDMA induced severe sub-capsular congestion, cyto-architectural changes and spermato-cellular necrosis in rats when compared with controls. Such cytological lesions were reduced in the group supplemented with BA (Figure 6).
Discussion
The major findings from the NDMA-administered rats were: decrease in body-weight gain; increase in weight and relative weight of testes; induction of phase I enzymes and inhibition of a phase II enzyme; alteration in lipid, hormonal and antioxidant profiles; as well as cyto-architectural changes in the testes of rats. Notably, BA, when given to NDMA-treated rats, was able to mitigate NDMA-induced adverse effects in these animals. The observed increase in testicular weight seen in this study may have resulted from NDMA-induced testicular hypertrophy and necrotic degeneration of seminiferous tubules [37]. The ability of BA to reduce the weight and relative weight of testes of NDMA-treated rats indicates that this triterpenoid could protect against NDMA-induced testicular hypertrophy. NDMA is a toxicant found in processed foods and ground water, and its adverse impacts on organs, especially the liver and kidney, have been studied [7]. However, there is a dearth of information on the effect of this toxicant on the male reproductive system. In the present study, NDMA intoxication increased the level of TBARS (an index of LPO) in the testes of rats. LPO is the process of oxidation of polyunsaturated fatty acids by ROS to produce hydroperoxides and peroxyl radicals, which can be converted to reactive aldehydes such as MDA [38]. The elevated level of TBARS in the testes of NDMA-treated rats confirmed the induction of oxidative damage. Also, the observed decrease in the testicular activities of SOD and CAT in NDMA-treated rats confirms enzyme inhibition, probably due to the action of NDMA metabolites, which may enhance superoxide radical and H2O2 accumulation and thus expose the testes to oxidative stress. SOD is involved in the control of decidual cell differentiation in rats and the regulation of cell proliferation [39]. The decrease in SOD activity of NDMA-treated rats observed in this study is similar to the findings of Choi et al. [40], in which NDMA intoxication decreased hepatic SOD, CAT and GST activities of treated rats. In addition, administration of NDMA caused significant depletion of testicular glutathione and rapid loss of activities of enzymes of the GSH pathway such as GPx and GST. GSH, a key cellular antioxidant, has been identified as the critical factor needed for spermatozoa maturation in aged animals [42]. Thus, the observed decrease in GSH is an indication of an anti-fertility effect of NDMA. Furthermore, GPx and GST activities were significantly reduced in the testes of NDMA-treated rats. GPx protects against oxidative damage by reducing hydrogen peroxide to water [43]. The combined depletion of GSH and reduction of GPx and GST activities strongly suggest that NDMA may adversely affect the glutathione metabolic pathway and promote oxidative damage in the testes. The observed effect of NDMA on markers of oxidative stress is consistent with previous studies [6,7,41]. However, supplementation with BA improved the antioxidant status (enzymatic and non-enzymatic) of NDMA-treated animals.
One of the mechanisms of the anti-gonadal action of NDMA may be an inhibitory effect on some biochemical processes in tissues of animals. It has been suggested that reactive metabolites from NDMA may suppress gonadotropin release (LH and FSH) by the pituitary [44]. The normal production of FSH and LH by the pituitary is a factor required for spermatogenesis by seminiferous tubules [45]. LH is known to induce Leydig cells to secrete testosterone, which regulates spermatogenesis via androgen receptors, while FSH regulates spermatogenesis by stimulating the production of Sertoli cell factors. In this study, NDMA intoxication decreased the production of LH and FSH, which may impair the endocrine regulation of spermatogenesis and consequently affect the reproductive function of the testes. The significant reduction in the level of testosterone observed in NDMA-treated rats confirms impaired steroidogenesis. Testosterone plays an important role in the regulation of spermatogenesis [46]. Thus, NDMA administration decreased sperm count, sperm motility and live-dead ratio, and increased total sperm abnormality in rats. The observed decline in sperm quality may be due to inadequate hormonal levels. The biochemical data from this study were also corroborated by the histology of the testes, which showed that NDMA administration caused changes in cyto-architecture, congestion and degeneration of the seminiferous tubules. The histopathological result is consistent with the findings of Hard and Butler [37], who reported necrotic degeneration of seminiferous tubules following NDMA intoxication. Notably, BA significantly reduced the adverse effects of NDMA on the reproductive hormones, thereby improving the sperm quality of the animals.
Studies have shown that N-nitrosamines can generate high amounts of ROS that may promote damage to the endothelial region of arterial vessels, resulting in cardiovascular diseases [47]. The elevated levels of TC and TG, and the decreased level of PL, in NDMA-treated rats are indications that this toxicant could potentially impair lipid metabolism in the testes. Elevated levels of TC in testes may affect the hormonal response of this organ in the production of testosterone and may impair gonadal steroidogenesis. Furthermore, NDMA intoxication increased the activities of AnH and AmD in the testes, NDMA thus serving as an inducer of these phase I enzymes. Induction of cytochrome P450-dependent monooxygenases will enhance xenobiotic metabolism, consequently forming more reactive metabolites. This may explain how NDMA metabolites could exert their toxic effect in the testes. This observation is consistent with the findings of Sheweita and Mostafa [5], who reported that the carcinogenicity of N-nitrosamines increased with an increase in N-nitrosamine metabolites. Also, a study by Erkekoglu and Baydar [48] established that inhibition of N-nitrosamine toxicity could be achieved via reduction in the activities of cytochrome P450-dependent enzymes. The UDPGT enzymes are located in the membrane of the endoplasmic reticulum and catalyze the conjugation of xenobiotics with UDP-glucuronic acid to form polar conjugates that can be rapidly excreted [49]. Owing to its role in drug detoxification, impairment of UDPGT may constitute an important determinant of toxicologic predisposition, with respect to chemical carcinogenesis and teratogenesis [50]. In this study, UDPGT activity was significantly reduced in the NDMA-treated group, which may be an important factor in the observed toxicity of this nitrosamine. Interestingly, BA given to NDMA-treated rats restored the activities of phase I and II enzymes to near control values. This further confirms the beneficial effect of BA in ameliorating NDMA-induced toxicity in rats.
In conclusion, our data showed that NDMA induced redox imbalance in the testes of rats, leading to endocrine disruption, dyslipidemia, lowered sperm quality and altered activities of drug-metabolizing enzymes, all of which are amenable to BA supplementation. | 2018-04-03T04:40:12.912Z | 2017-05-05T00:00:00.000 | {
"year": 2017,
"sha1": "3a80f9f126b4ca09a218cefc4a3f22d4772eedcf",
"oa_license": null,
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13510002.2017.1322750?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "37a08bba9e95ef52a09ebcedc23929fae3d8e7b8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
32186343 | pes2o/s2orc | v3-fos-license | Secure and Lightweight Key Distribution with ZigBee Pro for Ubiquitous Sensor Networks
We propose a secure and lightweight key distribution mechanism using ZigBee Pro for ubiquitous sensor networks. ZigBee consumes low power and provides security in wireless sensor networks. ZigBee Pro provides more security than ZigBee and offers two security modes: standard security mode and high security mode. Despite its high security mode, ZigBee Pro has a key distribution weakness. We use an enhanced ECDH for secure key distribution in high security mode. Our simulation results show that the energy consumption of our approach decreases and the average run time is decreased by 39%. Moreover, the proposed scheme enhances security, that is, confidentiality, message authentication, and integrity. We also prove that the proposed key distribution can resist man-in-the-middle attack and replay attack.
Introduction
Various sensors in a sensor network technology are located within wired/wireless network infrastructures. Spatially distributed autonomous sensors monitor physical or environmental conditions such as temperature, humidity, sound, vibration, pressure, and motion and pass their data through the wired/wireless network to a base station. Sensor network technology has been utilized in monitoring military, home automation, and health care systems, as well as agriculture and weather conditions. Sensors have limited memory and throughput capacity in wireless sensor networks. Therefore, limitations of the sensor itself and the underlying vulnerability of wireless communication with the sensors must be considered. In addition, sensed and transmitted data in each field are usually private information or important authentication information. Thus, security is to be applied in most cases. For this, ZigBee [1] provides a low-power-consumption, security standard-based protocol for applications on wireless sensor networks; ZigBee was developed to address these requirements.
ECDH
ECDH [6] is a key exchange algorithm, the well-known Diffie-Hellman [11] key agreement based on ECC (Elliptic Curve Cryptography) [12]. ECDH is important in modern protocols as a key exchange and can be adopted for ECC. Figure 1 shows the key exchange process.
Consider two parties, A and B, willing to exchange a common secret key. Both have agreed to a common and publicly known curve over a finite field, as well as to a base point G of order n. User A randomly chooses a private key d_A, 1 < d_A < n, and User B accordingly chooses d_B, 1 < d_B < n. User A computes a public key Q_A = d_A G, and User B computes Q_B = d_B G. User A sends Q_A to User B, and User B sends Q_B to User A. User A computes the shared secret key by K = d_A Q_B, and User B likewise by K = d_B Q_A [13]. An eavesdropper knows only Q_A and Q_B but is unable to compute the secret key from these. However, a vulnerability of ECDH is that it has no authentication [14] and no prevention of man-in-the-middle attack [15]. In standard security mode, we apply ECDH for secure Network Key generation and transmission, and a sub-MAC mechanism for message authentication and integrity. We proved that our scheme could provide efficiency by achieving a similar run time and similar energy consumption in standard security mode [16].
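The exchange above can be illustrated with a toy implementation over a very small curve (a sketch for exposition only; the tiny textbook parameters p = 17, a = b = 2, G = (5, 1), n = 19 are illustrative and offer no real security):

```python
import random

# Toy curve y^2 = x^3 + a*x + b over GF(p); G has prime order n.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def inv(x):                      # modular inverse via Fermat's little theorem
    return pow(x, p - 2, p)

def ec_add(P, Q):                # affine point addition; None = point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        s = (y2 - y1) * inv(x2 - x1) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P):                # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

dA, dB = random.randrange(2, n), random.randrange(2, n)  # private keys
QA, QB = ec_mul(dA, G), ec_mul(dB, G)                    # exchanged public keys
assert ec_mul(dA, QB) == ec_mul(dB, QA)                  # same shared secret
print("shared point:", ec_mul(dA, QB))
```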
High Security Mode.
Figure 2 shows the high security mode authentication procedure of ZigBee Pro when the Trust Center does not already share a Master or Link Key with the newly joined device.
The Symmetric-Key Key Establishment (SKKE) protocol is a process in which an initiator device (Trust Center) establishes a Link Key with a responder device (Joiner) using a Master Key. The next step is an entity authentication process between Router and Joiner.
As in standard security mode, the Update-Device Command and the Secured Transport-Key Command are encrypted with the Master Key, but the Transport-Key Command sent from the Router to the Joiner is not secured, which is a security issue.
The MAC scheme is used for key confirmation in SKKE. During keying-data generation, the first 128 bits of the keying data shall be a Mac Key and the second 128 bits shall be a Link Key. After SKKE, the Network Key is securely transmitted using the Master Key.
We propose a procedure to ensure secure key distribution, as shown in the figure. When the Trust Center receives an APSME-UPDATE-DEVICE.request message, the Trust Center generates a private value a for the secure Master Key and a nonce N, and sends aG and N, together with sub-MAC computed over the transmitted values, to the Joiner. The Joiner generates the sub-MAC over the received values and compares it with the transmitted sub-MAC. If they match, the Joiner confirms that the transmitted message has not been modified; otherwise, the Joiner discards the transmitted message. If the check is successful, the Joiner computes K = b(aG) = abG and hashes it using the Matyas-Meyer-Oseas (MMO) hash function [17]; the 160-bit digest becomes the 128-bit Master Key. A sub-MAC [7] is constructed by selecting some bits of an HMAC. We reduce the overhead by transmitting only a part of the actual HMAC, rather than the entire HMAC, using the sub-MAC. The sub-MAC guarantees message integrity and authentication. Our research selects 8 bits of the 16-byte HMAC. We assume each node has the same PRNG (Pseudo Random Number Generator) [18].
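A minimal sketch of the sub-MAC idea in Python follows; MD5 is used here only as a convenient 16-byte MAC output, and the key, message contents, and tag length are illustrative assumptions rather than the scheme's exact encoding.

```python
# Sub-MAC sketch: keep only 8 bits (1 byte) of a full 16-byte HMAC.
import hmac, hashlib

def sub_mac(key: bytes, message: bytes, n_bytes: int = 1) -> bytes:
    full = hmac.new(key, message, hashlib.md5).digest()  # 16-byte HMAC
    return full[:n_bytes]                                # truncated tag

master_key = b"0123456789abcdef"     # hypothetical 128-bit key
msg = b"aG || nonce"                 # placeholder for the handshake values
tag = sub_mac(master_key, msg)

# Receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, sub_mac(master_key, msg))
print(tag.hex(), ok)
```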
Next, the generated Master Key encrypts the incremented nonce N + 1, and the result, E_MK(N + 1), is sent to the Joiner to check message integrity and announce successful Master Key generation. The Joiner decrypts E_MK(N + 1) with the Master Key and checks N + 1 to verify secure Master Key generation. If successful, the Trust Center and the Joiner perform the next step, SKKE, to establish a Link Key.
Simulation and Results
The Qualnet simulator was used to evaluate the performance of the proposed scheme. Our research uses Qualnet 4.5 [19] with sensor network libraries based on the ZigBee protocol and additional protocols.
We composed one clustered network structure. The cluster was composed of 15 nodes: node 1 is the Joiner, node 16 is the Router, and node 18 is the Trust Center.
Efficiency Analysis of Enhanced Key Mechanism.
We propose an enhanced key distribution scheme using ECDH for secure and lightweight key distribution and sub-MAC to overcome the vulnerability of ECDH. The simulation was performed ten times in each of the previous four procedures with Trust Center, Router, and Joiner.
First, we performed the key generation in standard security mode and high security mode, proposed key distribution in standard mode (Standard ECDH), and proposed key distribution in high security mode (High ECDH). Figure 4 shows the total run time measurements.
The average run time of the standard security mode is 0.5156 seconds, and that of the proposed key distribution in standard mode (Standard ECDH) is 0.5778 seconds; the difference is 0.0622 seconds. Compared to the average run time of standard security mode, this is an increase of 12%. However, the difference of 0.0622 seconds is slight in absolute terms and is acceptable given the enhanced security.
When the proposed key distribution in high security mode (High ECDH) is compared to the average run time of high security mode, the run time is decreased by 39%, while also providing enhanced security. Next, we measured energy consumption in the Joiner (Node 1), Router (Node 16), and Trust Center (Node 18). Figure 5 shows the average energy consumption in transmit mode, and Figure 6 shows the average energy consumption in receive mode. The average energy consumption of each node in transmit mode and receive mode is similar; Table 3 details the values. When the proposed key distribution in security mode is compared to the standard security mode, it consumes more energy. In particular, the receive mode of the Trust Center (N18-R) shows the maximum difference, 0.001447 mJoule. However, the Trust Center has sufficient capacity and energy, so this difference is negligible. The second largest difference is 0.001412 mJoule, in the receive mode of the Joiner (N1-R). The sensor node uses two AA alkaline batteries. An AA alkaline battery contains a maximum of 3000 mAh, so the total capacity is 6000 mAh. The nominal voltage of an AA battery is assumed to be 1.5 volts. The amount of electric energy is 9 Wh, the product of 6 Ah and 1.5 V, which converts to 32,400 J (3,600 × 9 J) [20]. The difference is slight compared to 32,400 J.
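The energy-budget arithmetic above can be checked in a few lines (values taken from the text; the comparison against the largest observed per-node difference is our own illustration):

```python
capacity_ah = 6.0                 # two 3000 mAh cells, summed as in the text
voltage_v = 1.5                   # nominal AA voltage
energy_j = capacity_ah * voltage_v * 3600   # 9 Wh -> 32,400 J

delta_j = 0.001447e-3             # 0.001447 mJ, largest measured difference
print(energy_j, delta_j / energy_j)         # ~4.5e-11 of the total budget
```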
The energy consumption of the high security mode and proposed key distribution in high security mode (High ECDH) is similar. The energy consumption of proposed key distribution in high security mode (High ECDH) decreases, except for the transmit mode of the Joiner (N1-T) and the receive mode of the Router (N16-R). Moreover, the proposed scheme enhances security.
Security Analysis
In this section, we analyze how our enhanced key distribution for ZigBee Pro provides security properties and resists some general attacks. ZigBee Pro is vulnerable in the case of key distribution in its two security modes. ECDH cannot prevent man-in-the-middle attack and does not provide authentication. However, our proposed scheme overcomes these vulnerabilities and enhances security. Our scheme can resist man-in-the-middle attack and replay attack, and ensures confidentiality of keys, message authentication, and message integrity [16]. We assume that an attacker does not know the sub-MAC method. Therefore, even if the attacker knows the Joiner's private key b, he/she cannot construct the sub-MAC message. If the attacker tries to construct the sub-MAC message, the probability of failure increases because the attacker does not know how to create a sub-MAC message using the Master Key. Additionally, there is a public key infrastructure (PKI) system: the Trust Center assures the private key using the received public key through a certificate authority (CA). The security of a MAC scheme can be quantified in terms of the success probability achievable as a function of the total number of queries needed to forge the MAC [21]. The security of an n-byte MAC is quantified as 2^(n×8), because an intruder has a 1 in 2^(n×8) chance of blindly forging the MAC. To increase the security of a MAC, its size should be increased; however, increasing the size of the MAC also increases the communication overhead [22]. Our sub-MAC selects 8 bits of 128 bits. Therefore, the security of the sub-MAC is 2^8. Hence, the probability that false data are not detected by the sub-MAC is 1/2^8 (= 0.0039). Moreover, the communication overhead is reduced to 1/16 (= 0.0625) of the full MAC. Consequently, the size of the sub-MAC is directly related to the strength of the security and the communication overhead; a balance needs to be achieved between the desired security level and the transmission overhead [7].
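The trade-off just described is a two-line computation (a sketch; values follow the text):

```python
tag_bits, full_mac_bytes = 8, 16            # 8-bit sub-MAC of a 128-bit HMAC
forge_prob = 1 / 2**tag_bits                # blind-forgery chance: 1/256
overhead = (tag_bits / 8) / full_mac_bytes  # bytes sent vs full MAC: 1/16
print(f"{forge_prob:.4f} {overhead:.4f}")   # 0.0039 0.0625
```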
BAN Analysis.
BAN logic (the Logic of Authentication of Burrows, Abadi and Needham) [23] is widely used and studied in formal analysis due to its simplicity and efficiency. BAN logic is a modal logic based on belief and can be used in the analysis and design of cryptographic protocols. The use of a formal language in the analysis and design process can exclude faults and improve the security of the protocol.
Basic Notations.
The symbols P and Q denote principals involved in this sort of key agreement protocol, X denotes a statement (formula), and K denotes a key; P ←K→ Q represents a good session key K for communication between P and Q [24].

P |≡ X: Principal P believes X; P acts as if X is true.

P ⊲ X: P sees X; some principal has sent P a message containing X.

P |∼ X: Principal P once said X; P at some time believed X and sent it as part of a message.

P ⇒ X: Principal P has jurisdiction over X; P has authority over X and is trusted in this matter.

#(X): The formula X is fresh; that is, X has not been sent in a message at any time before the current run of the protocol. A message that is created for the purpose of being fresh is called a nonce.

P ←K→ Q: P and Q may use the shared key K to communicate. The key K is good and will always be known only to P and Q and to any other principal trusted by either of them. {X}_K: X is encrypted using key K.
Inference Rules.
Message Meaning Rule for shared keys: if principal P believes that the key K is shared only with principal Q and P sees a message X encrypted under K, then P believes that Q once said X; that is, P may conclude that the message was originally created by Q. In notation: from P |≡ Q ←K→ P and P ⊲ {X}_K, infer P |≡ (Q |∼ X).
Jurisdiction Rule is as Follows

If P believes that Q believes X and also believes that Q has jurisdiction over X, then P should believe X too. In notation: from P |≡ (Q ⇒ X) and P |≡ (Q |≡ X), infer P |≡ X.
Nonce Verification Rule is as Follows: If P believes that X is fresh and that Q once said X, then P believes that Q has said X during the current run of the protocol and hence that Q believes X at present. In notation: from P |≡ #(X) and P |≡ (Q |∼ X), infer P |≡ (Q |≡ X). In order to apply this rule, X should not contain any encrypted text. The nonce verification rule is the only way of "promoting" a once-said assertion to actual belief.
BAN Analysis of the Proposed Key Distribution: Initialization Hypotheses are as Follows
(1) Trust Center |≡ TC.
According to the formal analysis, we can conclude that the proposed key distribution can resist man-in-the-middle attack and replay attack.
Conclusion
This work proposed an enhanced key distribution scheme using ECDH and sub-MAC for efficiency and security. We applied ECDH for secure key distribution and mitigated the vulnerability of ECDH by using a sub-MAC and a nonce for message freshness and integrity.
We compared ZigBee Pro to the proposed scheme. We proved that our scheme can provide efficiency by achieving a shorter run time and lower energy consumption in high security mode. Security analysis proved that our scheme can resist man-in-the-middle attack and replay attack, and provide confidentiality, message authentication, and integrity. Consequently, the proposed scheme provides lightweight and secure key distribution compared to ZigBee Pro. In future work, we plan to experiment with our proposed scheme on ZigBee devices. | 2018-04-03T02:06:05.921Z | 2013-07-17T00:00:00.000 | {
"year": 2013,
"sha1": "7fdce25f2d1bd2eb71d35103d7898a6b4c372d16",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2013/608380",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "570d11f9bc943bada7e5bee1ed2c2d41244a4f5e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
215745024 | pes2o/s2orc | v3-fos-license | Modeling the transmission of new coronavirus in S\~ao Paulo State, Brazil -- Assessing epidemiological impacts of isolating young and elder persons
We developed a mathematical model to describe the transmission of new coronavirus in São Paulo State, Brazil. The model divided a community into subpopulations comprised of young and elder persons, in order to take into account the higher risk of fatality among elder persons with severe CoViD-19. From data collected in São Paulo State, we estimated the transmission and additional mortality rates, from which we calculated the basic reproduction number R0. From the estimated parameters, the estimate of deaths due to CoViD-19 was three times lower than those found in the literature. Considering isolation as a control mechanism, we varied the isolation rates of young and elder persons in order to assess their epidemiological impacts. The epidemiological scenarios focused mainly on evaluating the number of severe CoViD-19 cases and deaths due to this disease when isolation is introduced in a population.
Introduction
Coronavirus disease 2019 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a strain of the SARS-CoV-1 (pandemic in 2002/2003), originated in Wuhan, China, in December 2019, and spread out worldwide. World Health Organization (WHO) declared CoViD-19 pandemic on 11 March, based on its own definition: "A pandemic is the worldwide spread of a new disease. An influenza pandemic occurs when a new influenza virus emerges and spreads around the world, and most people do not have immunity".
Coronavirus (RNA virus) can be transmitted by droplets that escape lungs through coughing or sneezing and infects humans (direct transmission), or they are deposited in surfaces and infects humans when in contact with this contaminated surface (indirect transmission). This virus enters in susceptible persons through nose, mouth or eyes, and infects cells in the respiratory tract, being capable of releasing millions of new virus. In serious cases, immune cells overreact and attack lung cell causing acute respiratory disease syndrome and possibly death. In general, the fatality rate in elder patients (60 years or more) is much higher than the average, and under 40 years seems to be around 0.2%. Currently, there is not vaccine, neither efficient treatment, even many drugs (cloroquine, for instance) are under clinical trial. Like all RNA-based viruses, coronavirus tends to mutate faster than DNA-viruses, but lower than influenza viruses.
Many mathematical and computational models are being used to describe the current new coronavirus pandemic. In mathematical models, there is a fundamental threshold (see [1]) called the basic reproduction number, which is defined as the number of secondary cases produced by one case introduced into a completely susceptible population, and is denoted by R_0. When a control mechanism is introduced, this number is reduced and is called the reduced reproduction number R_r. Ferguson et al. [4] proposed a model in order to investigate the effects of isolation of susceptible persons. They analyzed two scenarios, called by them mitigation and suppression. Roughly, mitigation reduces the basic reproduction number R_0, but not lower than one (1 < R_r < R_0), while suppression reduces the basic reproduction number to lower than one (R_r < 1). They predicted the numbers of severe cases and deaths due to CoViD-19 without control measures, and compared them with those numbers when isolation (mitigation and suppression) is introduced as a control measure. Li et al. discussed the role of undocumented infections [5].
In this paper, we formulate a mathematical model based on ordinary differential equations aiming firstly to understand the dynamics of CoViD-19 transmission; then, using data from São Paulo State, Brazil, we estimate model parameters and study potential scenarios introducing isolation as a control mechanism.
The paper is structured as follows. In Section 2, we introduce a model, which is numerically studied in Section 3. Discussions are presented in Section 4, and conclusions, in Section 5.
Material and methods
In a community where SARS-CoV-2 (new coronavirus) is circulating, the risk of infection is greater in elder than in young persons, as are the probability of being symptomatic and the CoViD-19-induced mortality. Hence, a community is divided into two groups, comprised of young (under 60 years old, denoted by subscript y) and elder (above 60 years old, denoted by subscript o) persons. The vital dynamics of this community is given by per-capita rates of birth (φ) and mortality (µ).
For each sub-population j (j = y, o), the persons are divided into eight classes: susceptible S_j, susceptible persons who are isolated Q_j, exposed E_j, asymptomatic A_j, asymptomatic persons who are caught by test and then isolated Q_1j, symptomatic persons in the initial phase of CoViD-19 (or pre-diseased) D_1j, pre-diseased persons caught by test and then isolated, plus mild CoViD-19 (or non-hospitalized) Q_2j, and symptomatic persons with severe CoViD-19 (hospitalized) D_2j. However, all persons in young and elder classes enter the same immunized class I after experiencing infection.
With respect to new coronavirus transmission, the history of natural infection is the same in the young (j = y) and elder (j = o) classes. We assume that only persons in the asymptomatic (A_j) and pre-diseased (D_1j) classes are transmitting the virus, while the other infected classes (Q_1j, Q_2j and D_2j) are under voluntary or forced isolation. Susceptible persons are infected according to λ_j S_j/N and enter classes E_j, where λ_j is the per-capita incidence rate (or force of infection) defined by λ_j = λ(δ_jy + ψδ_jo), with λ being

λ = β_1y A_y + β_2y D_1y + β_1o A_o + β_2o D_1o,    (1)

where δ_ij is the Kronecker delta, with δ_ij = 1 if i = j and 0 if i ≠ j, and S_j/N is the probability of the virus encountering susceptible persons. After an average period of time 1/σ_j in classes E_j, where σ_j is the incubation rate, exposed persons enter the asymptomatic A_j (with probability p_j) or pre-diseased D_1j (with probability 1 − p_j) classes. After an average period of time 1/γ_j in class A_j, where γ_j is the infection rate of asymptomatic persons, asymptomatic persons acquire immunity (recover) and enter the immunized class I. Another route of exit from class A_j is being caught by a test at a rate η_j and entering class Q_1j; then, after a period of time 1/γ_j, these persons enter class I. With very low intensity, asymptomatic persons are in voluntary isolation, which is described by the voluntary isolation rate χ_j. With respect to symptomatic persons, after an average period of time 1/γ_1j in class D_1j, where γ_1j is the infection rate of pre-diseased persons, pre-diseased persons enter the non-hospitalized Q_2j (with probability m_j) or hospitalized D_2j (with probability 1 − m_j) classes. Hospitalized persons acquire immunity after a period of time 1/γ_2j, where γ_2j is the recovery rate of severe CoViD-19, and enter the immunized class I, or die under the disease-induced (additional) mortality rate α_j. After an average period of time 1/γ_j in class Q_2j, non-hospitalized persons acquire immunity and enter the immunized class I, or enter class D_2j at the relapsing rate of pre-diseased persons ξ_j. Figure 1 shows the flowchart of the new coronavirus transmission model.
The new coronavirus transmission model, based on the above descriptions summarized in Figure 1, is described by a system of ordinary differential equations, with j = y, o: equations (2) for susceptible persons, equations (3) for infectious persons, and equation (4) for immune persons, with N = N_y + N_o, where the initial population size at t = 0 is N(0) = N_0. If φ = µ + (α_y D_2y + α_o D_2o)/N, the total size of the population is constant. The initial conditions (at t = 0) supplied to equations (2), (3) and (4) are X_j(0) = n_Xj for each compartment X_j, where n_Xj is a non-negative number. For instance, n_Ey = n_Eo = 0 means that there are not any exposed persons at the beginning of the epidemic. Table 1 summarizes the model variables. Isolation of persons deserves some words. In the modeling, isolation is applied to susceptible persons, who are known only at the exact time of the introduction of the new virus, that is, S(0) = N_0. However, as time passes, susceptible persons decrease in number and become immunized persons, and, due to asymptomatic persons, susceptible and immunized persons are indistinguishable (except those caught by test and hospitalized persons). For this reason, if isolation of persons is not done at the time of virus introduction, it is probable that virus will be circulating among them, but at a much lower transmission rate (the virus circulates only among household and neighborhood persons).
The numbers of persons infected with the new coronavirus are given by E_y + A_y + Q_1y + D_1y + Q_2y + D_2y for young persons, and E_o + A_o + Q_1o + D_1o + Q_2o + D_2o for elder persons. The incidence rates are Λ_y = λ_y S_y/N and Λ_o = λ_o S_o/N, where the per-capita incidence rate λ is given by equation (1), and the numbers of new cases C_y and C_o obey dC_y/dt = Λ_y and dC_o/dt = Λ_o, with C_y(0) = 0 and C_o(0) = 0; the numbers of new cases in a day are C_y^i = C_y(i) − C_y(i − 1) and C_o^i = C_o(i) − C_o(i − 1), where C_j(i) is the accumulated number at day i. Notice that C_y^i and C_o^i are the persons entering the exposed classes at each day.
The numbers of CoViD-19 cases ∆_y and ∆_o are given by the outflux of A_y, D_1y, A_o and D_1o into the isolated and diseased classes, with ∆_y(0) = 0 and ∆_o(0) = 0; the numbers of CoViD-19 cases in a day, ∆_y^i and ∆_o^i, are the persons entering classes Q_1y, D_2y, Q_2y, Q_1o, D_2o and Q_2o at each day. The numbers of severe CoViD-19 (hospitalized) cases Ω_y and Ω_o are given by the flux into the hospitalized classes, that is, dΩ_y/dt = (1 − m_y)γ_1y D_1y + ξ_y Q_2y and dΩ_o/dt = (1 − m_o)γ_1o D_1o + ξ_o Q_2o, with Ω_y(0) = 0 and Ω_o(0) = 0; the numbers of hospitalized cases in a day, Ω_y^i and Ω_o^i, are the persons entering classes D_2y and D_2o at each day. The number of deaths caused by severe CoViD-19 cases, Π, can be calculated from hospitalized cases: dΠ/dt = α_y D_2y + α_o D_2o, with Π(0) = 0. The number of deaths in a day is π^i = π_y^i + π_o^i, where π_y and π_o are the numbers of deaths of young and elder persons at each day. The number of susceptible persons in isolation in the absence of releasing is obtained from the isolation flux η_2j S_j, from which the corresponding fractions of isolated susceptible persons, f_j^is, follow. The system of equations (2), (3) and (4) is non-autonomous. Nevertheless, the fractions of persons in each compartment approach the steady state (see Appendix A); hence, using equations (A.8) and (A.9), the reduced reproduction number R_r is obtained with s_0y and s_0o substituted by S_0y/N_0 and S_0o/N_0. Given N and R_0, let us evaluate the number of susceptible persons needed to trigger and maintain an epidemic, in a special case. Assume that all model parameters for the young and elder classes and all transmission rates are equal; then R_0 = σβ/[(σ + φ)(γ + φ)] and R_e = R_0 S/N, using the approximated R_e given by equation (A.11). Letting R_e = 1, the critical number of susceptible persons S^th at equilibrium is S^th = N/R_0. If S > S^th, the epidemic occurs and persists (R_e > 1, non-trivial equilibrium point P*), and the fraction of susceptible individuals is s* = 1/R_e, where s* = s*_y + s*_o; but if S < S^th, the epidemic occurs but fades out (R_e < 1, trivial equilibrium point P0), and the fractions of susceptible individuals s_y and s_o at equilibrium are given by equation (A.4), or (A.12) if there is not any control.
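As a quick numerical check of the threshold S^th = N/R_0 (using the values estimated later in this paper; the computation itself is ours):

```python
N0 = 44.6e6        # population of Sao Paulo State
R0 = 6.915         # estimated basic reproduction number
S_th = N0 / R0
print(f"S_th = {S_th:.3e}")   # ~6.45e6 << S(0) = N0, so the epidemic persists
```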
Let us now evaluate the critical isolation rate of susceptible persons, η_2, assuming that all model parameters for the young and elder classes and all transmission rates are equal. In this special case, a threshold η_2^th can be obtained: if η_2 < η_2^th, the epidemic occurs and persists (R_e > 1, non-trivial equilibrium point P*); but if η_2 > η_2^th, the epidemic occurs but fades out (R_e < 1, trivial equilibrium point P0). We apply the above results to study the introduction and establishment of the new coronavirus in São Paulo State, Brazil. From data collected in São Paulo State from March 14, 2020 until April 5, 2020, we estimate the transmission and additional mortality rates, and then study potential scenarios introducing isolation as a control mechanism.
Results
The results obtained in the foregoing section are applied to describe the new coronavirus infection in São Paulo State, Brazil. The first confirmed case of CoViD-19, which occurred on February 26, 2020, was a traveler returning from Italy on February 21, who was hospitalized on February 24. The first death due to CoViD-19 was a 62-year-old male with comorbidity who never travelled abroad, hence considered an autochthonous transmission. He manifested the first symptoms on March 10, was hospitalized on March 14, and died on March 16. On March 24, the São Paulo State authorities ordered the isolation of persons acting in non-essential activities, as well as students of all levels, until April 6; further, the isolation was extended to April 22.
Let us determine the initial conditions. In São Paulo State, the number of inhabitants is N(0) = N_0 = 44.6 × 10^6 according to SEADE [6]. The value of the parameter ϕ given in Table 1 was calculated by equation (A.12), ϕ = bφ/(1 − b), where b is the proportion of elder persons. Using b = 0.153 for São Paulo State [6], we obtained ϕ = 6.7 × 10^−6 days^−1; hence, N_y(0) = N_0y = 37.8 × 10^6 (s_0y = N_0y/N_0 = 0.8475) and N_o(0) = N_0o = 6.8 × 10^6 (s_0o = N_0o/N_0 = 0.1525). The initial conditions for susceptible persons are set to S_y(0) = N_y(0) and S_o(0) = N_o(0). For the other variables, from Table 2, p_y = 0.8 and m_y = 0.8: the ratio asymptomatic:symptomatic is 4:1, and the ratio mild:severe (non-hospitalized:hospitalized) CoViD-19 is 4:1. We use these ratios for elder persons, even though p_o and m_o are slightly different. Hence, if we assume that there is 1 person in D_2j (the first confirmed case), then there are 4 persons in Q_2j; the sum (5 persons) is the number of persons in class D_1j, implying that there are 20 in class A_j; hence, the sum (25 persons) is the number of persons in class E_j. Finally, we suppose that no one is isolated or tested, nor immunized. (Probably the first confirmed CoViD-19 person transmitted the virus (since February 21, when he returned infected from Italy), as did other asymptomatic travelers returning from abroad.) Therefore, the initial conditions supplied to the dynamic system (2), (3) and (4) are, for j = y, o: S_y(0) = N_0y, S_o(0) = N_0o, Q_j(0) = 0, E_j(0) = 25, A_j(0) = 20, Q_1j(0) = 0, D_1j(0) = 5, Q_2j(0) = 4, D_2j(0) = 1, and I(0) = 0, where the initial simulation time t = 0 corresponds to the calendar time February 26, 2020, when the first case was confirmed. The system of equations (2), (3) and (4) is evaluated numerically using the 4th-order Runge-Kutta method. This section presents parameter estimation and epidemiological scenarios considering isolation as a control measure. In the estimation and the epidemiological scenarios, we assume that all transmission rates for young persons are equal, as well as for elder persons, that is, we assume that β_y = β_1y = β_2y = β_1o = β_2o, and β_o = ψβ_y; hence the forces of infection are λ_y = (A_y + D_1y + A_o + D_1o)β_y and λ_o = ψλ_y.
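The numerical scheme can be sketched as follows: a generic 4th-order Runge-Kutta integrator, demonstrated here on a reduced single-age-class S-E-A-D1 slice of the model, with illustrative (not the estimated) parameter values.

```python
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Reduced model: susceptible, exposed, asymptomatic, pre-diseased (one age class).
beta, sigma, gamma, prob, N = 0.77, 1/5.8, 1/10, 0.8, 44.6e6  # illustrative

def deriv(t, y):
    S, E, A, D1 = y
    lam = beta * (A + D1)            # force of infection, as in equation (1)
    return [-lam * S / N,
            lam * S / N - sigma * E,
            prob * sigma * E - gamma * A,
            (1 - prob) * sigma * E - gamma * D1]

y, t, h = [N - 50.0, 25.0, 20.0, 5.0], 0.0, 0.1
while t < 140:
    y = rk4_step(deriv, t, y, h)
    t += h
print(round(t), [f"{v:.3e}" for v in y])
```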
Parameters estimation
Reliable estimation of both the transmission and additional mortality rates is crucial for predicting new cases (to adjust the number of hospital beds, for instance) and deaths. When the estimation is based on a small number of data points, that is, at the beginning of the epidemic, some caution must be taken, because the rates may be over- or underestimated. The reason is that in the very beginning phase of an epidemic, the spread of infection and deaths increases exponentially without bound. Currently, there is not a sufficient number of kits to detect infection by the new coronavirus. For this reason, tests to confirm infection by this virus are done only in hospitalized persons, and also in persons who died manifesting symptoms of CoViD-19. Hence, we have only data on hospitalized persons (D_2y and D_2o) and those who died (Π_y and Π_o). Taking into account hospitalized persons with CoViD-19, we estimate the transmission rates, and from persons who died due to CoViD-19, we estimate the additional mortality rates. These rates are estimated applying the least squares method (see [14]).
The introduction of quarantine occurred at t = 27, corresponding to the calendar time March 24, but its effects are expected to appear later. Hence, we estimate taking into account confirmed cases and deaths from February 26 (t = 0) to April 5 (t = 39), hence n = 40 observations. Notice that the sum of the incubation and recovery periods (see Table 2) is around 16 days, hence it is expected that at around simulation time t = 43 (April 10) the effects of isolation will appear.
To estimate the transmission rates β_y and β_o, we let α_y = α_o = 0, evaluate the system of equations (2), (3) and (4), and calculate

Sum(β_y, β_o) = min Σ_{i=1}^{n} { [Ω_y(t_i) − D_2y^ob(t_i)]^2 + [Ω_o(t_i) − D_2o^ob(t_i)]^2 },    (13)

where min stands for the minimum value, n is the number of observations, t_i is the i-th observation time, Ω_y and Ω_o are given by equation (7), and D_2y^ob and D_2o^ob are the observed numbers of hospitalized persons. The better transmission rates are those minimizing the squared difference. To estimate the mortality rates α_y and α_o, we fix the previously estimated transmission rates β_y and β_o, evaluate the system of equations (2), (3) and (4), and calculate

Sum(α_y, α_o) = min Σ_{i=1}^{n} { [Π_y(t_i) − P_y^ob(t_i)]^2 + [Π_o(t_i) − P_o^ob(t_i)]^2 },    (14)

where min stands for the minimum value, n is the number of observations, t_i is the i-th observation time, Π_y and Π_o are given by equation (8), and P_y^ob and P_o^ob are the observed numbers of deceased persons. The better mortality rates are those minimizing the squared difference.
Instead of formally applying the least squares method of equations (13) and (14), we vary the transmission or the additional mortality rates and choose the better fittings by evaluating the sum of the squared distances between curve and data.
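A sketch of this vary-and-compare fitting; the observed series and the helper simulate_accumulated are invented placeholders standing in for the real data and a real model run:

```python
import numpy as np

# Hypothetical observed accumulated hospitalized cases, one value per day
# from February 26 (t = 0) to April 5 (t = 39); placeholder series only.
t_obs = np.arange(40)
D2_obs = 50.0 * np.exp(0.12 * t_obs)

def simulate_accumulated(beta, t_grid):
    """Placeholder for a model run: integrate equations (2)-(4) with the
    given beta (e.g., via rk4_step above) and return Omega(t) on t_grid."""
    return 50.0 * np.exp(0.12 * t_grid) * (beta / 0.8)   # stand-in curve

def sum_of_squares(beta):
    omega = simulate_accumulated(beta, t_obs)
    return np.sum((omega - D2_obs) ** 2)

# Vary beta over a grid and keep the value minimizing the squared distance,
# mirroring the "evaluate and choose the better fitting" procedure.
grid = np.linspace(0.6, 1.0, 81)
best_beta = min(grid, key=sum_of_squares)
print(f"best-fitting beta = {best_beta:.3f} days^-1")
```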
Estimation of transmission and additional mortality rates
Firstly, letting the additional mortality rates equal zero (α_y = α_o = 0), we estimate a unique β = β_y = β_o, with ψ = 1, against hospitalized CoViD-19 cases (D_2) data from the São Paulo State. The estimated value is β = 0.8 days^−1, resulting in the basic reproduction number R_0 = 6.99 (partials R_0y = 5.83 and R_0o = 1.16). Around this value, we vary β_y and β_o and choose better fitted values by comparing curves of D_2 = D_2y + D_2o with the observed data. The estimated values are β_y = 0.77 and β_o = ψβ_y = 0.9009 (days^−1), where ψ = 1.17, resulting in the basic reproduction number R_0 = 6.915 (partials R_0y = 5.606 and R_0o = 1.309). Figure 2 shows the estimated curve of D_2 and the observed data. This estimated curve is practically the same as the curve fitted using a unique β.
Fixing the previously estimated transmission rates β_y = 0.77 and β_o = 0.9009 (both days^−1), we estimate the additional mortality rates α_y and α_o. We vary α_y and α_o and choose better fitted values by comparing curves of deaths due to CoViD-19, Π = Π_y + Π_o, with the observed data. Because lethality among young persons is much lower than among elder persons, we let α_y = 0.1α_o [9] and fit only the one variable α_o. The estimated rates are α_y = 0.0036 and α_o = 0.036 (days^−1). Figure 3 shows the estimated curve of Π = Π_y + Π_o and the observed data. We call this the first estimation method.

The first estimation method used only one piece of information: the risk of death is higher among elder than young persons (we used α_y = 0.1α_o). However, the lethality among hospitalized elder persons is 10% [2]. Combining both findings, we assume that the numbers of deaths for young and elder persons are, respectively, 1% and 10% of the accumulated cases when Ω_y and Ω_o approach their plateaus (see Figure 6 below). This is called the second estimation method, which takes into account a second piece of information besides the one used in the first method. The isolation parameters η_2y and η_2o are then varied aiming to obtain epidemiological scenarios. In general, the epidemic period of infection by viruses τ is around 2 years and, depending on the value of R_0, a second epidemic occurs after many years have elapsed [10]. For this reason, we analyze epidemiological scenarios of CoViD-19 restricted to the first wave of the epidemic, letting τ = 140 days.
Epidemiological scenario without any control mechanisms
All effects of isolation will be compared with new coronavirus transmission without any control. Initially, the estimated curves will be extended until τ = 140 days, when the disease attains low values. Figure 5 shows the estimated curves of the number of hospitalized (severe) CoViD-19 cases (D_2y, D_2o and D_2 = D_2y + D_2o). We observe that the peaks of severe CoViD-19 for elder, young and all persons are, respectively, 2.061 × 10^5, 5.532 × 10^5 and 7.582 × 10^5, which occur at the same time t = 72 days. Figure 6 shows the estimated curves of the accumulated number of severe CoViD-19 cases (Ω_y, Ω_o and Ω = Ω_y + Ω_o), from equation (7). At t = 140 days, Ω is approaching its asymptote (or plateau), which can be understood as the time when the first wave of the epidemic ends. The curves Ω_y, Ω_o and Ω attain, at t = 140, respectively, 1.798 × 10^6, 0.563 × 10^6 and 2.361 × 10^6. The curves of the accumulated number of deaths (Π_y, Π_o and Π) follow from equation (8). At t = 140 days, Π is approaching its plateau. The values of Π_y, Π_o and Π at t = 140 are, for the first estimation method, respectively, 0.6235 × 10^5 (3.47%), 1.883 × 10^5 (33.4%) and 2.507 × 10^5 (10.62%), and for the second estimation method, respectively, 1.60 × 10^4 (0.89%), 6.265 × 10^4 (11.13%) and 7.865 × 10^4 (3.33%). The percentage between parentheses is the ratio Π/Ω. The second estimation method is shown in Figure 7. Comparing the percentages of deaths due to CoViD-19 (Π) among accumulated severe CoViD-19 cases (Ω), the first method predicts at least 3-times that predicted by the second method. Especially among elder persons, the second method predicts 11.13%, three times lower than the 33.4% predicted by the first method. Hence, the second estimation is more credible than the first one, and we will adopt the second estimation method for the additional mortality rates, α_y = 0.0009 and α_o = 0.009 (days^−1), hereafter except where explicitly stated. Remember that the additional mortality rates are considered constant at all times.

Figure 8 shows the curves of the number of susceptible persons (S_y, S_o and S = S_y + S_o). At t = 0, the numbers S_y, S_o and S are, respectively, 3.77762 × 10^7, 0.68238 × 10^7 and 4.46 × 10^7, and they diminish, due to infection, to lower values at t = 140 days. Notice that, after the first wave of the epidemic, very few susceptible persons are left behind: 1.23880 × 10^5 (0.33%), 0.02643 × 10^5 (0.039%) and 1.26523 × 10^5 (0.28%) for young, elder and total persons, respectively. The percentage between parentheses is the ratio S(140)/S(0). Figure 9 shows the curves of the number of immune persons (I_y, I_o and I = I_y + I_o). The numbers of immune persons I_y, I_o and I increase from zero at t = 0 to, respectively, 3.76156 × 10^7 (99.57%), 0.67234 × 10^7 (98.53%) and 4.43390 × 10^7 (99.41%) at t = 140 days. The percentage between parentheses is the ratio I/S(0).
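The percentages quoted between parentheses are just ratios of the plateau values; a quick check for the second estimation method:

```python
omega = {"young": 1.798e6, "elder": 0.563e6, "all": 2.361e6}   # Omega at t = 140
pi_2 = {"young": 1.60e4, "elder": 6.265e4, "all": 7.865e4}     # Pi, 2nd method
for group in omega:
    print(f"{group}: {100 * pi_2[group] / omega[group]:.2f}%")
# young 0.89%, elder 11.13%, all 3.33% -- the ratios Pi/Omega quoted above
```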
From Figures 8 and 9, the difference between the percentages I/S(0) and S(140)/S(0) is the percentage of all persons who have had contact with the new coronavirus. Hence, a second wave of the epidemic will be triggered only after a very long period has elapsed, waiting for the accumulation of susceptible persons. Let us estimate roughly the critical number of susceptible persons S_th from equation (11). For R_0 = 6.915, S_th = 6.450 × 10^6. Hence, for the São Paulo State, isolating 38.15 million (85.5%) or more persons is necessary to avoid persistence of the epidemic. The number of young persons is 0.35 million less than the threshold number of isolated persons needed to guarantee eradication of CoViD-19. Another rough estimation is done for the isolation rate of susceptible persons η_2, letting η_3 = 0 in equation (12), resulting in η_th = 2.19 × 10^−4 days^−1 for R_0 = 6.915. Then, for η_2 > η_th, the new coronavirus epidemic fades out.
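Assuming equation (11) reduces to S_th = N_0/R_0, which reproduces the quoted value, the threshold figures follow from a one-line computation:

```python
N0, R0 = 44.6e6, 6.915
S_th = N0 / R0                     # critical number of susceptibles, ~6.45e6
to_isolate = N0 - S_th             # ~38.15 million persons
print(f"S_th = {S_th:.3e}; isolate {to_isolate:.4e} "
      f"({100 * to_isolate / N0:.1f}% of the population)")
```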
Epidemiological scenarios considering control mechanisms
Using the estimated transmission and additional mortality rates, we solve numerically the system of equations (2), (3) and (4) considering only one control mechanism, isolation, because few testing kits are available and treatment and vaccine are not available yet.
We consider two cases: isolation without subsequent releasing of the isolated persons, and isolation followed by the releasing of these persons. By varying the isolation parameters η_2y and η_2o and the releasing parameters η_3y and η_3o, we present some epidemiological scenarios. In all scenarios, t is simulation time, not calendar time.
3.2.1 Scenarios - Isolation without releasing (η_3y = η_3o = 0)

At t = 0 (February 26) the first case of severe CoViD-19 was confirmed, and at t = 27 (March 24) isolation as a control mechanism (described by η_2y and η_2o) was introduced until April 22. We analyze two cases. First, there is indiscriminate isolation of young and elder persons, hence we assume that the same isolation rates are applied to young and elder persons, that is, η_2 = η_2y = η_2o. Second, there is discriminated (preferential) isolation of elder persons, hence we assume that η_2o ≠ η_2y.
Regime 1 - Equal isolation of young and elder persons (η_2 = η_2y = η_2o)

In regime 1, we speak of equal isolation of young and elder persons in the sense of equal isolation rates. Recalling that η_2y and η_2o are per-capita rates, both rates isolate young and elder persons proportionally, but the actual number of isolated persons is higher among the young. We choose 7 different values for the isolation rate η_2 (days^−1) applied to young and elder persons: 0.00021 (R_r = 1), 0.001 (R_r = 0.23), 0.005 (R_r = 0.048), 0.01 (R_r = 0.024), 0.015 (R_r = 0.016), 0.025 (R_r = 0.009) and 0.035 (R_r = 0.007). The value of the reduced reproduction number R_r is calculated from equation (10). For η_2 = 0.035, the reduced reproduction number is reduced to 0.1% of the basic reproduction number. In all figures, the case η_2 = 0 (R_0 = 6.915) is also shown.

Figure 10 shows curves of severe CoViD-19 cases D_2j, j = y, o, without and with isolation for different values of η_2. Notice that the first two curves, obtained with η_2 = 0 and 0.00021, practically coincide, the latter being slightly lower than the roughly estimated η_th = 2.19 × 10^−4 days^−1. We present peak values for three values of η_2. For η_2 = 0, the peaks for young (first coordinate) and elder (second coordinate) persons are (5.532 × 10^5, 2.061 × 10^5); for η_2 = 0.01, (3.566 × 10^5, 1.361 × 10^5); and for 0.035, (0.699 × 10^5, 0.292 × 10^5). The times (days) at which the peaks occur for young (first coordinate) and elder (second coordinate) persons are, for η_2 = 0, (72, 71); for 0.01, (75, 74); and for 0.035, (77, 77). For η_2 = 0.01 in comparison with η_2 = 0, the peaks are reduced to 64.4% and 66.0%, respectively, for young and elder persons. For η_2 = 0.035, the peaks are reduced to 12.6% and 14.2%. As the isolation parameter η_2 increases, the diminishing peaks of the curves of D_2y and D_2o initially displace to the right (higher times), but at η_2 = η_2^c they change direction and move leftwards. However, all curves remain inside the curve without isolation (η_2 = 0). The values at which the peaks change direction are η_2y^c = 0.0027 days^−1 (t = 78.35) and η_2o^c = 0.0028 days^−1 (t = 77.58). To understand this phenomenon, we recall an age-structured model describing rubella infection [11] [12]. There, as the vaccination rate increases, the peaks of the age-dependent forces of infection initially move to the right and then move leftwards. As a consequence, the average age at first infection increases.
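The quoted peak reductions are plain ratios of the peak values; a quick check (peak values from the text, with small differences arising from rounding):

```python
peaks = {0.0: (5.532e5, 2.061e5),      # (young, elder) peaks of D_2j
         0.01: (3.566e5, 1.361e5),
         0.035: (0.699e5, 0.292e5)}
y0, o0 = peaks[0.0]
for eta2 in (0.01, 0.035):
    y, o = peaks[eta2]
    print(f"eta2 = {eta2}: young {100 * y / y0:.1f}%, elder {100 * o / o0:.1f}%")
# ~64%/66% for eta2 = 0.01 and ~12.6%/14.2% for eta2 = 0.035
```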
As the isolation parameter η_2 increases, the number of susceptible persons decreases following a sigmoid shape but, at sufficiently high values, follows an exponential decay. Again, this phenomenon is understood by recalling a rubella transmission model [13]. There, as the vaccination rate increases, the fraction of susceptible persons decreases following damped oscillations when R_r > 1, attaining a non-trivial equilibrium point. However, for R_r < 1, there is a trivial equilibrium: if R_r is low, the fraction of susceptible persons never decreases below the value of the trivial equilibrium point, and for this reason it attains this equilibrium value decaying exponentially, without surpassing it at any time.

Figure 15 shows curves of the number of isolated susceptible persons S_j^is, j = y, o, with isolation for different values of η_2, from equation (9). We present values at t = 140 for three values of η_2. For η_2 = 0, there are no isolated persons; for η_2 = 0.01 the numbers are (1.09 × 10^7, 1.892 × 10^6), and for 0.035, (3.079 × 10^7, 5.419 × 10^6). For η_2 = 0.01, in comparison with all persons N_0 (at t = 0), the isolated susceptible persons are 24.4% and 4.2%, respectively, for young and elder persons. For η_2 = 0.035, the isolated susceptible persons are 69.0% and 12.2%. Figure 16 shows curves of the number of immune persons I_j, j = y, o, without and with isolation for different values of η_2. We present values at t = 140 for three values of η_2. For η_2 = 0, the numbers for young (first coordinate) and elder (second coordinate) persons are (3.762 × 10^7, 6.723 × 10^6); for η_2 = 0.01, (2.671 × 10^7, 4.849 × 10^6); and for 0.035, (0.683 × 10^7, 1.349 × 10^6). For η_2 = 0.01 in comparison with η_2 = 0, the immune persons are reduced to 71.0% and 72.1%, respectively, for young and elder persons, very close to the reductions observed in deaths due to CoViD-19. For η_2 = 0.035, the immune persons are reduced to 18.1% and 20.0%, again very close to the reductions observed in deaths due to CoViD-19.
The epidemiological parameters (peak of D_2, Ω, Π and I) are reduced quite similarly for η_2 = 0.035 days^−1, between 4.8-times (21%) and 8.3-times (12%); however, the susceptible persons left behind at the end of the first wave increase dramatically, 20-times (young) and 240-times (elder), the increase being 12-times higher for elder persons. Hence, in a second wave, there will be more infections among elder persons.
Regime 2 - Different isolation of young and elder persons (η_2o ≠ η_2y)

In regime 2, we speak of different isolation of young and elder persons in the sense that the elder isolation rate is fixed and the young isolation rate is varied, and vice-versa. Firstly, we fix the isolation rate of elder persons at η_2o = 0.01 days^−1 and vary η_2y = 0.001 (R_r = 0.235), 0.005 (R_r = 0.049), 0.01 (R_r = 0.024), 0.015 (R_r = 0.016), 0.025 (R_r = 0.009), 0.035 (R_r = 0.007) and 0.1 (R_r = 0.002). The value of the reduced reproduction number R_r is calculated from equation (10). Figure 17 shows curves of severe CoViD-19 cases D_2j, j = y, o, varying η_2y and fixing η_2o = 0.01 days^−1. The decreasing pattern of D_2y follows that observed in regime 1, but in D_2o, as η_2y increases, the peaks displace faster to the right, and the curves become more asymmetric (increased skewness) and spread beyond the curve without isolation. Figure 18 shows curves of the number of susceptible persons S_j, j = y, o, varying η_2y and fixing η_2o = 0.01 days^−1. The decreasing pattern of S_y follows that observed in regime 1 (sigmoid shape substituted by exponential decay), but the sigmoid-shaped decreasing curves of S_o, as η_2y increases, move from bottom to top, which is the opposite of the pattern observed in regime 1. As the isolation of the young increases, the number of susceptible young persons decreases, but the number of susceptible elder persons increases. However, from Figure 17, severe CoViD-19 cases decrease for both subpopulations. This can be explained by the decrease in immune persons: young immune persons decrease 41-times when η_2y increases from 0.015 to 0.1, while elder immune persons decrease 4-times (see Table 3).
The curves of the accumulated cases of severe CoViD-19 Ω, the accumulated deaths due to CoViD-19 Π, the number of isolated susceptible persons S^is, and the number of immune persons I are similar to those shown in the foregoing section. For this reason, we present in Table 3 (η_2o = 0.01 days^−1 fixed) their values at t = 140 for young, elder and all persons. The percentages are calculated as the ratio between each epidemiological parameter evaluated with (η_2j > 0) and without (η_2y = η_2o = 0) isolation, at t = 140. The number of isolated susceptible persons is S^is = 0 when there is no isolation, hence its percentage is the ratio between S^is at t = 140 and N_0. Figures 17 and 18 and Table 3 portray preferential isolation of young persons while maintaining elder persons isolated at a fixed level. Hence, increasing η_2y of course protects young persons, but elder persons are also benefited.

Now, we fix the isolation rate of young persons at η_2y = 0.01 days^−1 and vary the isolation rate of elder persons η_2o (days^−1) over 7 different values: η_2o = 0.001 (R_r = 0.025), 0.005 (R_r = 0.02444), 0.01 (R_r = 0.02442), 0.015 (R_r = 0.024416), 0.025 (R_r = 0.024413), 0.035 (R_r = 0.024411) and 0.1 (R_r = 0.02440). Figure 19 shows curves of severe CoViD-19 cases D_2j, j = y, o, varying η_2o and fixing η_2y = 0.01 days^−1. The same pattern observed in Figure 17 appears, with D_2y and D_2o interchanged, but smoother. Figure 20 shows curves of the number of susceptible persons S_j, j = y, o, varying η_2o and fixing η_2y = 0.01 days^−1. The same pattern observed in Figure 18 appears, with S_y and S_o interchanged.
The curves of the accumulated cases of severe CoViD-19 Ω, the accumulated deaths due to CoViD-19 Π, the number of isolated susceptible persons S^is, and the number of immune persons I are similar to those shown in the foregoing section. For this reason, we present in Table 4 (η_2y = 0.01 days^−1 fixed) their values at t = 140 for young, elder and all persons, letting η_2o = 0.015, η_2o = 0.035 and η_2o = 0.1 (days^−1). The values of Ω, Π, S^is and I for η_2o = η_2y = 0 are those used in Table 3, as are the definitions of the percentages.

Figure 19: The curves of severe CoViD-19 cases D_2j, j = y, o, varying η_2o, fixing η_2y = 0.01 days^−1. Curves from top to bottom correspond to increasing η_2o. The beginning of isolation is at t = 27.

Figures 19 and 20 and Table 4 portray preferential isolation of elder persons while maintaining young persons isolated at a fixed level. Hence, increasing η_2o of course protects elder persons, but young persons are also benefited. Tables 3 and 4 show two kinds of isolation for two different goals. If the objective is diminishing the total number of severe CoViD-19 cases Ω, the better strategy is isolating more young than elder persons. However, if the goal is the reduction of fatal cases Π, the better strategy is isolating more elder than young persons; but if the isolation is very intense (η_2y = η_2o = 0.1), then isolating more young persons is recommended. Notice that only the strategy η_2o = 0.01 and η_2y = 0.1 attains a number of isolated susceptible persons above the threshold 3.815 × 10^7.
Scenarios -Isolation and releasing
When releasing is introduced, equation (9) is no longer valid for evaluating the accumulated number of isolated susceptible persons. Hence, we use Q_y, Q_o and Q = Q_y + Q_o for the numbers of isolated susceptible young, elder and total persons, respectively. Q_y and Q_o are solutions of the system of equations (2), (3) and (4). At t = 0 (February 26) the first case of severe CoViD-19 was confirmed, and at t = 27 (March 24) isolation as a control mechanism (described by η_2y and η_2o) was introduced until April 22. Hence, the beginning of the releasing of isolated persons will occur at simulation time t = 56. We assume that the same releasing rates are applied to young and elder persons, that is, η_3 = η_3y = η_3o, and consider regime 1-type isolation, that is, η_2 = η_2o = η_2y. Hence, from time 0 to 27 we have R_0 = 6.915 (no isolation), from 27 to 56 we have regime 1-type isolation with R_r = 0.007, and from 56 onwards we have isolation and releasing, with the value of R_r depending on η_3.
The curves of the accumulated cases of severe CoViD-19 Ω, the accumulated deaths due to CoViD-19 Π, the number of isolated susceptible persons S^is, and the number of immune persons I are similar to those shown in the foregoing section. For this reason, we present in Table 6 (η_2 = 0.035 days^−1 fixed) their values at t = 360 for young, elder and all persons, letting η_3 = 0.015, η_3 = 0.035 and η_3 = 0.1 (days^−1). The values of Ω, Π, S^is and I for η_2 = 0 are those used in Table 3, as are the definitions of the percentages. Figure 23 shows curves of severe CoViD-19 cases D_2j, j = y, o, fixing η_2 = 0.035 days^−1 and varying η_3; here the beginning of releasing is at t = 63, a week later. For instance, when η_3 = 0.035 days^−1, the peaks for young and elder persons are, respectively, 2.084 × 10^5 and 8.197 × 10^4, occurring at t = 108 and 107. In comparison with Figure 21, the peaks are decreased for young and elder persons by, respectively, 9.8% and 9.5%, and both are delayed by 9 days.
The curves of the accumulated cases of severe CoViD-19 Ω, the accumulated deaths due to CoViD-19 Π, the number of isolated susceptible persons S^is, and the number of immune persons I are similar to those shown in the foregoing section. For this reason, we present in Table 7 (η_2 = 0.035 days^−1 fixed) their values at t = 360 for young, elder and all persons, letting η_3 = 0.015, η_3 = 0.035 and η_3 = 0.1 (days^−1). The values of Ω, Π, S^is and I for η_2 = 0 are those used in Table 3, as are the definitions of the percentages. Comparing Figures 21, 22 and 23, the peaks are increased by 9% and anticipated by 6 days if isolation is released 7 days earlier, while the peaks are decreased by 10% and delayed by 9 days if isolation is released 7 days later. From Tables 5, 6 and 7, the increases in severe CoViD-19 cases and in deaths due to this disease from anticipating the release by 7 days are 0.9%, 0.3% and 0.06% for, respectively, η_3 = 0.015, 0.035 and 0.1 (days^−1); while by delaying it 7 days, both are decreased by 0.9%, 0.6% and 0.2% for, respectively, η_3 = 0.015, 0.035 and 0.1 (days^−1). However, 0.9% represents 95 deaths.
In Figure 24 we show releasing occurring without isolation, that is, from time 0 to 27 we have R_0 = 6.915 (no isolation), from 27 to 56 we have regime 1-type isolation with R_r = 0.007, and from 56 onwards we have only releasing, with R_0 = 6.915 (η_2 = 0 and η_3 = 0.035 days^−1). When releasing is done without new isolations, a small epidemic appears (see the curve for η_3 = 0.005), which is delayed as η_3 increases (for the other values, the second small epidemic does not appear within 360 days). If a releasing strategy is adopted, the first wave does not vanish completely, except for a huge releasing scheme (higher η_3). This is a good epidemiological scenario, not only due to the diminished pressure for hospitalization (and, consequently, fewer deaths), but also due to the increase in immune persons, which decreases the effective reproduction number (known as herd immunity).
Discussion
The system of equations (2), (3) and (4) was simulated to provide epidemiological scenarios. These scenarios are more reliable if based on credible values assigned to the model parameters. We used the ratio 4:1 for both asymptomatic:symptomatic and mild:severe (non-hospitalized:hospitalized) CoViD-19 [2]. Also, we let α_y = 0.1α_o, with α_o such that deaths occur in 10% of hospitalized elder persons, hence 1% of hospitalized young persons die [9]. We used overvalued parameters, except perhaps the asymptomatic:symptomatic ratio, which is completely unknown. In many viruses the ratio is higher than 4:1, but for the new coronavirus it is unknown. When mass testing against the new coronavirus becomes possible, this ratio can be estimated.
The least squares estimation method was approximated by the sum of the squared distances between the parametrized curve and the observed data. When the estimation of epidemic curves is based on few available data, parameters are in general overestimated. Hence, both the transmission and the mortality rates were overestimated. Fortunately, there was another piece of information to use: 10% fatality among hospitalized elder persons. Taking this information into account, we estimated lower mortality rates, but the estimated transmission rates were still those based on few available data. Hence, the basic reproduction number R_0 = 6.915 seems overestimated.
Let us consider the estimation of the transmission and mortality rates based on few data. From Figures 7 and 8, at the end of the first wave of the epidemic, 2.36 million severe (hospitalized) CoViD-19 cases and 250 thousand deaths due to this disease are expected in the São Paulo State. If we consider a population 5-times larger than that of the São Paulo State, 11.8 million severe (hospitalized) CoViD-19 cases and 1,250 thousand deaths are expected. Approximately these numbers of cases and deaths were projected for Brazil by Ferguson et al. [4]. However, the second method of estimation for the fatality rates resulted in 78.7 thousand deaths in the São Paulo State, with the same number of severe cases. Hence, extrapolating to Brazil, the number is around 393 thousand deaths.
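The Brazil-scale figures are a plain 5-fold scaling of the São Paulo State values; a quick check:

```python
sao_paulo = {"severe cases": 2.361e6,
             "deaths (1st method)": 2.507e5,
             "deaths (2nd method)": 7.865e4}
for label, value in sao_paulo.items():
    print(f"{label}: {value:.3e} -> Brazil scale (x5): {5 * value:.3e}")
# severe cases ~1.18e7, deaths ~1.25e6 (1st method) and ~3.93e5 (2nd method)
```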
We address the question of the discrepancy in the predicted number of deaths during the first wave of the epidemic. Mathematical and computational models (especially agent-based models) that rely on data to estimate model parameters must be fed continuously with new data, and their parameters reestimated. As the number of data points increases, the estimations become more and more reliable. Hence, initial estimations and forecasts can be extremely bad and, moreover, become dangerous when predicting catastrophic scenarios, which can lead to mistaken public health policies.
With respect to the isolation of susceptible persons, depending on the target we have two strategies. If the goal is decreasing the number of CoViD-19 cases in order to match hospital and ICU capacity, the better strategy is isolating more young than elder persons. However, if reducing deaths due to CoViD-19 is the main goal, the better strategy is isolating more elder than young persons.
We also studied releasing strategies. We compare the releasing initiated on April 22 with releasing one week earlier (April 15) and one week later (April 29).
The estimated basic reproduction number and its partial values were R_0 = 6.915 (partials R_0y = 5.606 and R_0o = 1.309), and the asymptotic fraction of susceptible persons and its partial fractions provided by the Runge-Kutta method were s* = 0.15008, s*_y = 0.14660 and s*_o = 0.00348. Using equation (A.10), we obtain 1/R_0 = 0.1446. Clearly, s* is not the inverse of the basic reproduction number R_0, and the form of f(s*, s*_y, s*_o) is unknown; the analysis of the non-trivial equilibrium point to find f(s*, s*_y, s*_o) is left to a further work. In order to understand this question, suppose that the new coronavirus circulates in non-communicating young and elder sub-populations; then each population approaches s*_y = 1/R_0y = 0.178 or s*_o = 1/R_0o = 0.764 at steady state (non-trivial equilibrium point P*). But the new coronavirus circulates in a homogeneously mixed population of young and elder persons (this is a strong assumption of the modeling). Using equation (1), let us calculate the forces of infection λ_1 = β_1y A_y + β_2y D_1y (contribution due to infectious young persons), λ_2 = β_1o A_o + β_2o D_1o (elder persons) and λ = λ_1 + λ_2 (both classes), which are shown in Figure 25. The peaks of λ_1, λ_2 and λ occur at 68.18, 66.94 and 68.18 (days), the peak of λ being 10.31 × 10^6, and the contributions at the peak of λ_1 and λ_2 with respect to λ are 83.6% and 16.4%. The ratio between the peaks λ_1:λ_2 is 5.1:1, which is close to the ratio between the numbers of young:elder persons, 5.5:1. When the virus circulates in mixed populations, young and elder persons are additionally infected by, respectively, elder (λ_2) and young (λ_1) persons. This is the reason why the actual equilibrium values are lower (s*_y < 1/R_0y and s*_o < 1/R_0o); among elder persons the decrease (220-times) is huge (λ_1, very big, acting on the relatively small population S_o). For this reason, contacts between elder and young persons must be avoided.
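The peak contributions of λ_1 and λ_2 can be recovered from the quoted peak of λ and the percentage contributions:

```python
lam_peak = 10.31e6                   # peak of lambda = lambda_1 + lambda_2
lam1 = 0.836 * lam_peak              # young contribution at the peak (83.6%)
lam2 = 0.164 * lam_peak              # elder contribution at the peak (16.4%)
print(f"lambda_1 ~ {lam1:.3e}, lambda_2 ~ {lam2:.3e}, "
      f"ratio {lam1 / lam2:.1f}:1")  # ~5.1:1 vs population ratio 37.8/6.8 ~ 5.6:1
```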
Conclusion
We formulated a mathematical model considering two subpopulations comprised of young and elder persons to study CoViD-19 in the São Paulo State, Brazil. The model considered continuous but constant rates of isolation and releasing. In a future work, we will change the rates to describe isolation and releasing by proportions of susceptible persons being isolated or released. The reason behind this is the difficulty of translating rates into proportions.
Our model estimated nearly the same number of severe CoViD-19 cases predicted by Ferguson et al. [4] for Brazil, but 3.3-times fewer deaths due to CoViD-19. The difference is mainly due to the estimation of the additional mortality rates. It is also expected that R_0 would be lower if additional information existed or more data were available; as a consequence, severe CoViD-19 cases (and, consequently, deaths) might be much lower. If the currently adopted lockdown is indeed based on the goal of decreasing hospitalized CoViD-19 cases, then our model agrees, since it predicts that a higher number of young and elder persons must be isolated in order to achieve this objective. However, if the goal is a reduction in the number of deaths due to CoViD-19, as many elder persons as possible must be isolated, but not so many young persons. Remember that in a mixed population of young and elder persons, the infection is much more harmful to elder than to young persons, which is a reason to avoid contact between them. Optimal isolation rates for young and elder persons that reduce both CoViD-19 cases and deaths can be obtained by optimal control theory [8].
If a vaccine and efficient treatments were available, the pandemic of the new coronavirus would not be considered a threat to public health. However, currently there is neither a vaccine nor an efficient treatment. For this reason, the adoption of isolation or lockdown is a recommended strategy, which can be implemented less strictly if there are enough kits to test for the new coronavirus. Remember that all isolation strategies considered in our model assume the identification of susceptible persons. Hence, isolation as a control mechanism allows additional time to seek a cure (medicine) and/or develop a vaccine.
A Trivial equilibrium and its stability
Because N varies, the system of non-linear differential equations is non-autonomous. To obtain an autonomous system of equations, we use the fractions of individuals in each compartment, with j = y and o, defined as each class divided by N, using equation (5) for N. Hence, equations (2), (3) and (4), written in terms of fractions, become equations for the susceptible, infectious and immune fractions; for the immune persons,

di/dt = γ_y a_y + γ_y q_1y + γ_y q_2y + (γ_2y + θ_y) d_2y + (corresponding elder terms),

where λ is the force of infection given by equation (1), and

Σ_{j=y,o} (s_j + q_j + e_j + a_j + q_1j + d_1j + q_2j + d_2j) + i = 1,

which yields an autonomous system of equations. We remember that all classes vary with time; however, their fractions attain a steady state (the sum of the derivatives of all classes is zero). For this system of equations it is not easy to determine the non-trivial (endemic) equilibrium point P*. Hence, we restrict our analysis to the trivial (disease-free) equilibrium point.
The trivial, or disease-free, equilibrium P_0 is the point with all infectious classes equal to zero and with s^0_y + q^0_y + s^0_o + q^0_o = 1. Because there are 17 equations, we do not deal with the characteristic equation corresponding to the Jacobian matrix evaluated at P_0, but instead apply the next generation matrix theory [3].
The next generation matrix, evaluated at the trivial equilibrium P_0, is obtained considering the vector of variables x = (e_y, a_y, d_1y, e_o, a_o, d_1o). We apply the method proposed in [15] and proved in [16]. Since there are control mechanisms (isolation), we obtain the reproduction number reduced by isolation, R_r.
In order to obtain the reduced reproduction number, a diagonal matrix V is considered, and the vectors f and v are constructed accordingly (the superscript T stands for the transposition of a matrix); from them we obtain the matrices F and V (see [3]) evaluated at the trivial equilibrium P_0, which are omitted here. From the characteristic equation corresponding to the next generation matrix FV^−1, the reduced reproduction number R_r and its partial reduced reproduction numbers R_ry and R_ro are obtained, where R_0y and R_0o are the basic partial reproduction numbers defined by

R^1_0y = [σ_y/(σ_y + φ)] [β_1y/(γ_y + η_y + χ_y + φ)] and R^2_0y = [σ_y/(σ_y + φ)] [β_2y/(γ_1y + η_1y + φ)],   (A.9)

with analogous expressions for the elder subpopulation. Strictly, we must have η_j = χ_j = η_1j = χ_1j = 0, with j = y, o, to fit the definition of the basic reproduction number. Instead of calculating the spectral radius (ρ(FV^−1) = √R_r), we apply the procedure in [15] (the sum of the coefficients of the characteristic equation), resulting in the threshold R_r. Hence, the trivial equilibrium point P_0 is locally asymptotically stable (LAS) if R_r < 1.
In order to obtain the fraction of susceptible individuals, the matrix M must be the simplest one (the matrix with the least number of non-zeros). Hence, the vector v has entries of the form

(σ_y + φ)e_y − e_y(α_y d_2y + α_o d_2o),
−p_y σ_y e_y + (γ_y + η_y + χ_y + φ)a_y − a_y(α_y d_2y + α_o d_2o),
−(1 − p_y)σ_y e_y + (γ_1y + η_1y + φ)d_1y − d_1y(α_y d_2y + α_o d_2o),

with analogous entries for the elder subpopulation (the superscript T again stands for the transposition of a matrix); from f and v we obtain the matrices F and V evaluated at the trivial equilibrium P_0, which are omitted here. The spectral radius of the next generation matrix FV^−1 is ρ(FV^−1) = R_r = R_ry + R_ro, given by equation (A.8). Hence, the trivial equilibrium point P_0 is LAS if ρ < 1. Both procedures resulted in the same threshold; hence, according to [19], the inverse of the reduced reproduction number R_r is given by equation (A.10), where s* = s*_y + s*_o (see [18] [19]). For this reason, the effective reproduction number R_e [17], which varies with time, cannot be defined by R_e = R_0(s_y + ψs_o) or R_e = R_0y s_y + R_0o ψs_o.
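A generic numerical sketch of this next-generation-matrix recipe; the 2×2 matrices here are arbitrary placeholders, not the model's 6×6 F and V:

```python
import numpy as np

# Arbitrary placeholder transmission (F) and transition (V) matrices at the
# disease-free equilibrium; the model's actual F and V are 6x6 and depend on
# beta, sigma, gamma, eta, chi and phi.
F = np.array([[0.5, 0.3],
              [0.2, 0.4]])
V = np.array([[0.7, 0.0],
              [0.0, 0.9]])

K = F @ np.linalg.inv(V)                  # next generation matrix F V^-1
R_r = max(abs(np.linalg.eigvals(K)))      # spectral radius
print(f"R_r = {R_r:.3f}; trivial equilibrium LAS if R_r < 1")
```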
The function f(κ) is determined by calculating the coordinates of the non-trivial equilibrium point P*. For instance, for a dengue transmission model, f(s*_1, s*_2) = s*_1 × s*_2, where s*_1 and s*_2 are the equilibrium fractions of, respectively, humans and mosquitoes [18]. For a tuberculosis model considering drug-sensitive and resistant strains, there is no f(κ), but s* is the solution of a second-degree polynomial [19]. From equation (A.10), let us assume (or approximate) that f(s*, s*_y, s*_o) = s*_y + s*_o. Then we can define the effective reproduction number R_e as

R_e = R_r (s_y + s_o),   (A.11)

which depends on time; when a steady state is attained (R_e = 1), we have s* = 1/R_r. When a mechanism of protection of susceptible persons is introduced in a population, the basic reproduction number R_0 is reduced to R_r, the reduced reproduction number. The protection of susceptible persons is done either by vaccine (not yet available) or by isolation (or quarantine). Isolation was described by the isolation rate of susceptible persons η_2j, with j = y, o. When η_2j = 0, the fractions of young and elder persons follow from equation (A.4), where R_0y and R_0o are given by equation (A.9). The basic partial reproduction number R^1_0y s^0_y (or R^2_0y s^0_y) is the number of secondary cases produced by one asymptomatic (or pre-diseased) case in a completely susceptible young population without control; and the basic partial reproduction number R^1_0o s^0_o (or R^2_0o s^0_o) is the number of secondary cases produced by one asymptomatic (or pre-diseased) case in a completely susceptible elder population without control. If all parameters are equal and ψ = 1, then R_0 = pR^1_0 + (1 − p)R^2_0, where R^1_0 = R^1_0y + R^1_0o and R^2_0 = R^2_0y + R^2_0o are the basic partial reproduction numbers due to asymptomatic and pre-diseased persons.
The global stability follows the method proposed in [7]. Let the vector of variables be x = (e_y, a_y, d_1y, e_o, a_o, d_1o), the vectors f and v be given by equations (A.5) and (A.6), and the matrices F and V be evaluated from f and v at the trivial equilibrium P_0 (omitted here). A vector g is constructed such that g^T ≥ 0 if s^0_y ≥ s_y, s^0_o ≥ s_o and α_y = α_o = 0 (its explicit form is omitted here). Let v_l = (z_1, z_2, z_3, z_4, z_5, z_6) be the left eigenvector satisfying v_l V^−1 F = ρ v_l, where ρ = √R_r. This vector is v_l = ([(σ_y + φ)/(ρ β_2y s^0_y)] R_ry, β_1y/β_2y, 1, ...), and the Lyapunov function L, constructed as L = v_l V^−1 x^T, is L = [z_1/(σ_y + φ)] e_y + [z_2/(γ_y + η_y + χ_y + φ)] a_y + [1/(γ_1y + η_1y + φ)] d_1y + ... | 2020-04-14T01:00:38.658Z | 2020-04-12T00:00:00.000 |
"year": 2020,
"sha1": "72146b6187e3fcbf22473ea4533f521761030d25",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "72146b6187e3fcbf22473ea4533f521761030d25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Geography"
]
} |
4693496 | pes2o/s2orc | v3-fos-license | Evidence for the involvement of the cyclooxygenase-metabolic pathway in diclofenac-induced inhibition of spontaneous contraction of rat portal vein smooth muscle cells
The effects of diclofenac, a cyclooxygenase (COX) inhibitor, were investigated on spontaneous phasic contractions of longitudinal preparations of the rat portal vein. Diclofenac produced a concentration-dependent decrease in the amplitude of these spontaneous phasic contractions. Diclofenac (30 μM) decreased the amplitude of the spontaneous phasic increase in the F340/F380 ratio of Fura PE3, an indicator of intracellular Ca2+ concentration. It also reduced the number of action potentials in each burst discharge without changing the resting membrane potential of longitudinal smooth muscle cells. The extent of the distribution of Lucifer Yellow injected into a smooth muscle cell was decreased in the presence of diclofenac (30 μM). Both AH6809, a prostanoid EP receptor antagonist, and SQ22536, an adenylate cyclase inhibitor, decreased the amplitude of the spontaneous contractions. On the other hand, neither ozagrel, a thromboxane synthase inhibitor, nor SQ29548, a prostanoid TP receptor antagonist, significantly affected spontaneous contractions. These results indicate that diclofenac inhibits the amplitude of spontaneous contractions of the rat portal vein through inhibition of electrical activity, which may be related to an inhibition of the cyclooxygenase pathway.
Introduction
Longitudinal preparations of the rat portal vein develop spontaneous phasic myogenic contractions which are accompanied by bursts of action potentials (Funaki and Bohr, 1964; Axelsson et al., 1967; Johansson et al., 1967). Because such contractions are very sensitive to extracellular calcium, the rat portal vein has been used to examine the effects of drugs on vascular contraction (Sutter, 1990). Products of the cyclooxygenase (COX) pathway have been shown to be involved in the regulation of physiological activities (Wright et al., 2001). Spontaneous contractions of the rat portal vein have been shown to be potentiated by prostaglandin E1 (PGE1) (Miwa et al., 1997) and arachidonic acid (Vidulescu et al., 2000). On the other hand, they are inhibited by meclofenamate, a COX inhibitor (Enero, 1979), although the mechanism involved in this inhibition has not been determined. In this study, we have examined the effect of diclofenac, another COX inhibitor (Mitchell et al., 1994), on spontaneous contractions of smooth muscle cells of the rat portal vein, as well as on their membrane potential and intracellular Ca2+ concentration. Products of the COX pathway act on prostanoid receptors, which use different intracellular mediators (Wright et al., 2001). To identify the product involved in this spontaneous contraction, we examined the effects of prostanoid receptor antagonists and an adenylate cyclase inhibitor on these spontaneous contractions. Cell-to-cell coupling has been shown to play an important role in the propagation of excitation and therefore in the coordination of vascular responses (Christ et al., 1996). Therefore, we have examined the effect of diclofenac on the distribution of intracellularly injected Lucifer Yellow, as this has been used to evaluate intercellular communication (Beny, 1990). The results obtained indicate that COX may play a role in the generation of spontaneous contractions in the rat portal vein.
Materials and Methods
Male Wistar rats weighing 200-300 g were anesthetized using CO2 and treated according to the Guiding Principles for the Care and Use of Laboratory Animals approved by the Japanese Pharmacological Society. The hepatic portal vein was isolated, and preparations of the longitudinal smooth muscle layer were dissected under a binocular microscope and placed in a modified Tyrode's solution. The luminal surface of the tissue was rubbed with paper to remove the endothelium. The composition of the modified Tyrode's solution (in mM) was as follows: 137 NaCl, 5.4 KCl, 2.0 CaCl2, 1.0 MgCl2, 0.4 NaH2PO4, 11.9 NaHCO3, 5.6 glucose, with a pH of 7.3. The high-K+ Tyrode's solution was made by replacing NaCl with an equimolar concentration of KCl. All experiments were conducted at 37°C in Tyrode's solution aerated continuously with a mixture of 95% O2 - 5% CO2. In the contractile force recording experiments, each preparation was mounted vertically in an organ bath. The isometric contractile force of the preparations was measured using a force-displacement transducer (U-gage 10G, Minebea, Karuizawa, Japan) equipped with a strain amplifier (AS2102, NEC, Tokyo, Japan) and recorded with a thermal-pen recorder (Linearcorder WR3310, Graphtec, Tokyo, Japan).
Changes in intracellular Ca2+ concentrations were estimated according to our previous report (Shimamura et al., 2003). The surfaces of all materials used for the Ca2+ indicator experiments were coated with silicon using Siliconizer L-25 (Fuji System, Tokyo, Japan). The small adventitia- and endothelium-removed longitudinal strips of rat portal vein were incubated with 20 µM Fura-PE3/AM (Wako, Osaka, Japan) dissolved in dimethyl sulfoxide (DMSO) and 0.08% Pluronic F-127 in Tyrode's solution for 2 hours at room temperature. The preparations were then mounted adventitial-side down on a silicon rubber in the temperature-controlled chamber of a fluorometer (CAF 100, JASCO, Tokyo, Japan). Intracellular Ca2+ concentrations were estimated from the luminescence intensity ratio (F340/F380) when excited at wavelengths of 340 nm (F340) and 380 nm (F380). One end of each preparation was connected to a force-displacement transducer so that changes in tension could be measured simultaneously.
Isometric tension and fluorescence were recorded on a pen recorder (Yokogawa LR4220, Tokyo, Japan) and stored in a PCM recorder (RD101-T, TEAC, Tokyo, Japan). Mirror images in both F340 and F380 were considered to be a marker of successful measurement of F340/F380 without movement interference. After each experiment, preparations were first exposed to 40 mM K+ Tyrode's solution and then to 2 mM EGTA in a Ca2+-free Tyrode's solution to determine the minimum level of intracellular Ca2+. Changes in the fluorescence ratio were expressed as a percentage of the elevation induced by the 40 mM K+ Tyrode's solution. The contraction induced by the 40 mM K+ Tyrode's solution had a stable amplitude which was slightly, but not significantly, less than that induced by the 50 mM K+ Tyrode's solution.
The membrane potential was measured by the microelectrode technique as reported previously (Shimamura et al., 2003). Adventitia- and endothelium-removed longitudinal strips were mounted adventitial-side up on a silicon rubber bed in a chamber that was continuously perfused with warmed Tyrode's solution at a flow rate of 5 ml/min. A pulled glass capillary microelectrode (PN-3, Narishige, Tokyo, Japan), filled with 3 M KCl and with a tip resistance of 40 MΩ, was impaled from the adventitial side. Membrane potentials were monitored using a microelectrode amplifier (Intra 767 Electrometer, World Precision Instruments, Sarasota, FL, USA) and recorded through both a thermal pen recorder (Graphtec Linearcorder WR3310) and a PCM recorder (RA125T, TEAC, Tokyo, Japan). Data were retrieved using both Axotape (Axon Instruments, Foster City, USA) and Labmaster (Scientific Solutions, Inc., Mentor, OH, USA) software on an IBM-compatible PC.
Intercellular diffusion capability was estimated by detection of the fluorescent signals of neighboring cells following injection of Lucifer Yellow (Sigma Chemical Co., St. Louis, MO, USA), as reported by others (Beny and Connat, 1992; Sakai et al., 1992). A 4% solution of Lucifer Yellow (dissolved in 150 mM lithium chloride) was back-filled into microelectrodes with a resistance of 200-300 MΩ. The dye was considered to be successfully loaded when a stable membrane potential below -40 mV was recorded, with action potentials of more than 30 mV in amplitude, both before and after the Lucifer Yellow injection. During impalement of single longitudinal smooth muscle cells of the rat portal vein, the dye was injected with a hyperpolarizing direct current (0.35 nA) for 2 minutes. When dye injection was complete, preparations were suspended vertically in Tyrode's solution for 10 minutes and the tissue was then fixed with 4% paraformaldehyde. In experiments with Lucifer Yellow, 30 µM diclofenac was added to the perfusate 5 min prior to dye injection. To evaluate the extent of intercellular dye diffusion, preparations were examined with a fluorescence microscope (DMIRB/E, Leica Microsystem, Wetzlar, Germany). Preparations were excited at 480 nm and the resulting fluorescence was recorded at 520 nm with a CCD camera (Hamamatsu, Japan), and the data were stored on a computer hard disk.
Data analysis
Results are given as the mean ± SEM with the number of preparations in parentheses.
Statistical significance was assessed using the Student's t-test. Paired t-tests were used when appropriate. P values < 0.05 were considered statistically significant.
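A minimal sketch of such a paired comparison, with invented placeholder amplitudes (% of control) rather than the study's data:

```python
from scipy import stats

# Paired amplitudes (% of control) before and after the drug,
# one pair per preparation; values are invented placeholders.
before = [100.0, 100.0, 100.0, 100.0, 100.0]
after = [62.0, 58.0, 65.0, 55.0, 60.0]
t_stat, p_value = stats.ttest_rel(before, after)   # paired t-test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}; significant if P < 0.05")
```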
Results

Effects of diclofenac on spontaneous contraction
The rat portal vein exhibited spontaneous phasic contractions with a stable frequency of 4.1 ± 0.6 cpm (n=15). The amplitude of these spontaneous phasic contractions was stable at 1.4 ± 0.1 mN (n=14) and was 41.6 ± 2.7% (n=9) of that developed in 50 mM K+ Tyrode's solution. Five to 10 min after administration, 10 µM diclofenac lowered the amplitude of the spontaneous contractions of the rat portal vein by 59.4 ± 2% (n=10) without a marked change in the contraction frequency. The inhibition of the contractile amplitude by diclofenac was concentration-dependent (Fig. 1). The amplitude of the contraction induced by 50 mM K+ in the presence of 30 µM diclofenac was 96.2 ± 1.9% (n=5) of that in the absence of diclofenac. When the amplitude of the spontaneous contractions was decreased by 10 µM diclofenac, administration of 10 nM PGE2 restored the amplitude to 94.1 ± 4.9% (n=7) of that before the administration of diclofenac.
Effects of diclofenac on intracellular Ca2+ concentration
During the simultaneous recording of the Fura-PE3 F340/F380 ratio and the force of contraction, we observed that spontaneous phasic contractions were generated synchronously with a phasic increase in the F340/F380 signal. The amplitude of these spontaneous contractions was 39.6 ± 8.7% (n=7) of that developed in 40 mM K+ Tyrode's solution. The amplitude of the contractions induced by 40 mM K+ Tyrode's solution was not significantly different from that in 50 mM K+ Tyrode's solution. The maximum intensity of the F340/F380 signals was 37.0 ± 3.0% (n=7) of that developed in 40 mM K+ Tyrode's solution. The increase in the ratio was inhibited by incubation in nominally Ca2+-free Tyrode's solution. The amplitude of the Fura-PE3 F340/F380 ratio in the control was 40.2 ± 10.0% (n=5), and it was decreased by 30 µM diclofenac to 8.8 ± 4.1% (n=5) (Fig. 2).
Effects of diclofenac on the membrane potential
The spontaneous action potential bursts that appeared periodically in the rat portal vein preparations could be inhibited by reduction of the Ca2+ concentration in the Tyrode's solution or by application of 1 µM nicardipine (data not shown). Diclofenac (30 µM) did not change the resting membrane potential (control, -43.6 ± 1.3 mV, n=19; 30 µM diclofenac, -46.0 ± 1.2 mV, n=20). However, it markedly inhibited the number of action potentials in each burst (Fig. 3). The number of action potentials in each burst was 7.0 ± 1.3 (n=11) in the absence of diclofenac and 2.0 ± 0.6 (n=11) in the presence of 30 µM diclofenac.
Effects of a thromboxane synthesis inhibitor and of prostanoid receptor antagonists on spontaneous contraction
Neither ozagrel, a thromboxane synthase inhibitor, nor SQ29548, a prostanoid TP receptor antagonist, changed the amplitude of spontaneous contractions (data not shown). An EP prostanoid receptor antagonist, AH6809, reduced the amplitude of spontaneous contractions in a concentration-dependent manner (Fig. 4). In the presence of 30 µM AH6809, the amplitude of spontaneous contractions decreased to 61.8 ± 7.3% (n=8) of that in control, while the 50 mM K+-induced contracture in the presence of 30 µM AH6809 was 101.7 ± 4.1% (n=4) of that in control.
Effects of cyclic nucleotide synthesis inhibitors
The role of cAMP in spontaneous contraction was examined using SQ22536, an adenylate cyclase inhibitor. The amplitude of spontaneous contractions in the presence of 100 µM SQ22536 was 52 ± 3% (n=8) of that in control. The inhibitory effect of SQ22536 was concentration-dependent between 30 µM and 100 µM (Fig. 5). The 50 mM K+-induced contracture in the presence of 100 µM SQ22536 was 98.9 ± 7.0% (n=4) of that in control.
The role of cGMP in spontaneous contraction was examined using ODQ, a guanylate cyclase inhibitor. The amplitude of spontaneous contraction in the presence of 10 µM ODQ was 98.1 ± 1.9% (n=5) of that in control.
Effects of diclofenac on Lucifer Yellow dye distribution
After Lucifer Yellow dye was injected into a longitudinal muscle cell from the microelectrode, its intercellular diffusion occurred predominantly in the direction of the longitudinal muscle layer of the preparation. When the area and length of the dye-stained regions were compared between untreated preparations and those treated with 30 µM diclofenac, the area and longitudinal distance of dye staining were significantly smaller in the diclofenac-treated preparations (Fig. 6).
Discussion
COX inhibitors have been reported to inhibit the spontaneous tonic contractions of the smooth muscle of the cat esophagus (Cao et al., 1999), the spontaneous rhythmic contractions in the rat renal pelvis (Davidson and Lang, 2000), and the twitch contractions in the rat gastric fundus (Yoneda et al., 2001). It has also been indicated that products of the COX pathway are important in the regulation of vascular smooth muscle contraction (Wright et al., 2001). While meclofenamate, a COX inhibitor, has been reported to inhibit spontaneous contractions of the rat portal vein (Enero, 1979), the detailed mechanism of action is not clear. In the present study, we have observed that diclofenac, another COX inhibitor (Mitchell et al., 1994), decreased the amplitude of spontaneous phasic contractions of longitudinal preparations of the rat portal vein without affecting 50 mM K+-induced contractions. As nicardipine, an L-type Ca2+ channel inhibitor, abolished both spontaneous and 50 mM K+-induced contractions in this preparation, the inhibition by diclofenac does not involve voltage-dependent Ca2+ channels or nonspecific mechanisms. A relationship between contraction and free intracellular Ca2+ concentration in smooth muscle cells has been reported on the basis of studies using calcium indicators (Morgan and Morgan, 1984). Very little information is available concerning the relationship between contraction and intracellular free Ca2+ concentration in smooth muscle cells of the rat portal vein (Swärd et al., 1993). In the present study, we observed that each spontaneous phasic contraction was accompanied by a phasic increase in the Fura-PE3 F340/F380 ratio. Since diclofenac decreased both the amplitudes of the spontaneous phasic contractions and the phasic increases in the intensities of the Fura-PE3 F340/F380 ratio in a similar manner, it would appear that the diclofenac-induced inhibition of the spontaneous contractions was mediated by a decrease in the intracellular Ca2+ concentration. It has been shown that spontaneous contractions of the rat portal vein are accompanied by bursts of action potentials (Funaki and Bohr, 1964; Axelsson et al., 1967; Johansson et al., 1967). Both contractions and action potentials are dependent on an influx of extracellular calcium through L-type Ca2+ channels (Sutter, 1990; Kamishima and McCarron, 1996). We observed that diclofenac decreased the number of spikes in each burst of action potentials with an accompanying decrease in Ca2+ influx. As this agent did not affect the high-K+-induced contraction, it was considered that diclofenac influenced pathways other than voltage-dependent Ca2+ channels.
Further investigation would be needed to discriminate between the effects of diclofenac and nicardipine, an L-type channel inhibitor, on action potentials.
Elevation of cAMP increased intercellular communication in both cardiomyocytes (Burt and Spray, 1988) and osteocytes (Cherian et al., 2003). Thus, formation of cAMP is involved in both electrical cell-to-cell coupling and spontaneous contractile activity. Generally, cell-to-cell conductance has been shown to be increased by elevation of cAMP and decreased by elevation of cGMP (Brink and Barr, 2000). These responses may be mediated via the phosphorylation of connexin; however, its relationship to changes in intercellular communication is not clear (Sáez et al., 2003). Several studies have suggested that the inhibition of spontaneous contraction by COX inhibitors is mediated by a decrease in intercellular communication. An earlier study showed that dye transfer through gap junctions in osteocyte-like MLO-Y4 cells was inhibited by indomethacin and that PGE2 facilitated gap junction-mediated communication in these cells (Cheng et al., 2001). In the canine trachealis muscle, PGE2 or PGI2 increased gap junction formation (Agrawal and Daniel, 1986). It has also been shown that an increase in intercellular communication enhanced the amplitude of spontaneous contractions (Garfield et al., 1988; 1992), while inhibition of intercellular communication reduced myogenic contraction in cerebral arteries (Lagaud et al., 2002). However, in cardiac muscle cells, cyclooxygenase metabolites were not involved in gap junction conductance (Schmilinsky-Fluri et al., 1997). In the rat myometrium, generation of gap junctions was inhibited by indomethacin and increased by PGH2 and arachidonic acid (Garfield et al., 1980). The reduction in the distribution of Lucifer Yellow induced by diclofenac observed in the present study is consistent with the findings of the aforementioned studies on smooth muscle. However, in the rat myometrium, it was reported that 2-deoxy-D-glucose diffusion was reduced by an increase in intracellular cAMP or by isoproterenol and PGE2 administration (Cole and Garfield, 1986). This difference in the myometrium might indicate the possible presence of a regulation by the cyclooxygenase pathway different from that in the smooth muscle cells of the rat portal vein. In the present study, our results have indicated that the physiological level of cAMP plays an important role in the maintenance of spontaneous contractions. The marked decrease in the distribution of Lucifer Yellow by diclofenac was observed in the longitudinal direction but not in the circular direction. This observation was compatible with the results of the mechanical and electrical recordings from longitudinal preparations.
We observed that ozagrel and SQ29548 did not affect the amplitude of spontaneous contractions; thus, prostanoid TP receptor agonists such as thromboxane and PGF2α do not appear to play a role in mediating the contractions. On the contrary, an EP prostanoid receptor antagonist, AH6809 (Janssen et al., 2000; Woodward et al., 1995; Coleman et al., 1985), inhibited the amplitude of spontaneous contractions in the rat portal vein. Since AH6809 did not inhibit contractions induced by 50 mM K+, its inhibitory effect on spontaneous contraction appears to involve neither L-type Ca2+ channel-mediated nor nonspecific mechanisms. Thus, it would appear that prostanoid EP receptors are involved in spontaneous contractions of the rat portal vein.
In the present study, SQ22536, an adenylate cyclase inhibitor (Gao and Raj, 2001), decreased the amplitude of the spontaneous contractions of the rat portal vein. Since SQ22536 did not inhibit the 50 mM K+-induced contracture, the inhibition did not appear to involve L-type Ca2+ channel-mediated or nonspecific mechanisms. Since cAMP serves as a second messenger for PGE2, the inhibition of spontaneous contractions by adenylate cyclase inhibition was compatible with the effect of a PGE2 receptor antagonist. As only 50% of the spontaneous contraction was inhibited by SQ22536, other mechanisms might also be involved in spontaneous contraction. A contribution of cGMP was excluded, since ODQ did not affect spontaneous contraction.
In conclusion, these results indicate that diclofenac inhibits the spontaneous contraction of the rat portal vein by decreasing electrical activity. The inhibition of spontaneous contraction may be mediated by the COX pathway. Decreases in the production of cAMP and PGE2, as well as in cell-to-cell coupling, would appear to be involved in the inhibition. In the rat portal vein, intrinsic PGE2 plays an important role in the maintenance of spontaneous electrical and mechanical activities.
Fig. 1. Effect of diclofenac on spontaneous contractions of longitudinal preparations of the rat portal vein. A: A typical trace showing the inhibitory effects of 3 µM and 30 µM diclofenac on these contractions. B: A summary plot showing the concentration-dependent depression of the amplitude of the spontaneous contractions by diclofenac. An average of the consecutive amplitudes from 5 to 10 min recordings of the contractions in the presence and absence of various concentrations of diclofenac was used as a representative value for each preparation. Each point shown in the figure represents the mean of data from 5 to 10 preparations.
Fig. 2. Effect of 30 µM diclofenac on the spontaneous increases in the Fura-PE3 fluorescence intensity ratio (F340/F380) (upper trace) and on contraction force (bottom trace). The fluorescence intensity ratio is expressed as a % of the change induced by 40 mM K+ Tyrode's solution.
Fig. 3. A typical recording of the electrical activity of a smooth muscle cell from the longitudinal muscle layer of the rat portal vein. Spontaneous bursts of action potentials in the control (upper trace) were inhibited by 30 µM diclofenac (bottom trace). Traces were obtained from the same cell both before and 5 min after the application of diclofenac.
Fig. 4. Effect of the PGE1/E2 receptor antagonist, AH6809, on spontaneous contractions of longitudinal preparations of the rat portal vein. A: A typical trace shows the inhibitory effect of 30 µM AH6809 on spontaneous contraction. B: A summary plot shows the concentration-dependent depression of the amplitude of the spontaneous contractions in the presence of AH6809. Each point represents the mean of 4 to 8 preparations.
Fig. 5. Effect of the adenylate cyclase inhibitor, SQ22536, on spontaneous contractions of longitudinal preparations of the rat portal vein. A: A typical trace showing the inhibitory effect of 100 µM SQ22536 on spontaneous contractions. B: A summary plot showing the concentration-dependent depression of the spontaneous contraction amplitude by SQ22536. Each point represents the mean of 4 to 6 preparations.
Fig. 6. Changes in Lucifer Yellow dye distribution in longitudinal preparations of the rat portal vein following a 2 min electric current injection from the microelectrode. The area of distribution (A) and the distance of distribution in both the longitudinal and circular directions are compared. Open columns, absence of diclofenac; filled columns, presence of 30 µM diclofenac. Data show the mean of 6 preparations in each experiment. Asterisks indicate statistical significance (P<0.05).
"year": 2005,
"sha1": "57e6adc4e7fe358180b6743f0deae61c5121db4a",
"oa_license": "CCBYNC",
"oa_url": "https://www.jstage.jst.go.jp/article/jsmr/41/4/41_4_195/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57e6adc4e7fe358180b6743f0deae61c5121db4a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Safety Profile and Radiographic and Clinical Outcomes of Stand-Alone 2-Level Anterior Lumbar Interbody Fusion: A Case Series of 41 Consecutive Patients
Objective: The use of stand-alone 2-level anterior lumbar interbody fusion (ALIF) for degenerative lumbar disease has been increasing as an alternative to routinely augmenting these constructs with posterior fixation or fusion. Despite the potential benefits of a stand-alone approach (decreased cost and operative time, decreased pain and early mobilization), there is a paucity of information regarding these operations in the literature. This investigation aimed to determine the safety profile, radiographic outcomes including fusion rates, improvement in preoperative pain, and spinopelvic parameter modification for patients undergoing stand-alone 2-level ALIF. Methods: This retrospective case series involved a chart review of all patients undergoing 2-level stand-alone ALIF at a single tertiary hospital from 2008 to 2018. Data included patient demographics, hospitalization, complications and radiological studies. Visual analog scale (VAS) back and leg scores were measured via patient-administered surveys preoperatively and up to 18 weeks postoperatively. Results: Forty-one patients who underwent L4-S1 stand-alone ALIF were included. Sixteen (39%) of the patients had undergone previous posterior lumbar surgery. Length of stay averaged 4.2 days. Complication rates were comparable to those of 1-level ALIF. Two patients required reoperation. Fusion rates were 100% for L4-5 and 94.4% for L5-S1. There was no significant change in lumbar lordosis (LL) or LL-pelvic incidence (PI) mismatch, but there was improved segmental lordosis (SL) and disc height at L4-S1 on final follow-up imaging. There was also modest but statistically significant improvement in VAS back and leg scores. Conclusions: Stand-alone 2-level ALIF is an option for a surgeon to perform in the absence of significant instability, even in the setting of prior posterior surgery. These procedures increase SL and disc height, but do not have the same effect on LL or LL-PI.
Introduction
Lumbar interbody fusion can be performed from anterior (anterior lumbar interbody fusion [ALIF]), lateral (lateral or oblique lumbar interbody fusion [LLIF or OLIF]), or posterior (posterior or transforaminal lumbar interbody fusion [PLIF or TLIF]) approaches [1]. The anterior approach allows for a comprehensive ventral discectomy and straightforward endplate preparation, followed by direct graft insertion [2,3]. As with posterior and lateral approaches, ALIF is associated with high fusion rates [1,[4][5][6]. This approach allows substantial deformity correction and indirect neuroforaminal decompression, while also enabling early postoperative mobilization by sparing posterior spinal and psoas muscle dissection [7][8][9][10][11]. ALIF may be superior to TLIF, LLIF or OLIF at providing both segmental and overall lordosis correction [1,4,11,12]. The anterior approach has limitations, however, as it is most suitable for the L4-5 and L5-S1 levels and provides only indirect neural element decompression. Complications can include visceral, ureteral, and vascular injuries, along with retrograde ejaculation secondary to sympathetic injury [13,14]. Stand-alone 2-level ALIF is an alternative to routinely augmenting multilevel anterior constructs with posterior fixation or fusion in patients without significant instability. However, the literature evaluating this technique is limited, especially in considering patients who have undergone prior posterior surgery. To our knowledge, no prospective or large-series retrospective studies are available. The potential benefits of stand-alone 2-level ALIF include early postoperative mobilization and high fusion rates, with satisfactory neuroforaminal height restoration and spinopelvic parameter improvement. Socioeconomic benefits may include decreased operating time, length of stay, and overall cost. We sought to evaluate stand-alone 2-level ALIF in a large, comparatively homogeneous series of patients in the absence of significant instability, even with a history of prior posterior lumbar surgery.
Materials And Methods
Institutional review board approval was obtained for this study; as a retrospective review of medical records and charts, the informed consent of patients was not sought. We performed a search of the operating room electronic scheduling record of a single tertiary hospital to locate all patients undergoing 2-level stand-alone ALIF from 2008 to 2018. A retrospective review of each patient's chart was performed to obtain information on the following patient demographics: gender, smoking status, presence of osteoporosis, history of previous posterior surgery, age at the time of surgery, body mass index and pain characteristics. Details about the operation and hospitalization were collected, including the use of bone morphogenic protein-2 (BMP-2) (Medtronic, Dublin, Ireland) and length of stay. Polyetheretherketone (PEEK) (Globus, Audubon, PA, USA) and titanium-coated PEEK (Aesculap, Tuttlingen, Germany) implants were used. Immediate postoperative and long-term complications were also collected, including re-operation, retrograde ejaculation, intraoperative vessel injury, hardware complication, adjacent segment disease, urinary symptoms, deep vein thrombosis, and cerebrospinal fluid (CSF) leak.
All preoperative, immediate postoperative, and final standing radiographs, as well as computed tomography (CT) scans, for those patients included in the analysis were evaluated. A single spine surgeon measured all spinopelvic parameters to decrease interobserver variability. Parameters measured were lumbar lordosis (LL, angle from the superior L1 endplate to the superior S1 endplate), segmental lordosis (SL, angle from the superior L4 endplate to the superior S1 endplate), sacral slope (SS), pelvic tilt (PT), pelvic incidence (PI), L4-5 anterior disc height, L4-5 posterior disc height, L5-S1 anterior disc height, and L5-S1 posterior disc height. X-rays were utilized to evaluate for fusion in the interbody cage so as to limit unnecessary ionizing radiation. When appropriate, CT scans were obtained and evaluated for the presence of interbody fusion. Only patients with imaging studies at least 40 weeks (approximately nine months) from surgery were included in the fusion analysis. A paired t-test (SAS Institute, Cary, NC, USA) was utilized to determine differences between the means of the spinopelvic parameters in the immediate postoperative period and on final X-rays in comparison to the preoperative radiographs.
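For readers unfamiliar with how such angles are derived, the sketch below computes the angle between two digitized endplate lines on a lateral radiograph. The function name and landmark coordinates are hypothetical illustrations, not the measurement software used in the study.

```python
import numpy as np

def endplate_angle(p1, p2, q1, q2):
    """Angle (degrees) between two endplate lines, each defined by two
    (x, y) landmark points digitized on a lateral radiograph."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    w = np.asarray(q2, float) - np.asarray(q1, float)
    cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical landmarks: superior L1 endplate and superior S1 endplate
ll = endplate_angle((10.0, 50.0), (40.0, 46.0), (12.0, 5.0), (42.0, 18.0))
print(f"Lumbar lordosis ~ {ll:.1f} degrees")
```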
Voluntary electronic surveys were given to patients preoperatively and at every clinic appointment with questions about pain levels. These included a 10-point visual analog scale (VAS) for a patient's maximum back pain, average back pain, maximum leg pain, and average leg pain. A paired t-test was used to determine differences between these scores at every clinic visit in comparison to the preoperative scores.
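The study ran its statistics in SAS; purely as an illustration of the equivalent paired comparison, the following Python sketch (assuming scipy is available, with made-up VAS values) shows the computation:

```python
import numpy as np
from scipy import stats

# Hypothetical paired VAS "average back pain" scores for the same
# patients preoperatively and at the 6-week visit (0-10 scale)
preop = np.array([6, 7, 5, 8, 6, 4, 7, 5])
week6 = np.array([3, 4, 2, 5, 3, 2, 4, 3])

t_stat, p_value = stats.ttest_rel(preop, week6)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```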
Patient demographics
Of the 41 patients who underwent L4-S1 ALIF, 22 were female and 19 were male. The average age at surgery was 51.9 years (95% confidence interval (CI): 47.6-56.2). The average body mass index was 28.1 (95% CI: 26.8-29.3). When asked about radicular or lower back pain as a percentage of total symptomology, patients noted that an average of 62.4% of their total symptoms were due to back pain (95% CI: 52.3-72.4). Only one patient was a smoker, and one patient had a history of osteopenia. Sixteen patients (39%) had a history of previous posterior lumbar surgery. Ten of these patients underwent bilateral laminectomies with or without discectomy at L4-5, L5-S1 or both. Five patients underwent hemilaminectomy or laminotomy with or without discectomy at L4-5, L5-S1 or both. One patient underwent a posterior fusion from L4-S1 with removal of posterior instrumentation and subsequent pseudarthrosis prior to presentation to our clinic. The demographic data are listed in Table 1.
Immediate postoperative and long-term details and complications
All exposures were performed through a transperitoneal approach with the aid of a vascular approach surgeon. The transperitoneal approach, though not standard, was the preference of the approach surgeon. BMP-2 (8.2 mg per level) was used as an interbody graft in both cages in all patients. The average length of stay was 4.2 days (95% CI: 3.6-4.8). Immediate postoperative and long-term complications are listed in Table 2.
TABLE 2: Perioperative and long-term complications for patients undergoing 2-level stand-alone anterior lumbar interbody fusion
Overall, 17 of 41 patients (41.5%) had at least a minor complication after surgery, which included reoperation, adjacent segment disease, prolonged ileus, transient or permanent retrograde ejaculation, transient urinary retention/hesitancy, hardware complication, deep vein thrombosis (DVT), intraoperative vessel injury, and cerebrospinal fluid leak. Three patients (7.32%) developed adjacent segment degeneration, with two treated conservatively and another treated with surgery. Two patients (4.88%) underwent reoperation: one for early hardware failure at L5-S1 with subsequent S1 endplate fracture, and another for early adjacent segment degeneration at L3-4. Three patients (7.32%) developed a clinically significant ileus requiring nasogastric tube placement or a prolonged hospital stay greater than seven days. Two males (10.5%) had permanent retrograde ejaculation, while another (5.26%) had transient retrograde ejaculation. Three patients (7.32%) had transient urinary dysfunction, sexual dysfunction or both. Three patients (7.32%) had an intraoperative vessel injury: two caused by retractors injuring the vena cava or iliac vein, and one iliac vein injury that occurred during screw placement. There were two (4.88%) hardware complications: lateral breach of an L4 screw, discovered and removed intraoperatively, with a resultant radiculopathy treated medically; and an L4-5 interbody cage placement causing an L4 body fracture that was managed conservatively. Two patients (4.88%) developed DVT. One patient (2.44%), who had a history of prior L4-5 discectomy, sustained an intraoperative CSF leak during discectomy at the L4-5 level.
Fusion rates and spinopelvic parameters
Thirty-six of 41 patients (87.8%) had X-rays, CT scans or both performed at least 40 weeks postoperatively. Average follow-up was 59.9 weeks (95% CI: 48.5-71.3). One patient was lost to follow-up immediately after surgery. Two patients were lost to follow-up after their six-week postoperative visit. Two patients were removed due to re-operations with subsequent posterior instrumentation and fusion. All 36 eligible patients fused at L4-5. Thirty-four of the 36 patients (94.4%) fused at L5-S1. In the two patients with L5-S1 pseudarthrosis, there was fusion mass formation at L4-5 but not at L5-S1. This was also indicated on their X-rays. X-rays alone were sufficient to confirm fusion for 20 patients. CT scans confirmed fusion in 14 of 16 patients.
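The fusion proportions above are point estimates. As an illustration only (the study did not report interval estimates for these proportions), one could attach a binomial Wilson confidence interval to them as follows:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print(wilson_ci(36, 36))  # L4-5 fusion: 36/36 (100%)
print(wilson_ci(34, 36))  # L5-S1 fusion: 34/36 (94.4%)
```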
Spinopelvic parameters were determined based on imaging acquired at the preoperative, immediate postoperative and final visit intervals; they are listed in Table 3. Thirty-four patients had preoperative standing X-rays. Seven patients either did not have standing X-rays or had only flexion-extension X-rays. Thirty-eight patients had final X-rays. Three patients were removed from the final X-ray analysis: two secondary to early reoperation and one who had no follow-up after discharge from the hospital. Of the final X-rays of 38 patients, two were obtained at the six-week visit, 16 at the 40-week visit and 20 at a visit greater than 52 weeks. There was a significant decrease of 5.0 degrees in LL between preoperative and immediate postoperative X-rays, and a significant decrease in SS of 2.1 degrees in the immediate postoperative period. However, there was no difference between preoperative and final LL or SS. There was a significant increase in SL of 7.7 degrees in the immediate postoperative period. This difference decreased to 3.9 degrees on final X-rays, though it remained statistically significant. There was no significant difference in PT between the three time periods. Finally, there was a significant increase in anterior and posterior disc heights at both L4-5 and L5-S1 immediately postoperatively. This increase continued to be statistically significant, though diminished, on the final radiographs. When evaluating PI-LL balance (PI and LL being within 10 degrees of one another), 52.6% of patients were balanced preoperatively, and 68.4% were balanced at final follow-up. This trend was not statistically significant. The average PI-LL difference was 10.5 degrees preoperatively and 8.6 degrees postoperatively, which was also not statistically significant.

The VAS maximum back pain score preoperatively was 7.9 (standard error of the mean = 0.47), decreased to 5.2 (0.56) at six weeks, and further to 4.8 (0.67) at 18 weeks. The VAS average back pain score preoperatively was 5.8 (0.46), decreased to 3.0 (0.36) at six weeks, and remained stable at 3.0 (0.53) at 18 weeks. The VAS maximum leg pain score preoperatively was 7.9 (0.34), decreased to 5.2 (0.74) at six weeks postoperatively, and then increased slightly to 5.3 (0.74) at 18 weeks. The VAS average leg pain score preoperatively was 6.0 (0.43), decreased to 3.1 (0.626) at six weeks postoperatively, and then increased slightly to 3.6 (0.77) at 18 weeks. All values from the four measures at the two postoperative time points were statistically improved (p < 0.05) when compared to the preoperative measures. This trend is graphically shown in Figure 1.
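As a small sketch of the PI-LL balance criterion used above (balanced when PI and LL are within 10 degrees of one another), with hypothetical values:

```python
def pi_ll_balanced(pi_deg, ll_deg, threshold=10.0):
    """True if pelvic incidence and lumbar lordosis are within
    `threshold` degrees of one another."""
    return abs(pi_deg - ll_deg) <= threshold

# Hypothetical preoperative and final measurements for one patient
print(pi_ll_balanced(55.0, 42.0))  # False: mismatch of 13 degrees
print(pi_ll_balanced(55.0, 47.0))  # True:  mismatch of 8 degrees
```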
FIGURE 1: Graphical depiction of visual analog scale (VAS) scores for back and leg pain. (A) Maximum back pain, (B) average back pain, (C) maximum leg pain, and (D) average leg pain all statistically improved postoperatively at the 6-week and 18-week visits (p < 0.05). Error bars show standard error of the mean.
Discussion
We present our experience with performing 2-level stand-alone ALIF in patients with symptomatic degenerative lumbar disease without significant instability. Avoiding posterior fixation or fusion in these patients decreases overall cost and operative time, while enabling early postoperative mobilization and reducing length of stay [15]. We found an average length of stay of 4.2 days, well in accordance with the national average of 5.1 days for ALIF and considerably below the average of six days for combined anterior-posterior fusion approaches [15]. However, though this technique is utilized on a widespread basis in spine surgery, most of the evidence regarding this procedure comes from heterogeneous studies or meta-analyses that combine patients undergoing 1-level and multilevel stand-alone ALIF. As such, our study describes the largest series of 2-level stand-alone ALIF in the literature. We demonstrate that this intervention achieves high fusion rates without the need for posterior instrumentation, even in those patients who had undergone previous posterior lumbar surgery, provided there is no significant instability. Based on our experience, significant instability is likely any mobile spondylolisthesis with greater than 3 mm of translation between flexion-extension X-rays. However, this claim would need significant testing with prospective, larger studies for validation. Patients with significant instability or concerns regarding the strength of anterior fixation would likely require a posterior fixation or fusion procedure and would not be candidates for a stand-alone 2-level ALIF.
A high fusion rate was achieved even with 39% of patients having undergone previous posterior surgeries, most of which were complete laminectomies rather than hemilaminectomies or laminotomies (Figure 2). This result suggests that disruption of the posterior tension band is not an absolute contraindication to performing 2-level stand-alone ALIF. Moreover, four patients had a preoperative grade-I spondylolisthesis at L4-5, three of which worsened just slightly on flexion-extension X-rays. The fourth patient did not have dynamic X-ray imaging available, though comparing upright to supine imaging modalities did not demonstrate a change in the amount of anterolisthesis. One of these four patients was lost to follow-up, but the other three fused at the L4-5 level (one of whom had an L4-5 unilateral iatrogenic pars fracture from a previous laminectomy). None of our patients had spondylolysis at either the L4-5 or L5-S1 levels. However, one patient in the cohort preoperatively had a mobile grade-I spondylolisthesis at L5-S1 that worsened to an almost grade-II spondylolisthesis upon standing. Postoperatively, this patient developed an S1 endplate fracture with anterior migration of the L5-S1 cage, ultimately requiring posterior instrumented fusion two months later. Combining the findings above, 2-level stand-alone ALIF achieves a high fusion rate in patients with a history of previous posterior surgery, provided the patient does not have gross preoperative instability, especially at L5-S1. The high fusion rates observed in this study are comparable with those of the interbody fusion literature. However, they might be inflated by patient selection (only one smoker and one patient with osteopenia) and by the use of BMP-2 in all cages. Our dose of BMP-2 per level is within the range described in the literature [16]. Nonetheless, two patients developed pseudarthrosis at L5-S1, likely due to increased forces at the lumbosacral junction compared to L4-5. In one of these patients, who developed pseudarthrosis at L5-S1 nine months postoperatively, we had initially planned to perform posterior percutaneous fixation in a delayed fashion due to a significant scoliotic deformity at L4-S1. However, the patient opted against continuing with the posterior surgery due to improvement in symptoms after the anterior procedure. The patient was subsequently lost to follow-up. The second patient developed pseudarthrosis diagnosed at 15 months postoperatively. Even though she had significant symptoms, she opted for continued conservative management and surveillance imaging with the hope of achieving delayed fusion. Of note, this patient had a high PI of 74.9 degrees. In addition, another patient with a high PI of 68.0 degrees reached delayed L5-S1 fusion at 18 months. These findings suggest that 2-level stand-alone ALIF should probably be supplemented with posterior percutaneous fixation or fusion in patients with a high PI. However, this is not an absolute recommendation, as there was also a patient in our cohort with a PI of 75.2 degrees who had fused by one year. Further prospective studies are needed to determine whether a PI threshold exists that makes posterior supplementation necessary.
We found a risk of adjacent segment degeneration after 2-level stand-alone ALIF comparable to that described in the literature. Three patients (7.32%) developed adjacent segment degeneration, two in a delayed fashion (at two years and 15 months). Both patients had developed adequate fusion at L4-S1 and ultimately opted for conservative management. The third patient developed significant adjacent segment degeneration two months postoperatively and required an L3-4 TLIF with L3-S1 posterior instrumented fusion. However, this outcome was not entirely unexpected as preoperative imaging demonstrated significant disc disease at L3-4 with an associated disc herniation causing mild stenosis. Because of this patient's young age of 25 years, we opted to initially manage only the most symptomatic segments and performed an L4-S1 ALIF. We perhaps should have initially been more aggressive with treatment of the L3-4 disc space.
The most problematic perioperative complications associated with the anterior spinal approach are ileus, vascular injury, bowel injury, DVT due to venous retraction, and damage to the sympathetic plexus causing retrograde ejaculation in men. Our low rate of ileus, even while utilizing our vascular approach surgeon's preferred transperitoneal approach, is likely due to prophylactic placement of the patients on scheduled intravenous prokinetic drugs, meticulous electrolyte replacement and strict diet advancement postoperatively. There were no bowel injuries in the cohort, likely owing to the experience of the approach surgeon. We did have three patients (7.32%) with an intraoperative vessel injury, and the vascular surgeon repaired these injuries intraoperatively with no significant blood loss or patient morbidity. This finding demonstrates the importance of having an experienced approach surgeon present and involved in the operation from beginning to end. Two patients developed DVT postoperatively, one of whom had a remote history of DVT with associated pulmonary emboli and was bridged off his anticoagulation for our surgery. The other was unprovoked. Neither of these patients developed pulmonary embolus and both were successfully treated with anticoagulation. To help protect against DVT, we administer 5000 units of heparin subcutaneously preoperatively; we also minimize venous retraction time and take retraction "breaks" every few minutes to prevent venous stasis. Lastly, our permanent retrograde ejaculation rate of 10.5% is comparable to that seen in the literature for 1- or 2-level ALIF with concurrent use of BMP-2 [17,18]. This result suggests that performing a 2-level ALIF does not specifically place patients at an increased risk of developing this complication when compared to a 1-level ALIF. It is important to note that a meta-analysis of the two techniques showed a higher postoperative retrograde ejaculation rate and a trend towards higher overall complication rates in the transperitoneal approach (which is what we used) versus the retroperitoneal approach [19]. The approach surgeon at our institution is an experienced vascular surgeon, which might explain our complication rates being similar to the complication rates following a retroperitoneal approach in the literature. Lastly, a meta-analysis of BMP-2 in ALIF demonstrated a weak trend between the likelihood of complications (retrograde ejaculation, endplate resorption, and graft subsidence) and the dose of BMP-2. Therefore, lowering the dose of BMP-2 might decrease complication rates further [16].
There is some controversy regarding the effect of ALIF on spinopelvic parameters. We found that 2-level ALIF did not significantly alter most parameters on final X-rays, though we did find an increase in SL of 3.9 degrees, as demonstrated in previously reported studies [7]. The most notable radiographic change following 2-level ALIF was increased anterior and posterior disc height at both L4-5 and L5-S1. Subsidence expectedly occurred at operated levels between the immediate postoperative and final X-rays, with a slight decrease in both final SL and disc height measurements. However, performing 2-level ALIF did not ultimately alter final LL, SS, PT or LL-PI parameters in a significant way. We suspect the transient decreases in LL and SS, observed immediately postoperatively, were due to patient discomfort and the associated positional distortion. Nonetheless, the spinopelvic parameter findings suggest that 2-level stand-alone ALIF should not be utilized as an isolated deformity operation, and this intervention requires supplementation with posterior osteotomies and compression to achieve significant corrections to LL. This is supported by the fact that, although trending toward ideal (≤10 degrees), we did not have a statistically significant change in LL-PI or in the number of patients with corrected LL-PI mismatch after surgery [20]. Further large prospective studies are needed to assess this assertion and to determine which spinopelvic parameter changes may be tied to clinical outcomes. There is increasing evidence to suggest that patients with ideal LL-PI values postoperatively have reduced back pain [20][21][22].
With regards to clinical outcomes after 2-level stand-alone ALIF, our study showed statistically significant improvement in back and leg pain postoperatively. Even though statistically significant, the clinical effect in our study was modest, as the patients continued to have back and leg pain, albeit at a lower level. However, this result should be seen with the caveat of a low patient response rate, which is almost certainly due to the surveys being completely voluntary, with some patients likely deciding not to take the time to fill them out.
This significant limitation of our study likely biases the results, especially if there is a higher propensity of patients with good outcomes to disregard the survey when compared to patients with poor outcomes, or vice versa.
This study is the largest to date assessing radiographic outcomes associated with 2-level stand-alone ALIF, though the findings remain limited by the retrospective nature of the analysis and by the number of patients. The paucity of functional outcome information is the most glaring limitation of this study and will be addressed in subsequent studies. Moreover, though our study had an acceptable follow-up rate (87.8%), with 36 of 41 eligible patients reaching an average of 59.9 weeks, the loss of some patients and the lack of long-term data may decrease our stated rates of adjacent segment degeneration and reoperation. As such, we listed absolute numbers for each characteristic studied in addition to percentages. Another limitation is the fact that most of our fusion analysis comes from evaluating X-rays, which is not ideal. Nonetheless, our practice evolved to this pattern to minimize radiation in patients who are doing well clinically. Lastly, the goal of this project was not to discuss or help determine the indications for when to perform an ALIF versus a TLIF. The goal was to show that it is reasonable and possible, in a select patient subgroup, to perform a stand-alone 2-level ALIF rather than a 2-level ALIF with posterior fixation or fusion. On the same note, in our practice, the patient population requiring ALIF in addition to posterior fixation differs from the population of patients who are candidates for a 2-level stand-alone ALIF. This distinction is the gist of this manuscript. For this reason, we decided to present these data as a case series and not as a comparative study against TLIF patients or ALIF patients with posterior fixation or fusion.
Conclusions
Utilizing a retrospective analysis, we demonstrate that performing 2-level stand-alone ALIF in patients without significant instability is a valid option and achieves high fusion rates even in the setting of previous posterior lumbar surgery. The findings also illustrate the lack of a significant change in LL, SS and PT. However, 2-level ALIF does significantly increase SL and disc height at operated levels. These changes are greatest immediately after surgery but decrease in magnitude due to subsidence on final radiographs. Further prospective studies, including higher patient numbers and functional outcomes, are required to substantiate our findings. | 2020-11-26T09:06:18.168Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "78d9cb3da588e1cbb9f9c21d23640182cc119df9",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/39184-safety-profile-and-radiographic-and-clinical-outcomes-of-stand-alone-2-level-anterior-lumbar-interbody-fusion-a-case-series-of-41-consecutive-patients.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f9a45994c4642ab2bdec21015b011d54ed30bead",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Parameter estimation for heavy binary-black holes with networks of second-generation gravitational-wave detectors
The era of gravitational-wave astronomy has started with the discovery of the binary black hole coalescences (BBH) GW150914 and GW151226 by the LIGO instruments. These systems allowed for the first direct measurement of masses and spins of black holes. The component masses in each of the systems have been estimated with uncertainties of over 10%, with only weak constraints on the spin magnitude and orientation. In this paper we show how these uncertainties will be typical for this type of source when using advanced detectors. Focusing in particular on heavy BBH of masses similar to GW150914, we find that typical uncertainties in the estimation of the source-frame component masses will be around 40%. We also find that for most events the magnitude of the component spins will be estimated poorly: only for 10% of the systems will the uncertainties in the spin magnitude of the primary (secondary) BH be below 0.7 (0.8). Conversely, the effective spin along the angular momentum can be estimated more precisely than either spin, with uncertainties below 0.16 for 10% of the systems. We also quantify how often large or negligible primary spins can be excluded, and how often the sign of the effective spin can be measured. We show how the angle between the spin and the orbital angular momentum can only seldom be measured with uncertainties below 60°. We then investigate how the measurement of spin parameters depends on the inclination angle and the total mass of the source. We find that when precession is present, uncertainties are smaller for systems observed close to edge-on. Contrary to what happens for low-mass, inspiral-dominated sources, for heavy BBH we find that large spins aligned with the orbital angular momentum can be measured with small uncertainty. We also show how spin uncertainties increase with the total mass. Finally...
I. INTRODUCTION
The Advanced LIGO [1] observatories have discovered gravitational waves (GWs) emitted by a binary black hole coalescence (BBH) on September 14th, 2015 [2]. The event was named GW150914. A few months later, a second clear BBH detection (GW151226) was made [3,4], and a weaker candidate BBH signal (LVT151012) was also identified.
The key astrophysical parameters of these sources have been estimated using Bayesian algorithms [3][4][5][6][7] and tests of general relativity have been performed [4,8,9]. The source-frame masses of the two black holes in GW150914 have been estimated [4,6]. Within general relativity, the dimensionless spin magnitude can take values in the range [0, 1], with 0 being non-spinning and 1 being maximally spinning. For both sources, the spins of the two black holes have been measured with high uncertainty, the 90% credible interval on the measurement spanning most of the prior support. For GW150914, the medians and 90% credible intervals were $0.32^{+0.47}_{-0.29}$ and $0.48^{+0.47}_{-0.43}$ [6]. Something more could be said about the spins of GW151226, for which there was evidence that at least one of the spins was larger than zero [3,4], but no meaningful constraint on the spin tilt angles has been set for either system.
Precise estimation of masses and spins of black holes from gravitational-wave sources will contribute toward the understanding of the formation and the properties of these objects, and will complement measurements made with electromagnetic radiation. For example, both the masses and spins of black holes in X-ray binaries can be measured, but those are indirect measurements. The mass is found by measuring the mass of the companion object and the projection of the radial velocity along the line of sight (which is degenerate with the inclination of the orbital plane) [10]. The masses of several black holes have been estimated using this method, with values that cover the range [5–15] M⊙ [11]. Two main methods exist to measure spins [12][13][14][15], both of which rely on modeling the disk surrounding the BH. The mass and spin estimates of GW150914 and GW151226 thus represent the first direct measurements of such quantities.
The main astrophysical implications of the discoveries have been discussed in [4,16], while a prediction of the rate of heavy BBH coalescence and prospects for detection in future observing runs were given in [17] (and later updated in [4]). The rate estimates suggest that the number of significant BBH detections by ground-based detectors could already be around one per month in the second observing run, starting before the end of 2016 [18]. In view of the numerous detections that will be made in the next few years, it is worth addressing the following question: was the precision in the measurement of parameters for the detected systems typical of what we can expect in the future? In this paper we address that question. Since results already exist in the literature (see below) for lighter BBH, we will here focus on heavy BBH. These are systems that will only be in band for a few cycles before merger, making it unclear a priori what can be deduced about the parameters of the individual binary constituents, and to what precision. Furthermore, given that advanced detectors have a selection bias toward higher masses [19], one might expect heavy BBH to be detected more often, if the rates are comparable to those for stellar-mass BBHs.
Some previous studies of parameter estimation for BBH (including heavy BBH) have been performed. [20] considers BBHs with spins aligned with the orbital angular momentum (i.e. without spin-induced precession) and reports statistical uncertainties for the main astrophysical parameters. Their results are comparable to ours for BBH of similar mass, although the reported uncertainties are slightly smaller since they do not have potential correlations coming from the precessing spin degrees of freedom. More recently, [21] looked at neutron star-black hole systems. Their uncertainties in the spin parameters are smaller than what we find here, consistent with the fact that larger mass ratios enhance the measurability of spins [22]. Most of the early work, e.g. [23,24], deals with only a few systems at a time, using post-Newtonian inspiral-only waveforms. As such, these papers are not directly comparable to ours.
We create an astrophysical population of 200 spinning heavy BBHs and estimate their parameters with a network of advanced LIGO and Virgo detectors at design sensitivity. We find that source-frame component masses can be estimated with typical uncertainties of 40%. This is slightly larger than what was measured for GW150914, owing to its large signal-to-noise ratio. Spin magnitude is hard to estimate: for the most (least) massive black hole in the system, we find that only 10% of the time will the 90% CI uncertainty be smaller than 0.7 (0.8). Similar conclusions hold for the tilt angles, i.e. the angle between each spin vector and the orbital angular momentum, for which the uncertainties will be larger than 60° for most systems. As we mentioned above, GW150914 fits perfectly in this scenario. We quantify how often large and negligible spins can be excluded, and find that large spins are easier to exclude. For example, if only BBH with primary spins up to 0.2 are considered, 90% of the time spins above 0.95 can be excluded. We also verify that the effective spin along the orbital angular momentum [4,6] can be estimated more precisely than the individual spins, and that 70% of the time one can correctly measure the sign of the effective spin, if the underlying population has even mild effective spins (below -0.3 or above 0.3).
We then show how precessing spins can be estimated more precisely as the inclination angle moves away from zero, with the uncertainties reaching a minimum for angles close to π/2, where the binary is viewed in alignment with the orbital plane. Contrary to what is expected for low-mass, inspiral-dominated sources, we find that for heavy BBH spins aligned with the orbital angular momentum are not extremely degenerate with the mass parameters, and can thus be measured very precisely. In fact, considering BBH with mass ratios of 1 and 2 and spin magnitudes of 0.9, we find that aligned spins can be measured several times better than precessing spins, regardless of the orbital orientation.
We investigate how the uncertainties depend on the (redshifted) total mass of the system and find that the uncertainties increase with the total mass, with larger increases for larger mass ratios.
Finally, we show how the properties of the underlying astrophysical distribution can be estimated, in a very simple toy model.
The rest of this paper is organized as follows. We first describe the GW network (Sec. II A) and BBH events (Sec. II B) used in this study. The main results are summarized in Sec. III, while conclusions and discussion can be found in Sec. VI.
A. Detectors
In this study, we consider a network of 3 advanced detectors, the two LIGO interferometers (IFOs) and the Virgo detector. (In Appendix A we will consider a five-detector network that includes the three instruments above plus the KAGRA detector in Japan [25] and LIGO India [26].) For all instruments, we used the noise spectral density corresponding to their design sensitivities [1,27]. We are thus focusing on instruments that will be available later in the decade. However, it is easy to realize that the main results we obtain will not strongly depend on this choice. The main difference in the detected events if the instruments are made more sensitive is that the distance distribution of the detected events will get shifted to higher values, while keeping roughly the same shape (though this will not strictly be the case for systems in the mass range we consider). Critically, the distribution of the signal-to-noise ratios (SNRs) will be the same. Since the uncertainty in the intrinsic parameters (mass, spins) mostly depends on the SNR, with the caveat above, the distribution of uncertainties we obtain should be representative of the uncertainties of the next few years. In fact, we will see that the uncertainties of GW150914 follow very well the ones we obtain here.
B. Simulated events
We simulated 200 binary black hole systems with intrinsic masses uniformly drawn from the range [30–50] M⊙, and dimensionless spin magnitudes, a ≡ c|S|/(Gm²), drawn uniformly from the range [0, 0.98]. The sky positions and orientations of the systems are isotropically distributed. The distances are drawn uniformly in comoving volume, with a lower cut on the network signal-to-noise ratio (that is, the quadrature sum of the SNRs in the individual instruments) at 12. This corresponds to distances up to ∼12 Gpc, or a redshift of ∼1.5, using a flat ΛCDM cosmology [30]. The redshift distribution of the simulated signals is shown in Fig. 1, with a vertical line showing the median measured redshift of GW150914.
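A minimal sketch of how such a population could be drawn is shown below. The variable names are ours, the distance sampling uses a crude Euclidean approximation to uniform-in-comoving-volume, and the SNR selection step is omitted; the study itself used a proper cosmology and full detector simulations.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Source-frame component masses, uniform in [30, 50] M_sun
m1 = rng.uniform(30.0, 50.0, n)
m2 = rng.uniform(30.0, 50.0, n)
m1, m2 = np.maximum(m1, m2), np.minimum(m1, m2)  # enforce m1 >= m2

# Dimensionless spin magnitudes, uniform in [0, 0.98]
a1 = rng.uniform(0.0, 0.98, n)
a2 = rng.uniform(0.0, 0.98, n)

# Isotropic spin tilts, sky position and orbital orientation
cos_tilt1 = rng.uniform(-1.0, 1.0, n)
cos_tilt2 = rng.uniform(-1.0, 1.0, n)
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
cos_iota = rng.uniform(-1.0, 1.0, n)

# Crude stand-in for uniform-in-comoving-volume: p(d) ~ d^2 out to
# d_max. The study also kept only events with network SNR >= 12.
d_max = 12_000.0  # Mpc, i.e. ~12 Gpc
dist = d_max * rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
```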
We notice that GW150914 is on the left side of the distribution, since it was detected by the 2 LIGO instruments at early sensitivity [6,31], while in this paper we consider a network made of more sensitive detectors. (The vertical dashed line in Fig. 1 is the median estimated redshift for GW150914 [4].) In Fig. 2 we show the network SNR of the population of BBH. Here too, a vertical dashed line shows the SNR of GW150914. We see that, even when considering a 3-detector network at design sensitivity, as we do in this work, GW150914 is considerably louder than the "typical" detection.
The simulations were performed using the IMRPhenomPv2 waveform approximant [32][33][34]. This is a phenomenological inspiral-merger-ringdown approximant, and is one of the two used to estimate the parameters of the detected events [3,4,6]. It must be stressed that IMRPhenomPv2 uses a simplified spin description [32,35], in which the main spin parameters are the effective component of the total spin along the orbital angular momentum (χ_eff in [6]) and perpendicular to it (χ_p in [6]). The magnitudes and orientations of the component spins can be obtained from those. Although IMRPhenomPv2 uses a simplified spin prescription, it has been shown for GW150914 that the results obtained with IMRPhenomPv2 broadly agree with those obtained with a fully precessing time-domain approximant (SEOBNRv3) [7]. The agreement might be inferior in some corners of the parameter space (e.g. for systems seen edge-on, i.e. with their orbital angular momentum forming an angle of ι ∼ π/2 with the line of sight) [36]. However, IMRPhenomPv2 is more than one order of magnitude faster to compute than SEOBNRv3 for the masses considered in this study. Considering that a single parameter estimation run requires the computation of ∼10⁶ waveforms, we will thus work with the former. When surrogate Reduced Order Models (ROMs) [37][38][39][40] become available for SEOBNRv3, possibly followed up by Reduced Order Quadrature [41,42], this study should be repeated. However, the main conclusions of this study should hold, since most events detected by advanced detectors will be oriented close to face-on (ι ∼ 0) or face-off (ι ∼ π) [43]. By using the same waveform family to simulate the signals and to estimate their parameters, we do not consider any effect of waveform systematics. In practice, different waveform families will always lead to slightly different parameter estimates, but here we assume that those differences will keep becoming smaller in the next months and years, as more and more elaborate waveform families are introduced.
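The analyses themselves were carried out with lalinference; purely as an illustration of generating an IMRPhenomPv2 waveform, here is a sketch using the pycbc package (our choice of interface, assuming it is installed; the parameter values are arbitrary):

```python
from pycbc.waveform import get_fd_waveform

# Frequency-domain IMRPhenomPv2 waveform for a heavy, mildly
# precessing BBH. Masses are detector-frame values in solar masses.
hp, hc = get_fd_waveform(approximant="IMRPhenomPv2",
                         mass1=39.0, mass2=32.0,
                         spin1z=0.3, spin1x=0.2,  # in-plane spin -> precession
                         distance=400.0,          # Mpc
                         inclination=0.4,         # rad
                         delta_f=1.0 / 8,         # Hz
                         f_lower=20.0)            # Hz
print(len(hp), hp.delta_f)
```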
All simulated BBH are added ("injected") into simulated interferometric data of advanced LIGO and Virgo.
Algorithms to estimate parameters of spinning CBC have been developed over the last several years, based on either Monte Carlo [24,44] or nested sampling [45] methods. In this paper we use the algorithm that yielded estimates for the two detected events [4], lalinference [5].
III. RESULTS
In what follows we will use the symbol Σ_x to refer to the 90% credible interval (CI) for the parameter x (with dimensions), and the symbol Γ_x for the relative uncertainty with respect to the true value: Γ_x ≡ Σ_x / x_true (dimensionless). Our Σ will thus be directly comparable with the uncertainties reported for GW150914 and GW151226.
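Given posterior samples for a parameter x, Σ_x and Γ_x as defined above can be computed as in the sketch below (we use the equal-tailed interval, one common convention; the sample values are hypothetical):

```python
import numpy as np

def ci90_width(samples):
    """Width of the equal-tailed 90% credible interval."""
    lo, hi = np.percentile(samples, [5.0, 95.0])
    return hi - lo

# Hypothetical posterior samples for a chirp mass, in solar masses
samples = np.random.default_rng(0).normal(30.0, 2.0, 10_000)
sigma = ci90_width(samples)   # Sigma_x
gamma = sigma / 30.0          # Gamma_x, relative to the true value
print(f"Sigma = {sigma:.2f} Msun, Gamma = {gamma:.1%}")
```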
A. Masses
We start by looking at the estimation of mass parameters. When sources at non-negligible redshifts are being detected, one must distinguish between the intrinsic (or source-frame) masses and the detector-frame (or redshifted) masses. Using an index s for source-frame quantities and an index d for detector-frame quantities, the relationship is trivially $m_d = (1 + z)\, m_s$, where with m we generically indicate any mass parameter. In what follows we will use m_i for the component masses, M for the total mass and q = m₂/m₁ ∈ [0, 1] for the asymmetric mass ratio. All masses will be expressed in units of solar masses. We will examine both intrinsic and redshifted masses because, while the intrinsic masses are what is astrophysically relevant, it is the redshifted masses that control the shape and phase evolution of the signals in the instruments and hence impact the uncertainties. It is known that for low-mass CBC, such as binary neutron stars or systems containing stellar-mass BHs, GWs can yield an extremely precise measurement of the chirp mass $\mathcal{M} \equiv (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}$. On the other hand, uncertainties are larger for the measurement of the component masses, total mass and mass ratio [46][47][48]. This happens because the chirp mass enters the waveform phase at the lowest order in the inspiral, while the mass ratio (and thus the component masses and total mass) enters at higher orders (see e.g. [49]). The situation is different for the heavy BBH we consider in this work, since not only the inspiral, but also the merger and ringdown phases will be in the bandwidth of the detectors. Since those depend on the total mass, we can expect similar uncertainties for the chirp mass and the total mass [50][51][52]. Furthermore, since the length of the inspiral phase shortens as the mass increases [3,4], the measurement of the chirp mass should slightly worsen as the masses increase.
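The two relations above translate directly into code; a short sketch (the function names and numerical values are ours):

```python
def redshifted(m_source, z):
    """Detector-frame (redshifted) mass from the source-frame mass."""
    return (1.0 + z) * m_source

def chirp_mass(m1, m2):
    """Chirp mass: (m1*m2)**(3/5) / (m1+m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

m1s, m2s, z = 40.0, 32.0, 0.5        # source-frame masses [M_sun], redshift
mc_source = chirp_mass(m1s, m2s)
mc_det = redshifted(mc_source, z)    # the chirp mass that shapes the signal
print(f"Mc(source) = {mc_source:.1f}, Mc(detector) = {mc_det:.1f} Msun")
```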
In Fig. 3 we report the 90% CI for the source-frame chirp mass measurement (y axis) against the true injected source-frame chirp mass, while the colorbar reports the injected redshift z. Here and in other plots (unless otherwise indicated) a yellow star reports the values for GW150914 (since we don't know the "true" value in this case, the x axis refers to the median measured values as given in [4]).
We do not see a strong correlation between injected mass and uncertainties. The only clear trend is that closer events have smaller uncertainties, due to their high SNRs. What is happening is that, as mentioned above, the shape of the signal in the detector will depend on the detector-frame masses, and thus on the redshift. If one plots the uncertainties against the detector-frame chirp mass, Fig. 4, then the correlation becomes evident.
Typical uncertainties span a broad range, from a few to ∼20 M⊙, depending on the detector-frame chirp mass. This translates into relative uncertainties (over the injected value) ranging from a few percent to 60%, as shown in Fig. 5, where once again the colorbar reports the redshift, with a peak at ∼30%.
In all these plots, we see that the uncertainties for GW150914 seem to be quite typical of systems with comparable masses. The chirp mass is not the only mass parameter of astrophysical interest: one also wants the component masses and, undoubtedly, the mass ratio. In fact, measuring the masses of heavy BHs would allow one to estimate their mass distributions, while the mass ratio can be used to distinguish formation channels [53].
In Fig. 6 we show the relative uncertainties for the source-frame mass of the primary BH (i.e. the most massive) against the intrinsic chirp mass. We see that 90% CI uncertainties of the order of several tens of percent will be common for quiet events, while nearby or loud events can have uncertainties of a few tens of percent. GW150914 lives near the tail of the distribution, with uncertainty of ∼ 25%, since its SNR was large (∼ 23.7). The histogram on the right side reports the distribution of the uncertainties. For a population like the one we considered here, the peak is at ∼ 40%.
A similar plot for the secondary object is shown in Fig. 7. We see that the uncertainties are similar to those obtained for m₁, with a slightly larger median. Earlier in this section we mentioned that for heavy BBH we expect the total mass to be estimated as well as the chirp mass (while for BBH of hundreds of solar masses, it will be estimated better than the chirp mass [50][51][52]). This is indeed confirmed by Fig. 8, where we see that typical uncertainties in the measurement of the source-frame total mass will be of a few tens of percent, with a peak of probability at ∼25%.
B. Spins
The uncertainties for the spin magnitudes of GW150914 covered most of the prior range, with only extreme spins excluded [4,6]. In [22] we have shown how uncertainties will generally be large for systems with comparable masses, unless the systems are observed close to edge-on. However, in that paper we only considered a few corners of the parameter space, and worked with stellar-mass black holes. In this section we wish to show what spin estimation will look like for an astrophysical distribution of more massive BBH.
In Fig. 9 we show the 90% CI uncertainty in the measurement of the spin magnitude for the most massive BH (y axis) as a function of the redshifted chirp mass. The true spin magnitude is reported in the colorbar. The histogram on the right shows the distribution of the uncertainties.
We find that larger spins are often easier to measure, while for small spins the 90% CI only occasionally covers less than 90% of the prior.
The dashed red line in the right panel shows the position of the 10th percentile of the uncertainty distribution, at Σ_a1 = 0.7. We thus expect that only in 10% of the cases will we be able to measure the spin magnitude of the primary BH with an uncertainty smaller than 0.7. We do not see a clear correlation of spin uncertainties with the redshifted chirp mass, since too many other factors affect the measurability of the spins. Later, in Sec. IV B, we will investigate how the spin measurement depends on the mass, working with a controlled setup.
We have indicated with a yellow star the median recovered spin magnitude and the uncertainty for GW150914, which we see is totally consistent with the uncertainty of the BBH we simulated.
The same type of plot but for the secondary spin is shown in Fig. 10 (note the different range on the y axis). As expected, the uncertainties are much larger for the secondary object (10th percentile at 0.85). We thus conclude that it will be extremely hard to measure the spin magnitude of the secondary object in heavy BBH systems. This conclusion was reached by [54] for spin-aligned BBH, and by [22] for a few precessing stellar-mass BBHs.
Two spin values which have special meaning are obviously zero and one, i.e. non-spinning and maximally spinning. In fact, one of the main conclusions of the GW150914 analysis is that the primary BH was not maximally spinning [6], whereas for GW151226 zero spin for at least one of the BHs was excluded with high confidence [3,4].
We have used subsets of our BBH to verify how often we will be able to exclude the extreme scenarios of non-spinning and maximally spinning BHs. We will focus on the primary spin since, as we just saw, the secondary is hardly ever measurable. We then check for which fraction of events we can exclude non-spinning and maximally spinning BHs. This is shown in Fig. 11. The left of the plot, with a₁^min = 0.05, thus corresponds to assuming that the astrophysical distribution of a₁ is flat in most of the allowed spin range. At the other extreme, on the right of the plot, one is assuming that nature only produces BHs with large spins in CBCs. Let us first focus on the red circles. They report the fraction of BBH having the minimum spin magnitude given on the x axis for which one can conclude a₁ > 0.05 at 90% CI. As one would expect, the worst result is obtained when we keep nearly all events (a₁^min = 0.05), since that will include events with small spins, for which it will be hard to exclude low spin values (or, indeed, to draw any conclusion). As we increase the minimum value of the true spin magnitude, moving to the right of the plot, the fraction of events for which we can exclude small spins increases, until it reaches ∼75% when we only keep sources with large spins. We remark that this fraction does not get close to 100%: even when all systems have large primary spins, for ∼20% of them we won't be able to exclude the absence of spin. The blue diamonds in the same plot quantify the fraction of events for which we can exclude that a₁ is larger than 0.95, again at the 90% CI. The curve is roughly a mirror of the previous one. If a whole distribution of spins is considered (a₁^min = 0.05), roughly 75% of the time one can exclude very large spins. As the spins increase in the underlying population, the efficiency naturally goes down, until it reaches ∼50%.
One might be surprised that even when the minimum spin is large (say 0.9), it is still the case that only ∼50% of the time the 95th percentile is smaller than 0.95. This happens because for most events, no matter their spins, the posterior distribution for a₁ will be centered in the middle of the prior, with error bars that cover a large fraction of the prior (see Fig. 13 below and the related discussion).
Fig. 11. On the x axis we give the minimum value of the spin magnitude of the primary BH. The red circles give the fraction of events (y axis) with that minimum spin for which the 5th posterior percentile is larger than 0.05. The blue diamonds report the fraction of events for which the 95th percentile is smaller than 0.95. If the underlying population is made of BHs with large spins (right side of the plot), ∼75% of the time one can exclude that the primary BH had negligible spin.

We next perform the opposite exercise, and down-select events with decreasing maximum primary spin, given on the x axis of Fig. 12; the red circles now report the fraction of these events for which negligible spins can be excluded. We see that this fraction is nearly always below 0.5. Looking at the blue diamonds, i.e. the fraction of events for which nearly maximal spins can be ruled out, we see that this number is close to 90% if only systems with small primary spins are used. However, the curve is roughly flat: as we move to large a₁^max we basically consider the whole distribution of spins, and obtain the same results as the left side of Fig. 11. It is worth stressing that the efficiency at excluding large spins is nearly always higher than that at excluding small spins, the opposite only happening when the spins are in fact large. This is, of course, yet another way of saying that it is easier to measure large spins than small ones.
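A sketch of how curves like those in Figs. 11 and 12 can be built from per-event posterior percentiles; the toy posteriors below are placeholders for the real lalinference outputs:

```python
import numpy as np

def exclusion_fractions(true_a1, p05, p95, a_min):
    """For events with true primary spin >= a_min, return the fraction
    whose 5th posterior percentile excludes negligible spin (> 0.05)
    and the fraction whose 95th percentile excludes near-maximal
    spin (< 0.95)."""
    keep = true_a1 >= a_min
    return np.mean(p05[keep] > 0.05), np.mean(p95[keep] < 0.95)

# Toy stand-ins for the real per-event posterior percentiles
rng = np.random.default_rng(1)
true_a1 = rng.uniform(0.0, 0.98, 200)
p05 = np.clip(true_a1 - rng.uniform(0.2, 0.5, 200), 0.0, 1.0)
p95 = np.clip(true_a1 + rng.uniform(0.2, 0.5, 200), 0.0, 1.0)

for a_min in (0.05, 0.5, 0.9):
    print(a_min, exclusion_fractions(true_a1, p05, p95, a_min))
```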
Given the relatively high fraction of events for which large spins can be excluded if the underlying population has random spins in the range [0, 1], it is thus not surprising that a similar conclusion could be drawn for GW150914.
In Fig. 13 we explicitly show the 90% CI (as error bars around the median) for all our events, sorted by the true value of the primary spin (empty diamonds). As mentioned above, we see that even for large spins it is not uncommon that the posterior is centered around medium spins.
Let us now look at the estimation of the effective total spin along the orbital angular momentum. This is the quantity referred to as χ_eff in [4,6]. Motivations for the use of this parameterization can be found elsewhere [55][56][57][58][59][60]. Here we stress that being able to measure the sign of χ_eff with high confidence could help favor some formation models for compact binaries [61]. In fact, the main claim that could be made about the spins of GW151226 is that χ_eff was positive and non-zero [3,4]. We find that χ_eff is estimated better than either component spin. A similar conclusion was reached by [54] for aligned-spin BBHs. In Fig. 14 we show the distribution of the 90% CI for χ_eff against the detector-frame chirp mass. The colorbar reports the true χ_eff. We see that the uncertainties are typically much smaller than those obtained while estimating the component spins (Figs. 9 and 10). This is not surprising, since it is the total spin, and in particular its projection along the orbital angular momentum, that affects the waveform length in both the time and frequency domains. In particular, 10% of events will have 90% CI uncertainties below 0.17, with the typical event having uncertainties of ∼0.35. For comparison, GW150914 had a 90% CI of 0.28 [4]. In Fig. 15 we show the median estimates of χ_eff with the 90% CI for all simulated events, with the green diamonds reporting the true simulated values. The small uncertainties suggest one might learn from χ_eff more rapidly than from the component spins. We then check how often the sign of χ_eff can be correctly measured when the underlying population has positive (negative) true values, Fig. 16. When the population has χ_eff below -0.3, ∼70% of events can be correctly identified as having negative χ_eff. The leftmost point is not reliable, since very few events in our population have χ_eff below -0.4; we expect that if the population extended to more negative values, the efficiency would continue to go up. We see this happening when we perform the opposite exercise (arrows pointing to the right): for example, if the population has positive χ_eff larger than +0.3, 80% of the time negative χ_eff can be excluded. Naturally, the exact numerical values of the efficiency at measuring the sign of χ_eff depend on the population we simulated. However, it seems safe to say that this is a much easier measurement than that of the individual spins. We end this section with a quick discussion of tilt angles, i.e. the angles between the spins and the orbital angular momentum. We will focus on the primary object since, as for the spin magnitude, the tilt angle of the secondary object will typically be unmeasurable.

Fig. 16. The arrows pointing to the left report the fraction of events with true χ_eff below that abscissa for which the 95th percentile for χ_eff is below zero. The arrows pointing to the right report the fraction of events with true χ_eff above that abscissa for which the 5th percentile for χ_eff is above zero.
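For reference, χ_eff is the mass-weighted projection of the two spins along the orbital angular momentum, as defined in [4,6]; a short sketch (the numerical values are illustrative only):

```python
def chi_eff(m1, m2, a1, a2, cos_tilt1, cos_tilt2):
    """Effective spin: mass-weighted sum of the spin components along
    the orbital angular momentum,
    chi_eff = (m1*a1*cos(t1) + m2*a2*cos(t2)) / (m1 + m2)."""
    return (m1 * a1 * cos_tilt1 + m2 * a2 * cos_tilt2) / (m1 + m2)

# Illustrative GW151226-like numbers: a small but positive chi_eff
print(chi_eff(14.0, 7.5, 0.5, 0.0, 0.8, 0.0))  # ~0.26
```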
We end this section with a quick discussion of tilt angles, i.e. the angles between the spins and the orbital angular momentum. We will focus on the primary object since, as for the spin magnitude, the tilt angle of the secondary object will typically be unmeasurable. The tilt angles are among the key quantities we wish to measure in a BBH, since they could be directly linked to the formation channel of CBCs [62][63][64]. Of course, they are not constant during the evolution of the waveform, since both the spins and the orbital angular momentum precess around the total angular momentum. Similarly to what was done in [4,6], we will quote the values of the tilts at a frequency of 20 Hz. In Fig. 17 we report the 90% CI for the tilt of the primary, τ_1, against its true value, both in degrees. The spin of the primary is given in the color bar. We see that for the typical event the uncertainty will be very large: the distribution peaks at ∼110° (histogram in the right panel). Only for ∼6% of the systems will the uncertainty be smaller than 60°. Once again, GW150914 (for which we don't show a star since the medians for the tilt angles were not made public) fits perfectly in this scenario, since it was not possible to estimate the orientation of either spin [6]. From Fig. 17 we see that large spins are typically required to have a chance of estimating the tilt angle. The other factor that plays a large role in the ability to measure spin parameters is the orientation of the orbital plane, which we discuss below in Sec. IV A.
C. Distance and sky location
We end the analysis of the uncertainties of a population of BBH events with the luminosity distance and sky location. Precise estimation of distance and sky position will play a role in some of the proposed methods to calculate cosmological parameters with gravitational waves and to pinpoint the host galaxy of CBC sources [65][66][67].
In Fig. 18 we show the relative 90% CI uncertainty against the true redshift; the color reports the true source-frame chirp mass. We see that the uncertainties show scatter at low distances, then converge toward values of around 50%. A rough Fisher-matrix-based approach would suggest that the relative errors should only depend on the SNR [49,68]. Since at large redshifts most events will have similar SNRs (corresponding to the threshold value we used to consider an event "detected"), this explains why the points converge to a similar value.
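As a sanity check of this Fisher-matrix expectation, here is a leading-order sketch; it ignores correlations of the distance with inclination and other parameters, so it should be read as a rough scaling rather than a rigorous bound. Since the strain amplitude scales as h ∝ 1/d_L, we have ∂h/∂d_L = -h/d_L, and with the usual noise-weighted inner product (·|·),

\Gamma_{d_L d_L} = \left( \frac{\partial h}{\partial d_L} \,\middle|\, \frac{\partial h}{\partial d_L} \right) = \frac{(h|h)}{d_L^2} = \frac{\rho^2}{d_L^2} \quad \Longrightarrow \quad \frac{\sigma_{d_L}}{d_L} \simeq \frac{1}{\rho},

so events detected near a common SNR threshold share a similar relative distance uncertainty, consistent with the convergence visible in Fig. 18.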
We find that the uncertainties peak at ∼50%, slightly below what was found for GW150914. We should stress that we are only reporting statistical uncertainties in the luminosity distance. As LIGO and Virgo start to detect sources at non-negligible redshifts, weak lensing could affect distance measurements. This potential systematic effect has already been investigated in the context of third-generation gravitational-wave detectors, such as the Einstein Telescope [69] or the Cosmic Explorer [70,71]. Following [66], we can assume that weak lensing could introduce a systematic uncertainty of ∼5% on the luminosity distance measurement for sources at z = 1, and smaller for sources at smaller redshifts. For all the sources in our study this potential systematic effect would thus be much smaller than the statistical uncertainty.
While unlikely, it is not impossible that BBH will in fact emit energy in the electromagnetic band, or neutrinos, as some such mechanisms have been proposed [72,73] following the discovery of GW150914 and the potential sub-threshold EM trigger found by the Fermi mission [74]. Furthermore, it could be possible to use the positions of detected events to study the large-scale structure of the Universe [75,76] and to look for the host galaxies and calculate the cosmological parameters [65].
In Fig. 19 we show cumulative distributions of the 90% credible interval for the sky position, in square degrees. In our runs we have not included marginalization over instrumental calibration uncertainties, which have the potential to increase the sky uncertainties [4,6] or to bias them, if not accounted for [77]. We have implicitly assumed that by the time the advanced detectors reach design sensitivity, calibration uncertainties, which are now at the ∼5% level [4,78], will be better understood.
Our results are comparable with those of [79], which focused on binary neutron stars. The main difference is that the uncertainties we obtain for BBH are larger than those they obtained for binary neutron stars, even though we quote 90% CIs while they used 95% CIs.
For example, for the HLV network (actually HHLV in [79], since they considered two detectors at the Hanford site, one of which will be relocated to India) we obtain a median uncertainty of 50 deg², while [79] obtains ∼30 deg². This is, of course, due to the fact that BBH signals have a smaller effective bandwidth [80] and are hence harder to localize than the longer binary neutron star sources [81].
Finally, it is worth mentioning that the sky maps shared with partner astronomers for prompt follow-up are currently produced by a low-latency algorithm (BAYESTAR, [82]), while lalinference sky maps, which include a more detailed model of the source and of the instrument calibration, follow with a higher latency. It has been shown that the low- and medium-latency maps are in very good agreement for a network of two instruments, while the agreement is lower for a three-instrument network, because lalinference is able to use data from all three detectors regardless of the presence of a trigger [83]. This discrepancy is currently being addressed in preparation for Advanced Virgo's first observing run [84] (see Section X of [82]). Ref. [83] deals with binary neutron stars, but the situation should be similar for BBH, unless significant spin precession is present. In that case lalinference should provide a more accurate sky map, since the low-latency algorithm is based on the output of search pipelines, which currently neglect precession.
IV. TRENDS
In the previous section we have focused on an astrophysical population of events and obtained distributions for the expected uncertainties of the sources' parameters. We now want to show how the estimation of the spin parameters depends on the intrinsic parameters of the source (i.e. mass and spin) as well as on its orientation.
A. Dependence on orientation
It is commonly assumed that in the limit of spins aligned with the orbital angular momentum, the spin parameters are strongly degenerate with the mass ratio for small masses [85] and hence hard to measure. Mathematically, this happens because the leading-order spin term in the inspiral phase of the waveform depends on a combination of the mass ratio and the (aligned) spins. At the same time, when misaligned spins are present, spin-spin and spin-orbit interactions make the orbital plane precess, which imprints amplitude and phase modulations on the signal [86]. One would thus think that precessing spins are easier to measure. Since the amount of precession visible at Earth is also a function of the inclination angle [6,22,86], the best-case scenario should be when precession is present and the system is observed edge-on (orbital angular momentum forming an angle of π/2 with the line of sight). In [22] it was shown, for one particular low-mass BBH system, how the uncertainties in the measurement of the spins do indeed reach a minimum for inclination angles close to π/2. However, as [85] underlines, there is no reason the known degeneracies of the inspiral phase should hold true when the merger and ringdown parts of the waveform are measurable.
In this section we investigate how the characterization of heavy BBH sources depends on the orientation of the system and of its spins.
We consider several systems with different values of the masses, spin orientations, and SNR, and analyze them for different values of the inclination angle (to be exact, what we varied is the angle between the total angular momentum and the line of sight, θ_JN [5,87]). The parameters of the sources we used are reported in Tab. I. We consider mass ratios from 1:1 to 1:2.5, and mostly focus on large spins. Tilt angles are typically chosen to be large, to ensure there are precessional effects to be seen in the first place.
We stress that every time a system is rotated, its distance is varied to keep the same SNR. The variations we see are thus not due to variations in the loudness of the source, but only to the extra complexity of the signal when it is not face-on. In Fig. 20 we report the 90% CI uncertainty for the primary spin against θ_JN. We see that the effect strongly depends on the mass ratio of the system. For equal-mass sources (diamonds) we don't see any strong variation in the ability to measure the spin magnitude. This is compatible with the fact that spin-induced modulation effects are minimal for equal-mass systems [35]. As the mass ratio increases, so does the effect of the inclination angle. For the source with mass ratio 1:1.5 (crosses) we start to observe a reduction of the uncertainties at large inclination angles, unless the spins are small.
The improvement is even more pronounced for the sources with mass ratios of 2 (squares) and 2.5 (triangles). For these sources, as expected, the uncertainties reach their minimum for angles close to π/2. Furthermore, we see that the ratio between the uncertainties in the best and worst case scenarios can be over a factor of two. Although in this paper we don't deal with neutron star-black hole binaries, the ratio would be even larger for those sources, given their larger mass ratio. We stress that by using IMRPhenomPv2 at large inclination angles we are in fact working in a corner of the parameter space where that approximant might not be highly reliable [36]. The fact that the curves we obtain look similar to those reported in [22] using a different approximant (SpinTaylorT4 [88,89]) and lower masses reassures us that the results we find in this section are at least a good indication of the trends one can expect. Of course, a similar study should be repeated as soon as fast double-spinning IMR waveforms become available. Potential systematics against numerical relativity waveforms should also be quantified.
We now want to verify whether aligned spins are harder to measure even for heavy BBH, for which the merger and ringdown are in band. In Fig. 20 we show results for two spin-aligned BBH, with mass ratios of 1 (black club suits) and 2 (yellow spade suits). In both cases, the spins are 0.9 (see Table I). We stress that while the simulated BBH had aligned spins, the parameter estimation algorithm did not make this assumption, i.e. we explored the full precessing parameter space. We see that the uncertainties in this case are considerably smaller than for all the other, precessing, systems we considered, at around 0.2.
As mentioned above, it has been stressed elsewhere [85] that one should not a priori expect the same correlations found in inspiral-dominated (i.e. low-mass) systems to hold true for heavy BBH. This is also consistent with the fact that for large aligned spins the length of the waveform is increased [90]. While this effect would be degenerate with the total mass if only the inspiral phase were in band, the presence of a measurable merger and ringdown breaks that degeneracy, improving the measurability of the spin parameters.
B. Dependence on mass
The results of the previous section have shown how the characterization of heavy BBH can have properties that were not previously thoroughly discussed or investigated. In this section we want to investigate another common assumption: that heavier CBCs are harder to characterize, their signals being shorter in both the time and frequency domains.
We consider two precessing systems with fixed mass ratios of 1 (green diamonds) and 2 (red squares) and a spin-aligned system with mass ratio of 1 (black club suits). Their parameters are given in Table II:

    label   a_1, a_2   q   SNR   cos τ_1, cos τ_2   marker
    q1      0.9, 0.9   1   17    0.5, 0.5           green diamond
    q2      0.9, 0.9   2   17    0.5, 0.5           red square
    q1ali   0.9, 0.9   1   17    1, 1               ♣

These systems were simulated with increasingly large detector-frame total mass. Every time the total mass is varied, the distance to the source is also changed to yield the same network SNR for all masses. It must be recalled that when spin-induced orbital precession is present some spin parameters become time, and hence frequency, dependent. Throughout this work we have defined spin parameters, such as the tilt angles, at 20 Hz. However, in this section we make a different choice. To ensure that the spins are defined at a fixed number of cycles before merger, we define them at a different reference frequency for each value of the mass. To be precise, for each M_tot the spins are defined at a reference frequency f_ref such that M_tot f_ref = const.
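The text fixes only the product M_tot f_ref, so the following minimal sketch has to pick an anchor; purely for illustration, it assumes the constant is set so that f_ref = 20 Hz at M_tot = 60 M_⊙ (that anchoring is an assumption, not stated in the paper).

    # Mass-dependent reference frequency satisfying M_tot * f_ref = const.
    def f_ref(m_tot_msun, f_anchor=20.0, m_anchor_msun=60.0):
        """Reference frequency scaling inversely with the total mass."""
        return f_anchor * m_anchor_msun / m_tot_msun

    for m in (60, 150, 300, 600):
        print(m, f_ref(m))  # 20.0, 8.0, 4.0, 2.0 Hz

The inverse scaling simply reflects that, in geometric units, the waveform depends on frequency only through the combination M_tot f.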
We first look at the estimation of the magnitude of the primary spin. In Fig. 21 we show the 90% CI for the primary spin magnitude versus the redshifted total mass. We see that, while the overall trend is an increase of the uncertainties with the total mass, the amount of variation depends on the mass ratio. The precessing equal-mass system (green diamonds) shows the smallest variation, with uncertainties that are already significantly large at small masses. On the other hand, the system with mass ratio of 2 (red squares) has mild uncertainties at M = 60 M_⊙ which increase by a factor of 2 as the total mass increases to M = 600 M_⊙. Remarkably, the uncertainties for the spin-aligned system (club suits) stay much smaller than those of the precessing-spin systems over the whole mass range.
Next, we report the uncertainties on the measurement of the effective spin along the orbital angular momentum. As we have seen above, the effective spin parameter can generally be estimated more precisely than either component spin. We find this is the case for all values of the masses we consider, at least for the precessing systems, Fig. 22. For the spin-aligned system we see that the uncertainty in χ_eff is similar to the uncertainty in a_1, which is not surprising since the whole spin is along the orbital angular momentum, and hence contributes to the effective spin.
We notice that the uncertainty in the estimation of χ_eff is similar between the two precessing systems, whereas we had observed large differences in the measurement of the primary spin magnitude, Fig. 21. This is due to the fact that the measurements of the component spin magnitudes are also affected by the correlation of spin magnitude with spin orientation, which depends on how much
precession is "visible". We thus look at the estimation of the effective precessing spin, χ_p, i.e. a mass-weighted combination of the total spin component in the plane of the orbit. As for χ_eff, motivations for the use of this parameterization have been discussed elsewhere [55][56][57][58][59][60]. This is shown in Fig. 23.
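For the reader's convenience, here are the definitions of these effective spins as they are commonly given in the references cited above (the text itself does not spell them out). Writing q = m_2/m_1 ≤ 1 and S_{i\perp} = m_i^2 a_i \sin\tau_i for the in-plane spin components,

\chi_{\rm eff} = \frac{m_1 a_1 \cos\tau_1 + m_2 a_2 \cos\tau_2}{m_1 + m_2}, \qquad \chi_p = \frac{\max\left( B_1 S_{1\perp},\, B_2 S_{2\perp} \right)}{B_1 m_1^2},

with B_1 = 2 + 3q/2 and B_2 = 2 + 3/(2q). Both reduce several spin degrees of freedom to single numbers that dominate, respectively, the waveform length and the strength of precession.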
We see that, especially at low masses, the q = 2 system has smaller uncertainties for χ_p than the precessing equal-mass source. This is due to the fact that, as mentioned above, precession effects are more visible when a mass asymmetry exists. We have verified that the small jump in uncertainty at M ∼ 150 M_⊙ happens as the peak of the first precession cycle (i.e. the one at lower frequency) starts going out of band, owing to the increasing total mass. We notice that for χ_p too the spin-aligned system has smaller uncertainties. However, that does not happen because it does better than the precessing systems. Quite the contrary: we find that the posterior for χ_p is centered at ∼0.4 for most spin-aligned runs (the injected value is 0.0), not far from where the prior is centered. We notice, however, that the posterior for the spin-aligned runs is slightly narrower than the prior.
V. CHARACTERIZATION OF THE MASS AND SPIN DISTRIBUTIONS
Although measuring the spins of single objects will be hard, we stress that it will be possible to learn something about the underlying population by combining information from several sources. For example, in Fig. 11 we have seen how one can very often discard large values of the spin if the true distribution has smaller spin values. In that case, one can imagine how, as more detections are made, large spins become less and less supported by the data.
In this section we want to use a very simple toy model to show how inference about the mass and spin distributions can be done. Let us consider a set of 105 BBH with masses uniformly distributed in the range [30, 50] M_⊙ and spins uniformly distributed in a ∈ [0.7, 0.98]. Under the hypothesis that the mass and spin distributions are flat, with unknown boundaries, can the extrema be estimated? If yes, how many detections are needed?
Let us start by estimating the boundaries of the component mass distribution. We will call m_min and m_max the minimum and maximum of the astrophysical distribution, and H the model that the distribution is flat. If N detections are made, symbolized by their data streams d, then using Bayes' theorem one can write

p(m_min, m_max | d, H) ∝ p(m_min, m_max | H) ∏_{i=1}^{N} p(d_i | m_min, m_max, H),

where d_i is the data stream of the i-th signal.
Each term in the product is just the usual evidence of the data, but restricted to mass values between the minimum and maximum being considered. This can be implemented trivially in the parameter estimation algorithm we used by restricting the prior range of the component masses [5]. In practice, to avoid wasting computational resources, and since the original priors are flat, we just used importance sampling [91]. The other term on the RHS is the prior distribution for the minimum and maximum, which we take as flat.
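A minimal numerical sketch of this importance-sampling step follows. It assumes flat component-mass priors, so that the evidence restricted to [m_min, m_max] is, up to event-independent constants, the fraction of posterior samples inside the box divided by the restricted prior volume, (m_max - m_min)^2 for two component masses. The Gaussian "posterior samples" are hypothetical stand-ins for real parameter estimation output, and selection effects are ignored, as in the toy model of the text.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-event posterior samples of (m1, m2): truths drawn
    # uniformly in [30, 50] Msun, smeared by a crude Gaussian "measurement".
    true_m = rng.uniform(30.0, 50.0, size=(20, 2))
    events = [(m[0] + 5.0 * rng.standard_normal(2000),
               m[1] + 5.0 * rng.standard_normal(2000)) for m in true_m]

    def log_hyperposterior(m_min, m_max, events):
        """log p(m_min, m_max | d) with flat hyperprior and flat mass priors."""
        if m_max <= m_min:
            return -np.inf
        logp = 0.0
        for m1, m2 in events:
            inside = ((m1 > m_min) & (m1 < m_max) &
                      (m2 > m_min) & (m2 < m_max)).mean()
            if inside == 0.0:   # boundaries exclude the whole posterior
                return -np.inf
            # restricted evidence, up to event-independent constants:
            logp += np.log(inside) - 2.0 * np.log(m_max - m_min)
        return logp

    grid = np.linspace(20.0, 60.0, 81)
    logpost = np.array([[log_hyperposterior(lo, hi, events) for hi in grid]
                        for lo in grid])

Normalizing the exponential of this grid and marginalizing over one boundary gives the posterior on the other, from which credible intervals like those in Fig. 24 would be read off.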
In Fig. 24 we show how the estimation of the minimum and maximum of the range of the source-frame component masses evolves as more events are detected. The x axis reports the number of events used, and the y axis the estimated values of the maximum (upper curve) and of the minimum (bottom curve). To calculate the error bars, for each choice of the number of events, N, we generated 100 random sets of N events with bootstrapping and calculated the mean and standard deviation of the edges of the 90% credible interval. The same exercise can be done for the spin magnitude. Using the same expression we derived for the masses, one obtains the joint distribution for a_min and a_max. In Fig. 25 we show the evolution of the estimates of a_min and a_max as a function of the number of detected events.
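The bootstrap step can be sketched as follows; it continues the previous hypothetical sketch (reusing log_hyperposterior, events, grid, and rng defined there). For each N it resamples N events with replacement 100 times, mirroring the 100 random sets described in the text, and records the spread of the credible-interval edges.

    def ci_edges(events, grid):
        # Gridded hyperposterior, then 90% CI edges of each marginal.
        logpost = np.array([[log_hyperposterior(lo, hi, events) for hi in grid]
                            for lo in grid])
        p = np.exp(logpost - logpost.max())
        p /= p.sum()
        out = []
        for marg in (p.sum(axis=1), p.sum(axis=0)):  # m_min, m_max marginals
            c = np.cumsum(marg)
            out += [grid[np.searchsorted(c, 0.05)],
                    grid[np.searchsorted(c, 0.95)]]
        return out  # [m_min 5th, m_min 95th, m_max 5th, m_max 95th]

    for n in (5, 10, 20):
        draws = np.array([ci_edges([events[i] for i in
                                    rng.integers(0, len(events), n)], grid)
                          for _ in range(100)])
        print(n, draws.mean(axis=0), draws.std(axis=0))

The printed standard deviations are the analogue of the error bars shown in Figs. 24 and 25.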
We see that the error bars are much larger than for the masses, which is simply a consequence of the fact that spins are harder to measure than masses. After 10-20 events, non-spinning BBH are excluded, and after a few tens of events the data point to a minimum spin of around 0.6, with standard deviations of ∼0.1. The results of this section should be seen as a simple application of a toy model, and are only meant to give the reader an idea of what can be done when several sources are available. Here we list three main caveats. The number of sources needed to, e.g., exclude negligible spins is of course dependent on our choice of population, of which we consider one possibility. For example, if the true population had spins down to, e.g., 0.4 rather than 0.7, then more sources would be needed. Furthermore, the true astrophysical spin and mass distributions would not have sharp boundaries, and their shapes would not be known to start with. Measuring the edges of a flat (top-hat) distribution leads to better results than estimating the parameters of more realistic distributions (e.g. Gaussian, power law); we thank Will Farr for having clarified this point. An example of a more elaborate treatment in the context of population modeling can be found in [92], which appeared when this work had already started. Finally, as mentioned in Section II B, the IMRPhenomPv2 approximant may not be able to accurately compute the gravitational waveform for the few edge-on [29,43], low-SNR signals in the population.
VI. CONCLUSIONS
In this paper we have considered an astrophysical distribution of heavy (m_{1,2} ∈ [30, 50] M_⊙) spinning BBH detected by a network of advanced LIGO and Virgo detectors. Sources like these will be detected in large numbers in the next few years, and it is interesting to verify what kind of measurement one can expect for the masses and spins of the black holes in these systems.
We find that the source-frame component masses will be estimated with typical relative uncertainties of the order of ∼40%. The exact size of the errors will depend, besides on the signal-to-noise ratio, on the detector-frame masses, since those control the duration and amplitude of the signal. There will thus be a coupling between source-frame mass estimation and source redshift. This correlation will be exacerbated in the next generation of gravitational-wave detectors [28]. The source-frame chirp mass is estimated with similar precision.
The spin magnitude of either object in the binary will typically be estimated with large uncertainties. We found that for the primary (i.e. most massive) object in the system, only 10% of sources will yield a measurement with uncertainty below 0.7. For the secondary, below 0.85. We found that large spins can typically be estimated with smaller uncertainties, similarly to what happens for BHs in X-ray binaries. The effective spin along the orbital angular momentum, χ_eff, can be measured better than either component spin, with uncertainties for 10% of sources below 0.17.
Considering only the BBH in our population with primary spin below 0.2, we saw that ∼90% of the time one can exclude that the BH was fast spinning (i.e. with spin above 0.95). This number goes down to roughly 80% if a flat distribution of spin is used. Conversely, if only BBH with primary spin above 0.8 are used, 75% of them will not support negligible spins (i.e. spin below 0.05). If the whole flat spin distribution is used, 55% of the systems will exclude negligible spins. We have checked how well the sign of the effective spin can be measured, which could be used to prefer some formation models for CBCs. We have found that if one only considers BBH with χ_eff < -0.3 (χ_eff > +0.3), 70% (80%) of the time one can exclude positive (negative) χ_eff.
The angle between the spin and the orbital angular momentum, which could also be used to probe the formation channels of CBCs, will also be estimated quite poorly. For only 6% of our BBH is the 90% CI for this angle below 60°.
We have verified that the uncertainties of GW150914, for both masses and spins, are typical of events in the same mass range. We have shown how correlations can exist between the ability to measure the spin parameters of precessing systems and the inclination of the orbit. However, these correlations are only clear if the mass ratio is not close to unity. For equal-mass systems, precessing spins are hard to measure regardless of the orientation of the orbit. We considered spin-aligned systems with mass ratios of 1 and 2 and spin magnitudes of 0.9, and found that the spin magnitude can be measured extremely well, with 90% CIs of ∼0.2. This is contrary to what is traditionally expected for low-mass CBCs, which are dominated by the inspiral phase and show a strong degeneracy between spin and mass ratio.
We then investigated how the uncertainties on the spin magnitude depend on the detector-frame total mass. We found that while the uncertainties get larger overall for larger masses, the increase is much more significant when the mass ratio is not close to unity. For the system with mass ratio of 2 we considered, the uncertainty in the primary spin magnitude at M_tot = 60 M_⊙ is a factor of 2 smaller than at M_tot = 600 M_⊙.
Finally, we have verified what can be said about the masses and spins of the underlying distribution of BBH events. Considering a toy model where masses are uniform in the range [30, 50] M_⊙ and spins uniform in the range [0.7, 0.98], we have shown how the boundaries can be measured, assuming a top-hat distribution, with less than 100 detections. A top-hat distribution is of course only a crude approximation, and more work will be needed to assess the characterization of more realistic distributions.
VII. ACKNOWLEDGMENTS
The authors would like to thank T. Dent, D. Gerosa, V. Kalogera, M. Pürrer, C. Rodriguez, R. O'Shaughnessy and the LSC-Virgo parameter estimation subgroup for useful discussion and comments. We also thank the Referee for the many useful comments. SV and RL acknowledge the support of the National Science Foundation and the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0757058. JV was supported by STFC grant ST/K005014/1. During this work RS has been supported by FAPESP grants 2012/14132-3, 2013/04538-5 and 2014/50727-7. The authors would like to acknowledge the LIGO Data Grid clusters, without which the simulations could not have been performed. Specifically, we thank the Albert Einstein Institute in Hannover, supported by the Max-Planck-Gesellschaft, for use of the Atlas high-performance computing cluster. We are also grateful for computational resources provided by the Leonard E Parker Center for Gravitation, Cosmology and Astrophysics at University of Wisconsin-Milwaukee. This is LIGO Document P1600292.
Appendix A: A 5-detector network

In this appendix we report results on a different BBH population, with intrinsic component masses flat in the range [25, 100] M_⊙ (with M_tot ≤ 100 M_⊙), as detected by a 5-detector network which includes the two LIGO detectors, Virgo, KAGRA, and LIGO India (henceforth HLVIJ). The main goal of this section is to show that, if an astrophysical distribution of BBH of roughly similar masses is considered, the actual configuration of the network does not matter, to first approximation, for the measurement of the intrinsic parameters. We will in fact see that the uncertainties we obtain with the HLVIJ network are similar, for masses and spins, to what we reported in the main body for the smaller HLV network.
Let us start with the relative uncertainties in the source-frame chirp mass, Fig. 26. Comparing with the corresponding plot for the HLV network, Fig. 5, we see that the uncertainties are similar, and mostly around ∼30%. The bulk of the distribution is slightly larger for HLVIJ because more events with high redshifted mass are detected by this network, owing to its larger range. [Fig. 26 caption: For an HLVIJ network, the distribution of the 90% CI relative uncertainty (in percent of the true value) in the estimation of the source-frame chirp mass (y axis) against the true source-frame chirp mass (x axis). The colorbar is the redshift of the sources. A star reports the coordinates of GW150914.]
We will not plot the distributions of uncertainties for m_1 and m_2, but just mention that they too look very similar to the corresponding HLV curves. In particular, the relative source-frame m_1 (m_2) uncertainty peaks at ∼45% (∼50%), which is slightly more than for HLV, for the reasons just mentioned above. In Fig. 27 we show instead the uncertainties for the spin magnitude of the primary. We still find that large errors will be common, with only 10% of the systems having 90% CI below 0.73 (basically the same as HLV, for which we obtained 0.70). Once again, the measurement is harder for the spin of the secondary object: 90% of the sources will have uncertainties above 0.86, i.e. it will be unmeasurable. Fig. 27 also shows that the measurement of spins gets worse for systems with large (redshifted) mass. We have seen above (Sec. IV B) how this is indeed the case. We end this appendix by mentioning that, as one would expect, sky localization gets better with the 5-detector network. Using the same figure of merit as in Sec. III C, we find that the median sky localization uncertainty is ∼25 deg², i.e. a factor of ∼2 smaller than what was obtained with the HLV network, Fig. 28. | 2017-03-01T16:17:00.000Z | 2016-11-03T00:00:00.000 | {
"year": 2016,
"sha1": "7f672b248c9e8b233f46bd6915a473a1343a3f68",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.95.064053",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d8178fff06f0463f8391eca2ca9818ea37166262",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16342284 | pes2o/s2orc | v3-fos-license | The Association Between Dietary Flavonoid and Lignan Intakes and Incident Type 2 Diabetes in European Populations
OBJECTIVE To study the association between dietary flavonoid and lignan intakes, and the risk of development of type 2 diabetes among European populations. RESEARCH DESIGN AND METHODS The European Prospective Investigation into Cancer and Nutrition-InterAct case-cohort study included 12,403 incident type 2 diabetes cases and a stratified subcohort of 16,154 participants from among 340,234 participants with 3.99 million person-years of follow-up in eight European countries. At baseline, country-specific validated dietary questionnaires were used. A flavonoid and lignan food composition database was developed from the Phenol-Explorer, the U.K. Food Standards Agency, and the U.S. Department of Agriculture databases. Hazard ratios (HRs) from country-specific Prentice-weighted Cox regression models were pooled using random-effects meta-analysis. RESULTS In multivariable models, a trend for an inverse association between total flavonoid intake and type 2 diabetes was observed (HR for the highest vs. the lowest quintile, 0.90 [95% CI 0.77–1.04]; P_trend = 0.040), but not with lignans (HR 0.88 [95% CI 0.72–1.07]; P_trend = 0.119). Among flavonoid subclasses, flavonols (HR 0.81 [95% CI 0.69–0.95]; P_trend = 0.020) and flavanols (HR 0.82 [95% CI 0.68–0.99]; P_trend = 0.012), including flavan-3-ol monomers (HR 0.73 [95% CI 0.57–0.93]; P_trend = 0.029), were associated with a significantly reduced hazard of diabetes. CONCLUSIONS Prospective findings in this large European cohort demonstrate inverse associations between flavonoids, particularly flavanols and flavonols, and incident type 2 diabetes. This suggests a potential protective role of eating a diet rich in flavonoids, a dietary pattern based on plant-based foods, in the prevention of type 2 diabetes.
Diabetes Care 36:3961-3970, 2013

The prevalence of diabetes is markedly increasing worldwide, with the number of people with diabetes projected to rise from 366 million in 2011 to 552 million in 2030 (1). Dietary patterns characterized by higher consumption of fruit and vegetables (2), such as within a Mediterranean diet (3), are associated with a reduced risk of type 2 diabetes. Flavonoids and lignans are bioactive polyphenols that are contained in plant-based foods such as fruits, vegetables, nuts, legumes, cocoa, and cereals, and in beverages such as tea, wine, and juices (4), and have been proposed to have a potential role in the prevention of type 2 diabetes through diverse biological effects, including antioxidant and anti-inflammatory properties and insulin sensitivity-enhancing effects (5)(6)(7).
Epidemiological evidence for an association between dietary intake of flavonoids and the risk of type 2 diabetes is inconsistent (8)(9)(10)(11)(12)(13). For the six flavonoid subclasses, flavanols (including flavan-3-ol monomers, proanthocyanidins, and theaflavins), anthocyanidins, flavonols, flavanones, flavones, and isoflavones (Supplementary Table 1), a range of associations with diabetes has been reported in six prospective studies (8)(9)(10)(11)(12)(13). A significant inverse association with type 2 diabetes was observed for anthocyanidins (15% risk reduction in a comparison of extreme quintiles) in a pooled analysis of Nurses' Health Study I and II and the Health Professionals Follow-Up Study (8), and significant inverse trends were observed for some flavonols (quercetin and myricetin) in the Finnish Mobile Clinic Health Examination Survey (10). However, no associations were reported in the other two U.S.-based studies (Women's Health Study and Iowa Women's Health Study) (9,11) or for any other flavonoid subclasses (8)(9)(10)(11). Among two Asian studies, the Singapore Chinese Health Study reported an inverse association of diabetes with soy intake and a borderline-significant inverse association with isoflavone intake (12), whereas the Japan Public Health Centre-Based Prospective Study observed no significant association between soy or isoflavone intakes and type 2 diabetes in the whole population, although there was an inverse association among overweight Japanese women (13). To our knowledge, there are no studies evaluating the association of dietary lignan intake with type 2 diabetes, although some experimental studies have shown promising antidiabetic properties (14,15).
In light of the inconsistent current evidence, and in particular the paucity of information in European populations with considerable variability in flavonoid and lignan intakes, the aim of this study was to investigate the association between dietary flavonoid and lignan intakes, and the risk of developing type 2 diabetes in Europe. In particular, the use of the European Prospective Investigation into Cancer and Nutrition (EPIC)-InterAct study, which was conducted across eight countries in Europe with substantial variation in the intake of flavonoids, enabled us to examine these associations comprehensively in a European population.
Study design and population
The EPIC-InterAct study is a large prospective type 2 diabetes case-cohort study (16) nested within the EPIC study (17), with more than half a million adult participants recruited in the 1990s from the following 10 European countries: Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom. With the exception of Greece and Norway, all EPIC countries participated in the EPIC-InterAct study (n = 455,680). After the exclusion of individuals without stored blood (n = 109,625) or with prevalent diabetes at baseline (n = 5,821), 340,234 participants with 3.99 million person-years of follow-up were included in this study. All participants gave written informed consent, and the study was approved by the local ethics committees in the participating countries and the Internal Review Board of the International Agency for Research on Cancer.
Type 2 diabetes case ascertainment and verification

A pragmatic, high-sensitivity approach to case ascertainment was used in order to identify all potential incident type 2 diabetes cases and to exclude all individuals with prevalent diabetes (16), using multiple (at least two) sources of evidence, including self-report and linkage to primary care registers, secondary care registers, medication registers, and hospital admission and mortality data. Cases in Germany were additionally validated by diagnostic records. Cases in Denmark and Sweden were not ascertained by self-report but were identified via local and national diabetes and pharmaceutical registers, and hence were considered verified. Follow-up was censored either on 31 December 2007, the date of type 2 diabetes diagnosis, or the date of death, whichever occurred first. In total, 12,403 verified incident cases of type 2 diabetes were identified. A random subcohort of 16,835 individuals was selected from the 340,234 participants with available stored blood samples, stratified by center. After the exclusion of 681 individuals without information on diabetes status, 16,154 subcohort individuals were included, of whom 778 developed incident type 2 diabetes during follow-up. Of the 27,779 participants in the EPIC-InterAct study (12,403 case subjects, of whom 778 were within the subcohort of 16,154 participants), we excluded 619 participants within the lowest and highest 1% of the distribution of the ratio of reported energy intake (determined from the questionnaire) to estimated energy requirements (calculated from age, sex, body weight, and height). In addition, we excluded 1,072 participants with missing information on nutritional intake or other covariates used in the statistical analysis. This resulted in a final sample of 26,088 participants for inclusion in the current analysis, with 11,559 case subjects and a subcohort of 15,258 participants, including 729 case subjects in the subcohort.
Flavonoid and lignan intake and other dietary variables
Habitual diet during the 12 months prior to recruitment was recorded using country-specific validated food frequency questionnaires or diet histories (17,18). Most centers adopted a self-administered questionnaire of 98 to 266 food items. In Spain and Ragusa (Italy), the questionnaire was administered in a personal interview using a computerized dietary program. Questionnaires in France, Italy, Spain, the Netherlands, and Germany were quantitative, estimating individual average portion sizes systematically. Those in Denmark, Naples (Italy), and Umeå (Sweden) were semiquantitative, with the same standard portion assigned to all subjects. In Malmö (Sweden) and the U.K., a questionnaire method combined with a food record was used. Total energy and nutrient intakes were estimated using the standardized EPIC Nutrient Database (19).
Estimated flavonoid and lignan intake was derived from foods included in the dietary questionnaires through a comprehensive food composition database on flavonoids and lignans, as we have previously described (20,21). Our database on flavonoids was based on the U.S. Department of Agriculture databases (22), Phenol-Explorer (23), and the U.K. Food Standards Agency database (24). This database compiles composition data on lignans and the six flavonoid subclasses (Supplementary Table 1). Furthermore, our flavonoid food composition database was expanded by using retention factors when no analytical data were available for cooked foods. The retention factors applied to all flavonoid classes except isoflavones were 0.70, 0.35, and 0.25 after frying, cooking in a microwave oven, and boiling, respectively (25).
These retention factors were not applied to isoflavones and lignans because their cooking losses are usually minimal. Our database was also expanded by calculating the flavonoid content of recipes, estimating missing values based on similar foods (by botanical family and plant part), obtaining consumption data for food group items, and using botanical data for logical zeros (when negligible amounts of flavonoids or lignans would be present in a food type, e.g., anthocyanidins in plant foods without red, blue or purple color). In nature, flavonoids and lignans are usually found as glycosides, mainly with glucose or rhamnose moieties, but other sugars may also be involved. Therefore, data on flavonoids and lignans are expressed as aglycone equivalents, after conversion of the flavonoid glycosides into aglycone contents using their respective molecular weights. The final database contains 1,877 food items, including raw foods, cooked foods, and recipes, and 10% of values for these food items are missing.
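A minimal sketch of the two corrections described above is given below. The retention factors (0.70 frying, 0.35 microwave, 0.25 boiling) are those stated in the text; the molecular weights are illustrative values for quercetin (302.24 g/mol) and its 3-glucoside (464.38 g/mol), used here only as an example and not taken from the study's database.

    RETENTION = {"fried": 0.70, "microwaved": 0.35, "boiled": 0.25, "raw": 1.00}

    def cooked_content(raw_mg, method, flavonoid_class="flavonol"):
        # Retention factors are not applied to isoflavones or lignans,
        # whose cooking losses are usually minimal.
        if flavonoid_class in ("isoflavone", "lignan"):
            return raw_mg
        return raw_mg * RETENTION[method]

    def aglycone_equivalent(glycoside_mg, mw_aglycone, mw_glycoside):
        # Glycoside content converted to aglycone equivalents by molecular weight.
        return glycoside_mg * mw_aglycone / mw_glycoside

    # Example: 10 mg of quercetin-3-glucoside, then boiled.
    q = aglycone_equivalent(10.0, 302.24, 464.38)  # ~6.5 mg aglycone
    print(cooked_content(q, "boiled"))             # ~1.6 mg after boiling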
Other variables
A lifestyle questionnaire was used to collect information about sociodemographic characteristics, smoking status, and medical history (17). Occupational and leisure-time physical activity was assessed by questionnaire and classified according to the Cambridge Physical Activity Index (26). A history of previous illness included hypertension, hyperlipidemia, previous cancers, and/or cardiovascular diseases (angina, stroke, and myocardial infarction). Information on family history of type 2 diabetes in a first-degree relative was collected for all participants except for individuals in Italy, Spain, Germany, and Oxford (U.K.). Height, weight, and waist circumference were measured by trained health professionals using standardized protocols, except in Oxford (U.K.) and France, where self-reported measurements were obtained, and Umeå (Sweden), where waist circumference was not recorded (16). BMI was calculated as weight in kilograms divided by the square of height in meters. Blood samples were collected at baseline, and hemoglobin A1c (HbA1c) was measured using high-performance liquid chromatography (Diamat Automated Glycated Hemoglobin Analyzer; Bio-Rad Laboratories Ltd., Hemel Hempstead, U.K.).
Statistical analysis
Dietary questionnaire-derived means, SDs, medians, and 5th and 95th percentiles of total intake and intakes of subclasses of flavonoids and lignans were calculated. Total flavonoid intake by country was also visualized in a box-and-whisker plot. Baseline characteristics and dietary intakes in the subcohort were summarized by quintiles of total flavonoid intake using means and SDs or frequencies. Prentice-weighted Cox regression models accounting for the case-cohort design (27) were used to estimate the associations between flavonoid and lignan intakes and type 2 diabetes in each EPIC country. Total intake and intakes of subclasses of flavonoids and lignans were categorized using subcohort-wide quintiles. Tests for linear trend were performed by assigning the medians of each quintile as scores. Intakes were also analyzed continuously, after a log2 transformation that corresponds to a doubling in flavonoid and lignan intakes. HRs were calculated using the following modeling strategy. Age was used as the underlying time scale, with entry time defined as the participant's age at baseline, and exit time as the participant's age at diagnosis of diabetes, censoring, or death (whichever came first). All analyses were stratified by center to control for center effects such as follow-up procedures and questionnaire design. Model 1 included age (as underlying time scale), sex, and total energy intake (kilocalories per day). Model 2 was additionally adjusted for the following potential lifestyle confounders: educational level (none, primary school, technical/professional, secondary school, higher education); physical activity (inactive, moderately inactive, moderately active, and active); smoking status (never, former, and current); BMI (kilograms per square meter); and alcohol intake (grams per day). Model 3 was additionally adjusted for the following potential dietary confounders: intakes of red meat, processed meat, sugar-sweetened soft drinks, and coffee (grams per day). Model 4 was additionally adjusted for the following potential mediators: intakes of fiber (grams per day), vitamin C (milligrams per day), and magnesium (milligrams per day). HRs and 95% CIs were estimated within each country and then combined by using random-effects meta-analysis. Between-country heterogeneity was assessed using the I² statistic.
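A minimal sketch of the random-effects pooling step follows, using the DerSimonian-Laird estimator on country-specific log hazard ratios. The inputs are hypothetical: in the real analysis each (log HR, SE) pair would come from a country-specific Prentice-weighted Cox model, which is not reimplemented here.

    import numpy as np

    def dersimonian_laird(log_hr, se):
        w = 1.0 / se**2                          # fixed-effect weights
        mu_fe = np.sum(w * log_hr) / np.sum(w)
        Q = np.sum(w * (log_hr - mu_fe)**2)      # Cochran's Q
        k = len(log_hr)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)       # between-country variance
        w_re = 1.0 / (se**2 + tau2)              # random-effects weights
        mu = np.sum(w_re * log_hr) / np.sum(w_re)
        se_mu = np.sqrt(1.0 / np.sum(w_re))
        i2 = max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
        return np.exp(mu), np.exp(mu - 1.96*se_mu), np.exp(mu + 1.96*se_mu), i2

    # Hypothetical country-level estimates for 8 countries:
    log_hr = np.log(np.array([0.85, 0.95, 0.78, 1.02, 0.88, 0.80, 0.93, 0.90]))
    se = np.array([0.10, 0.12, 0.15, 0.11, 0.09, 0.14, 0.13, 0.10])
    print(dersimonian_laird(log_hr, se))  # pooled HR, 95% CI bounds, I^2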
Effect modification by sex, baseline BMI category (BMI <25, 25 to <30, and ≥30 kg/m²), and smoking status (never, current, and former smokers) was assessed by modeling interaction terms, in model 4, between these variables and total flavonoid intake, and by conducting stratified analyses. Moreover, the proportional hazards assumption was assessed by testing the interaction between flavonoid intake and age (<60 and ≥60 years of age), and for all exposures there was no evidence against the assumption.
Sensitivity analyses were conducted excluding 975 case subjects in whom type 2 diabetes had been diagnosed within the first 2 years of recruitment. In a second sensitivity analysis, model 4 was additionally adjusted for hypertension and hyperlipidemia, after the exclusion of 1,971 participants with cancer and/or cardiovascular diseases at recruitment, because participants in these subgroups may have modified their diets. In a third sensitivity analysis, model 4 was additionally adjusted for history of diabetes in a first-degree relative (with the exclusion of 12,977 participants with missing data), an important risk factor for type 2 diabetes (28); finally, model 4 was additionally adjusted for waist circumference (with the exclusion of 1,824 participants without these data), another independent risk factor strongly associated with type 2 diabetes (29). In a further sensitivity analysis, non-case subjects from the subcohort were excluded if they had an HbA1c level ≥6.5% (48 mmol/mol), as this cutoff can be used as a diagnostic criterion for type 2 diabetes (as per the American Diabetes Association and the World Health Organization).
All statistical analyses were performed using Stata/SE 12.0 (StataCorp, College Station, TX). All P values were based on two-sided tests, and statistical significance was set at P < 0.05.
Baseline characteristics of the subcohort according to quintiles of total flavonoid intake are shown in Table 2. Participants in the highest quintile of total flavonoid intake were likely to be older and to have the lowest BMI and waist circumference compared with those in the lowest quintile. With increasing total intake of flavonoids, participants tended to have a more health-conscious lifestyle pattern, with greater educational level and physical activity; lower tobacco consumption; a higher intake of fruits, vegetables, fiber, vitamin C, and magnesium; and a lower consumption of processed meat. However, participants in the top quintile reported greater alcohol and red meat intake and lower coffee intake. Participants across the quintiles had similar frequencies of prevalent diseases.
The pooled HRs (95% CIs) for type 2 diabetes by quintiles of total intake and intakes of subclasses of flavonoids and lignans are shown in Table 3. Significant inverse associations were observed in model 1 (stratified by center and adjusted for age [as underlying time scale], sex, and total energy) for total intakes of flavonoids, flavanols (including flavan-3-ol monomers, proanthocyanidins, and theaflavins), anthocyanidins, flavonols, flavones, and lignans. After further adjustment for potential confounders (models 2 and 3), all associations were attenuated but were still statistically significant for flavan-3-ol monomers and flavonols. When fiber, vitamin C, and magnesium intakes were additionally included in the multivariable models (model 4), similar risk estimates were observed between the intake of all flavonoid subclasses and lignans and the incidence of type 2 diabetes as in model 3. In multivariable analyses (model 4), similar associations with type 2 diabetes were observed when dietary flavonoid and lignan exposures were assessed as continuous variables after a log2 transformation (Fig. 1 and Supplementary Fig. 2). No statistically significant heterogeneity between countries was detected for the associations of total intake and intakes of subclasses of flavonoids and lignans with type 2 diabetes, except for flavanones (I² = 52.8%, P = 0.038) and flavones (I² = 53.3%, P = 0.036) (Supplementary Fig. 2). No interactions were found with sex (P for interaction = 0.609), BMI (P = 0.680), or smoking status (P = 0.526) for total flavonoid intake.
In sensitivity analyses (Supplementary Table 2), similar results were observed after the exclusion of case subjects in whom type 2 diabetes had been diagnosed within the first 2 years of follow-up or of participants with prevalent cardiovascular diseases. When family history of diabetes was added to model 4, the associations were strengthened. After further adjustment for waist circumference, the findings were almost identical. After the exclusion of 84 non-case subjects from the subcohort with an HbA1c level ≥6.5% (48 mmol/mol) at baseline, the results were almost identical.
CONCLUSIONS: In this large European case-cohort study, an inverse trend between dietary total flavonoid intake and incidence of type 2 diabetes was observed. Flavanols, including flavan-3-ol monomers, and flavonols were the flavonoid subclasses significantly related to a lower hazard of type 2 diabetes. To date, there are only two large U.S. cohort studies that have evaluated the association between total flavonoid intake and incident type 2 diabetes, each using a different update of the U.S. Department of Agriculture database on flavonoids (22). Only the study using database release 2.1 (year 2007) observed a consistent inverse association between intake of anthocyanidins and type 2 diabetes risk (8,11). This is in line with the crude, but not the multivariable-adjusted, findings in our study, based on the database version from 2007. This inconsistency could be due to the different dietary intakes between studies; in our study, the median anthocyanidin intake in the first quintile (7.1 mg/day) was similar to that in the third quintile (8.1 mg/day) in the U.S. study (8). This suggests that the lower risk of type 2 diabetes due to intake of anthocyanidins might reach a plateau at a certain intake level. Two other prospective studies have assessed the relationships between the intake of some flavonoid subclasses and the risk of the development of type 2 diabetes (9,10). The U.S. study reported no association with intake of either flavonols or flavones (9); however, the Finnish study reported significant inverse trends for two individual flavonols (10), as in our study. These differences in the results for intakes of flavonols and flavanols between European and U.S. studies could be a result of European countries having approximately twice the intake compared with the U.S. (8,21). In both Asian studies, inverse associations with isoflavone intakes were reported (12,13), but not in Western studies (8,11). Asian countries still have the highest isoflavone intakes worldwide (∼10-fold higher than in European countries) (20,30), which may explain the differences observed in the association with type 2 diabetes between Asian and Western countries. In our study, there was no association between lignan intake and risk of type 2 diabetes, although in a U.S. study, lignan levels were significantly associated with a lower fasting insulin level (31). Indeed, in two recent experimental studies lignans have been associated with an improvement of glucose homeostasis, by increasing glucose disposal rates and enhancing hepatic insulin sensitivity (14), and with an inhibition of α-amylase activity (15). The main food sources of flavonoids were fruits and vegetables, tea, and wine. These foods (2,32,33), as well as the Mediterranean diet, a dietary pattern based on flavonoid-rich foods (e.g., fruits and vegetables, olive oil, and moderate wine consumption) (3), were associated with a reduced risk of type 2 diabetes in the EPIC-InterAct study. Similar results were observed in previous U.S. studies, where anthocyanidin-rich foods (blueberries and apples/pears) (8) and wine consumption (11), a rich source of anthocyanidins and flavanols, were inversely associated with type 2 diabetes risk.
Notably, after adjustment for compounds that co-occur in flavonoid-rich foods, such as fiber, vitamin C, magnesium, and alcohol, the associations between flavonoids and the risk of type 2 diabetes were still statistically significant in the current study, suggesting that it is unlikely that these compounds confound or mediate the association between intake of flavonoids and type 2 diabetes risk.
The potential mechanisms underlying these inverse associations between flavonoids and type 2 diabetes risk may include the modulation of postprandial glucose levels by reducing the activity of digestive enzymes (e.g., α-amylase and α-glucosidase) (34) and decreasing the active transport of glucose across the intestinal brush border membrane, by inhibiting glucose transporters such as the sodium-dependent glucose transporter and GLUT2 (35). Furthermore, some flavonoid-rich extracts improved hyperglycemia and insulin sensitivity in type 2 diabetic mice via activation of AMP-activated protein kinase, accompanied by an upregulation of GLUT4 (36). In vitro, flavonoids also had a protective effect on pancreatic β-cells by reducing inducible nitric oxide synthase gene expression, mediated through the suppression of nuclear factor-κB and c-Jun NH2-terminal kinase signaling pathways (37,38). Other antioxidant, anti-inflammatory, and antiangiogenic activities of flavonoids may also contribute to their potential protective effect against type 2 diabetes (5).
Strengths of the current study include the multicenter design and the large sample size at recruitment, from which a large number of verified incident cases of type 2 diabetes accrued during 3.99 million person-years of follow-up. This study also includes a wide variation in flavonoid and lignan intakes among participants in eight European countries. Furthermore, we were able to control for a number of plausible confounders and factors that may mask the etiological pathway of the association between flavonoid and lignan intake and type 2 diabetes. In all sensitivity analyses, the associations were almost identical, denoting the robustness of our results. Limitations of the current study include the use of a single baseline assessment of diet and other lifestyle variables. Therefore, changes in lifestyle could not be taken into account in these analyses. In addition, our results may be influenced by measurement errors of the dietary questionnaires that may have attenuated our findings, although country-specific questionnaires validated for some flavonoid-rich foods, such as fruits, vegetables, tea, and wine (17,18), were used. Furthermore, flavonoid and lignan intakes are likely to be underestimated, since the flavonoid database was incomplete (although an extensive common database was used) (20,21) and herb/plant supplement intakes were omitted in these analyses (up to 5% in Denmark, the highest-consuming country) (39). Nutritional biomarkers offer an alternative and objective method for estimating dietary intake and provide more accurate measures than self-reported questionnaires. To date, there are only a few validated biomarkers of flavonoid and lignan intakes, so further research in this field is warranted (40). However, we were unable to evaluate the association between the intakes of other polyphenols, such as phenolic acids and stilbenes, and type 2 diabetes because data on these are not yet available in the EPIC cohort. Moreover, the association of dietary intakes of flavonoids and lignans with type 2 diabetes risk might be susceptible to confounding, since high flavonoid and lignan intake reflects a healthier lifestyle. In our models, we have adjusted for other determinants of a healthy lifestyle; however, possible residual confounding cannot be excluded.
In conclusion, this large case-cohort study conducted in eight European countries supports a role for dietary intake of flavonoids in the prevention of type 2 diabetes in men and women. High total intakes of flavonoids, flavanols, flavan-3-ol monomers, and flavonols were associated with a 10, 18, 27, and 19% lower risk, respectively, of type 2 diabetes. These results highlight the potential protective effect of eating a diet rich in flavonoids (a dietary pattern based on plant-based foods) on type 2 diabetes risk.

Figure 1: HRs (and 95% CIs) for incident type 2 diabetes for a doubling of total flavonoid (A) and lignan (B) intakes across countries in the InterAct study. The pooled HR is based on a random-effects meta-analysis using Prentice-weighted Cox regression analysis with age as the underlying time scale (model 4; see STATISTICAL ANALYSIS section); stratified by center; and adjusted for sex, educational level, smoking status, physical activity levels, BMI, total energy, and intakes of alcohol, red meat, processed meat, sugar-sweetened soft drinks, coffee, fiber, vitamin C, and magnesium. | 2017-04-13T00:21:18.999Z | 2013-11-13T00:00:00.000 | {
"year": 2013,
"sha1": "2f34ef12e27cc5037b91b00b0ff954229a0bd16e",
"oa_license": "CCBYNCND",
"oa_url": "https://care.diabetesjournals.org/content/diacare/36/12/3961.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2f34ef12e27cc5037b91b00b0ff954229a0bd16e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229720431 | pes2o/s2orc | v3-fos-license
Collateral Glucose-Utilizing Pathways in Diabetic Polyneuropathy
Diabetic polyneuropathy (DPN) is the most common neuropathy manifested in diabetes. Symptoms include allodynia, pain, paralysis, and ulcer formation. There is currently no established radical treatment, although new mechanisms of DPN are being vigorously explored. A pathophysiological feature of DPN is abnormal glucose metabolism induced by chronic hyperglycemia in the peripheral nerves. Particularly, activation of collateral glucose-utilizing pathways such as the polyol pathway, protein kinase C, advanced glycation end-product formation, hexosamine biosynthetic pathway, pentose phosphate pathway, and anaerobic glycolytic pathway are reported to contribute to the onset and progression of DPN. Inhibitors of aldose reductase, a rate-limiting enzyme involved in the polyol pathway, are the only compounds clinically permitted for DPN treatment in Japan, although their efficacies are limited. This may indicate that multiple pathways can contribute to the pathophysiology of DPN. Comprehensive metabolic analysis may help to elucidate global changes in the collateral glucose-utilizing pathways during the development of DPN, and highlight therapeutic targets in these pathways.
Introduction
Diabetes leads to various neuropathies, such as diabetic polyneuropathy (DPN) [1,2]. DPN can be further categorized into diabetic sensory polyneuropathy and autonomic neuropathy. It shows the earliest onset and is the most prevalent diabetic microangiopathy [1]. DPN is a sensory-dominant neuropathy whose symptoms include hyperalgesia and/or hypoalgesia, pain, allodynia, paralysis, and ulcer formation. Pain can lead to insomnia as well as incurable ulcerations and infections that may eventually lead to lower-limb amputations, which reduce patients' quality of life. Furthermore, indolent acute myocardial infarction and arrhythmia due to autonomic neuropathy can dramatically worsen prognosis [3]. Intraepidermal small nerve fibers of the foot skin can decrease even if subjective symptoms are not evident in the early stages of DPN [4]. This reduction of small nerve fibers is progressive in DPN and can subsequently or simultaneously develop into degeneration of large myelinated fibers and demyelination. At present, there is no established radical treatment for DPN, and lifestyle intervention and strict glycemic control play significant roles in preventing DPN progression.
The current understanding of the mechanisms involved in DPN is based on aberrant glucose metabolism [1,5]. The incidence of metabolic syndrome modifies the pathology of DPN in type 2 diabetes [6,7]. Hyperglycemia can activate multiple collateral glucose-utilizing pathways, such as the polyol pathway, protein kinase C (PKC) pathway, advanced glycation end-products (AGEs) formation, hexosamine biosynthetic pathway, pentose phosphate pathway, and anaerobic glycolytic pathway [1,5,8-10]. These mechanisms are assumed to individually or synergistically trigger the onset and progression of DPN. We previously reported that these mechanisms may sequentially contribute to the pathophysiology of small fiber neuropathy related to abnormal glucose metabolism [11-13]. Initially, low levels of oxidative stress are involved in the pathogenesis of small fiber neuropathy in patients with normal-high HbA1c levels [11,13]. Thereafter, the oxidative stress level gradually increases in proportion to DPN stage [12,13]. The implication of inflammation is evident from the stage of impaired fasting glucose [12,13], and AGEs are eventually observed at the advanced stages of diabetes [12]. Similarly, hyperactivation of the polyol pathway in the peripheral nerves is implicated in the initial stages of DPN [14,15]. These findings suggest that glucose metabolism may be altered in the peripheral nerves depending on DPN stage and state. It is important to understand these changes to establish a suitable therapy for DPN. In the present review, recent developments in the mechanisms involved in DPN are introduced with a focus on glucose metabolism, particularly via the collateral glucose-utilizing pathways, in the peripheral nerves in DPN.
Glucose Metabolism in the Peripheral Nerves
The pathogenesis of diabetes involves insulin resistance in the peripheral tissues, such as adipose tissue, muscle, and liver, and a lack of insulin secretion from pancreatic β-cells, leading to abnormal glucose metabolism. Similarly, abnormal glucose metabolism is elicited by insulin resistance in the peripheral nerves in DPN, and is common between type 1 and type 2 diabetes [16]. Nevertheless, there are clear differences in the pathogenesis of DPN between type 1 and type 2 diabetes [2,17,18]. In type 1 diabetes, strict blood glucose control is more effective at suppressing the development of DPN, whereas in type 2 diabetes it is difficult to halt the progression of DPN using glycemic control alone because of the contribution of factors related to metabolic syndrome, such as dyslipidemia and obesity [2,17,18].
Glucose transportation via Glut-1 and Glut-3 is independent of insulin, in contrast to Glut-4 expressed in adipose tissue. Transported glucose is metabolized via glycolysis, a fundamental glucose metabolic pathway in which glucose is phosphorylated and metabolized stepwise in cells. Almost all the reactions in glycolysis are reversible, although the reactions catalyzed by hexokinase, phosphofructokinase, and pyruvate kinase are irreversible. Pyruvic acid (pyruvate), the final metabolite of glycolysis, is transported into the TCA cycle in the mitochondria under aerobic conditions, whereas under anaerobic conditions it is metabolized into lactic acid and released from the cells. Glut-4 dysfunction is evident in the adipose tissue of patients with diabetes, resulting in reduced incorporation of glucose. In contrast, dysfunction of Glut-1 and Glut-3 is not apparent in the peripheral nerves in DPN. Our previous comprehensive metabolome analysis revealed that sequential metabolites involved in glycolysis are preserved in the sciatic nerve tissues of streptozotocin (STZ)-induced type 1 diabetic mice [15]. Therefore, changes in Glut expression and function may contribute only minimally to the pathogenesis of DPN, even in the peripheral nerves. There are several collateral pathways associated with glycolysis, such as the polyol pathway, hexosamine biosynthetic pathway including glucosamine metabolism, PKC pathway, AGEs pathway, pentose phosphate pathway, and anaerobic glycolytic pathway (Figure 1). In the peripheral nerves, these collateral pathways are minimally activated in the non-diabetic state. In contrast, since glucose uptake into the cytosol and flux into the glycolytic pathway are increased in diabetes, the collateral pathways associated with glycolysis are simultaneously activated. For example, glucose flux into the polyol pathway is increased three- to four-fold compared with the non-diabetic state. The activated collateral pathways are known to be differently implicated in the pathogenesis of DPN.
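To make the branching structure just described easier to see at a glance, here is a minimal illustrative sketch in Python; it is our own summary of the branch points named in the text and depicted in Figure 1, not code or a model from the study:

# Illustrative sketch (ours): glycolytic intermediates and the collateral
# pathways that branch from them, as described in the text and Figure 1.
COLLATERAL_BRANCHES = {
    "glucose": ["polyol pathway (via aldose reductase)"],
    "glucose 6-phosphate": ["pentose phosphate pathway"],
    "fructose 6-phosphate": ["hexosamine biosynthetic pathway"],
    "glyceraldehyde 3-phosphate": [
        "PKC pathway (via phosphatidic acid and DAG)",
        "AGEs pathway (via methylglyoxal)",
    ],
    "pyruvate": ["anaerobic glycolytic pathway (to lactate)"],
}

def branches_from(intermediate):
    """Return the collateral pathways fed by a given glycolytic intermediate."""
    return COLLATERAL_BRANCHES.get(intermediate, [])

# Example: pathways branching from glyceraldehyde 3-phosphate.
print(branches_from("glyceraldehyde 3-phosphate"))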
Polyol Pathway
In the polyol pathway, glucose is metabolized to sorbitol in a reaction catalyzed by aldose reductase, the rate-limiting enzyme (Figure 2) [21,22]. In this reaction, nicotinamide adenine dinucleotide phosphate (NADPH) is simultaneously converted to NADP+. Activation of aldose reductase thus consumes NADPH, resulting in a reduced ability to defend against oxidative stress. Subsequently, sorbitol dehydrogenase metabolizes sorbitol into fructose, with conversion of NAD to NADH. Sorbitol accumulation leads to an imbalance of osmotic pressure in the cytosol, osmotic stress, and subsequent efflux of myoinositol and taurine. The shortage of myoinositol is directly linked with low activation of Na+/K+ ATPase and a decline in cell function. Myoinositol insufficiency can also elicit the production of diacylglycerol (DAG), an activator of PKC. Aldose reductase inhibitors (ARIs) are the only clinically approved compounds used as therapeutic agents for DPN in Japan. Hotta et al. revealed that the application of ARIs significantly ameliorated the delay of nerve conduction velocity in Japanese DPN patients, although the effects were limited in patients with poor glycemic control [14]. This finding suggests that glucose may shift to another collateral pathway of glycolysis when the polyol pathway is inhibited by ARIs. On the other hand, ARIs had a marginal effect on Caucasian subjects with DPN in most double-blinded studies [23], implying differences in the pathophysiology of DPN between Caucasian and Japanese patients. There may be factors associated with metabolic syndrome that are more dominant in Caucasian patients with diabetes compared with Japanese patients.
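Written out as reactions (standard biochemical notation, consistent with the cofactor changes described above):

\[
\text{glucose} + \text{NADPH} + \text{H}^{+} \xrightarrow{\text{aldose reductase}} \text{sorbitol} + \text{NADP}^{+}
\]
\[
\text{sorbitol} + \text{NAD}^{+} \xrightarrow{\text{sorbitol dehydrogenase}} \text{fructose} + \text{NADH} + \text{H}^{+}
\]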
PKC Pathway
PKC is a serine/threonine kinase involved in various cell signaling pathways [24]. Activation of PKC leads to cell differentiation, proliferation, and death. Continuous hyperglycemia increases the generation of glyceraldehyde 3-phosphate, an intermediate product of glycolysis, which is converted to phosphatidic acid, resulting in the accumulation of DAG (Figure 3). DAG accumulation can activate PKC. Interestingly, PKC activation in DPN differs depending on the organ involved. In endothelial cells and smooth muscle cells in blood vessels, PKC can be activated, while its activity is decreased in neuronal cells and Schwann cells [25,26]. Activation of PKC reduces Na+/K+ ATPase activity, affects expression of vascular endothelial growth factor and TGF-β, and is associated with ischemia and hypoxia in tissues. In an experimental model of DPN, inhibition of PKC-β ameliorated hyperalgesia [27]. However, since no beneficial effects of PKC inhibition were found in human DPN, clinical application has not been achieved.
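The route from glycolysis to PKC activation described above (and in the caption of Figure 3 below) condenses to:

\[
\text{glyceraldehyde 3-phosphate} \longrightarrow \text{phosphatidic acid} \longrightarrow \text{DAG} \longrightarrow \text{PKC activation}
\]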
AGEs Pathway
During chronic hyperglycemia, the aldehyde base in reducing sugars such as glucose reacts with the amino groups of proteins in a non-enzymatic manner, resulting in the generation of nitrogen glucoside, Schiff base, Amadori compounds, and AGEs. Intermediate products of AGEs are also generated from glycolysis (Figure 4). Binding of AGEs to their receptor on macrophages leads to nuclear translocation of NF-kB, an inflammatory reaction, and tissue injury [28]. AGEs may contribute to the pathogenesis of DPN, since the concentration of N(ε)-carboxymethyllysine, a type of AGE, is increased in the sciatic nerves of STZ-induced diabetic animal models [29]. Glycolysis generates dicarbonyl compounds such as glyoxal, methylglyoxal, and 3-deoxyglucosone. Glyceraldehyde 3-phosphate is converted to dihydroxyacetone phosphate, catalyzed by triosephosphate isomerase, and then to methylglyoxal, catalyzed by methylglyoxal synthase.
Fructose is an end-product of the polyol pathway and is further converted to fructose 3-phosphate and 3-deoxyglucosone. Although dicarbonyl compounds are intermediate products of AGEs, they have a higher reactivity than AGEs. In particular, methylglyoxal induces the activation of nociceptive receptors via the voltage-gated sodium channel Na(v)1.8, resulting in hyperalgesia and delayed nerve conduction velocity in mice [30]. Nav1.8 knockout in mice prevented the effects of methylglyoxal on the peripheral nerves in experimental DPN. Therefore, suppression of methylglyoxal generation is expected to be a promising treatment for painful DPN. On the other hand, excessive activation of Nav1.8 may be prevented by suppression of glyceraldehyde 3-phosphate production in glycolysis.
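The two dicarbonyl routes just described can be summarized as follows (a condensation of the text, where TPI denotes triosephosphate isomerase):

\[
\text{glyceraldehyde 3-phosphate} \overset{\text{TPI}}{\rightleftharpoons} \text{dihydroxyacetone phosphate} \longrightarrow \text{methylglyoxal}
\]
\[
\text{fructose} \longrightarrow \text{fructose 3-phosphate} \longrightarrow \text{3-deoxyglucosone}
\]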
Hexosamine Biosynthetic Pathway

The final metabolite of this pathway is uridine diphosphate (UDP)-N-acetylglucosamine (UDP-GlcNAc), which can bind to serine or threonine residues in proteins via O-linked glycosylation, resulting in post-translational modification by monomeric O-linked N-acetyl-d-glucosamine (O-GlcNAc). The hexosamine biosynthetic pathway has been suggested to function together with O-GlcNAc modification of proteins as a nutrient sensor in cells [33,34]. This reversible glycosylation can modulate the activity of many types of protein, such as the phosphorylation of kinases and the transcriptional activity of transcription factors [35]. The hexosamine biosynthetic pathway is also activated by the metabolism of glucosamine, which is taken up directly from the extracellular space via glucose transporters, into glucosamine 6-phosphate. Increased expression of glutamine:fructose-6-phosphate amidotransferase (GFAT) in pancreatic β-cells led to generation of hydrogen peroxide and decreased expression of insulin, Glut-2, and glucokinase, resulting in β-cell dysfunction [36]. In complications of diabetes, the hexosamine biosynthetic pathway is implicated in the upregulation of TGF-β expression and an increase in mesangial matrix [37]. This is ascribed to activation of the transcription factor sp1 due to activation of the hexosamine biosynthetic pathway, which triggers endoplasmic reticulum (ER) stress [38]. Similarly, ER stress is involved in the pathogenesis of DPN [39]. Although there is indirect evidence for the implication of the hexosamine biosynthetic pathway in the pathogenesis of DPN, no direct evidence has been shown.
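For orientation, the canonical entry step and end-point of the hexosamine biosynthetic pathway can be sketched as follows; the GFAT-catalyzed entry from fructose 6-phosphate is standard biochemistry rather than a detail taken from the original figures:

\[
\text{fructose 6-phosphate} + \text{glutamine} \xrightarrow{\text{GFAT}} \text{glucosamine 6-phosphate} \longrightarrow \cdots \longrightarrow \text{UDP-GlcNAc} \longrightarrow \text{protein Ser/Thr O-GlcNAcylation}
\]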
Glucosamine Pathway
Glucosamine is a precursor of UDP-GlcNAc, the main product of the hexosamine biosynthetic pathway, and is often used to mimic its activation (Figure 5). In addition to activating the hexosamine biosynthetic pathway, glucosamine flux competitively inhibits glucose uptake and downregulates glucokinase expression in pancreatic β-cells or hepatocytes, resulting in induction of ER stress or apoptotic cell death independent of the hexosamine biosynthetic pathway [36]. Lim et al. showed that glucosamine acutely reduces cellular glucose uptake, glucokinase activity, and intracellular ATP levels in induced motor neuronal cells [40]. As a result, AMP-activated protein kinase activity and ER stress increased, leading to a reduction of cell viability [40]. Although these results suggest a neurotoxic role of glucosamine, even in the peripheral nerves, detailed studies have not been conducted due to a lack of information concerning the amount of glucosamine in nerve tissues.
We recently reported the detailed roles of glucosamine in experimental DPN models of type 1 diabetes [15]. We evaluated the amount of glucosamine accumulation in the sciatic nerve of STZ-induced type 1 diabetic mice [15]. Comprehensive metabolomic analysis revealed a marked accumulation of glucosamine in the sciatic nerves, regardless of the presence of the aldose reductase gene. In vitro analysis showed that glucosamine stimulation induced cell death in a Schwann cell line and inhibited neurite outgrowth in primary cultured dorsal root ganglion (DRG) neuronal cells. Glucosamine was shown to excessively activate glucokinase, a rate-limiting enzyme of glucosamine metabolism, resulting in the depletion of ATP. Short- and long-term administration of glucosamine to non-diabetic mice resulted in a sensory-dominant neuropathy similar to DPN. These results suggest that glucosamine contributes to the onset and progression of experimental DPN. Concurrently, our results may imply that the hexosamine biosynthetic pathway partially contributes to the development and onset of DPN as well.
Pentose Phosphate Pathway
In the pentose phosphate pathway, glucose 6-phosphate is metabolized to a series of pentoses, such as ribose 5-phosphate (Figure 6).
As final metabolites of the pentose phosphate pathway, fructose 6-phosphate and glyceraldehyde 3-phosphate are produced, generating NADPH, an antioxidant substance also necessary for lipid synthesis. Ribose 5-phosphate is a source of nucleic acids. One cycle of the pentose phosphate pathway generates one molecule of carbon dioxide and two molecules of NADPH from one molecule of glucose 6-phosphate. Transketolase, which depends on thiamine diphosphate, catalyzes several important steps in the pentose phosphate pathway. In a physiological state, transketolase is activated depending on the concentration of substrate [41]. When glucose 6-phosphate and fructose 6-phosphate are saturated in a hyperglycemic state, transketolase can be activated and reused to produce pentose 5-phosphate and erythrose 4-phosphate. In experimental diabetic retinopathy, symptoms were ameliorated by activation of transketolase [42]. In new-onset DPN within one year, the single nucleotide polymorphism (SNP) rs7648309 in transketolase was significantly correlated with an elevated total symptom score, and the rs62255988 SNP was correlated with a delayed thermal threshold [8]. In an experimental DPN model, activation of glucose 6-phosphate dehydrogenase facilitated flux into the pentose phosphate pathway, resulting in inhibition of neuronal death due to hyperglycemia [9]. Thus, activation of the pentose phosphate pathway could ameliorate DPN. Benfotiamine (S-benzoylthiamine O-monophosphate) is an antioxidant compound that can ameliorate DPN symptoms. It has anti-DPN effects with multiple sites of action, including the activation of transketolase in the pentose phosphate pathway [42]. The efficacy and safety of benfotiamine have been investigated in patients with DPN in several randomized, double-blind clinical trials lasting 3-12 weeks [43-46]. All trials using benfotiamine showed significant improvement of neuropathic symptoms in DPN patients [43-46]. Nevertheless, there is a paucity of promising data regarding the long-term use of benfotiamine treatment for human DPN. This suggests that the effects of monotherapy may be less remarkable over the course of DPN, since other pathways may be activated due to the multifactorial pathophysiology of DPN.
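The stoichiometry stated above (one CO2 and two NADPH per glucose 6-phosphate) corresponds to the oxidative branch of the pathway, whose product ribulose 5-phosphate is isomerized to ribose 5-phosphate:

\[
\text{glucose 6-phosphate} + 2\,\text{NADP}^{+} + \text{H}_{2}\text{O} \longrightarrow \text{ribulose 5-phosphate} + 2\,\text{NADPH} + 2\,\text{H}^{+} + \text{CO}_{2}
\]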
Anaerobic Glycolytic Pathway
Pyruvate is the final metabolite of the aerobic glycolytic pathway (Figure 7). Pyruvate is transferred to the TCA cycle and the electron transport chain in mitochondria, resulting in the generation of 32 ATP molecules as a final product. Conversely, pyruvate can be metabolized to lactic acid by lactate dehydrogenase under anaerobic conditions. The anaerobic glycolytic pathway generates two ATP molecules and two lactate molecules from one glucose molecule. In DPN, peripheral nerves are under chronic ischemia because of the thickening and narrowing of microvessels due to diabetic microangiopathy, which lowers the intranervous blood flow [47]. Thus, the anaerobic glycolytic pathway tends to be activated in the peripheral nerves in DPN. Pyruvate dehydrogenase (PDH) is a rate-limiting enzyme of the TCA cycle [48]. It oxidizes carboxyl groups, resulting in the production of acetyl-CoA, NADH, and carbon dioxide. PDH is phosphorylated by PDH kinase (PDK), which suppresses the activation of PDH. There are four subtypes of PDK: PDK1-4 [49]. PDK is widely expressed in human tissues [50]. Specific PDK subtypes are activated in specific situations (e.g., PDK1 in anaerobic conditions, PDK2 in high acetyl-CoA and NADH conditions, PDK3 in high ATP conditions, and PDK4 in starvation and a state of low nutrition) [51]. Activation of PDK subsequently suppresses activation of the TCA cycle, resulting in activation of anaerobic glycolysis and an increase in lactic acid production. The four different PDK isoforms are expressed in diverse peripheral and central tissues [52]. Activation of PDK in the peripheral nerves in DPN has been reported [10]. In an STZ-induced diabetes model, expression of PDK2 and PDK4 was increased, with activation in the DRG. Double knockout of PDK2 and PDK4 ameliorated hyperalgesia, the activation of pain-associated ion channels and satellite glial cells, and macrophage infiltration, accompanied by suppression of the increase in lactate. In primary cultures of DRG cells, the addition of lactate to the medium increased the expression of ion channels and decreased cell viability. Thus, these results suggest that anaerobic metabolism is implicated in the pathophysiology of DPN in the STZ model. Anaerobic metabolism can decrease the viability of DRG neurons and trigger hyperalgesia. Furthermore, dichloroacetic acid, a PDK inhibitor, or FX11, a lactate dehydrogenase inhibitor, could partially alleviate hyperalgesia in the STZ model. These results indicate that PDK may be a target for the treatment of painful diabetic neuropathy.
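The energetic contrast described above can be written as net balances; the 32-ATP figure follows the text, while commonly cited estimates for complete oxidation range from about 30 to 32 ATP per glucose:

\[
\text{aerobic: } \text{glucose} + 6\,\text{O}_{2} \longrightarrow 6\,\text{CO}_{2} + 6\,\text{H}_{2}\text{O} \quad (\approx 32\,\text{ATP})
\]
\[
\text{anaerobic: } \text{glucose} \longrightarrow 2\,\text{lactate} \quad (2\,\text{ATP})
\]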
Future Perspective for Therapeutic Application
The pathophysiology of DPN changes during its course [11-13]. Because of these changes, the efficacy of ARIs can be limited, and benfotiamine shows no long-term efficacy in real-world clinical settings of DPN. These findings suggest that each collateral glucose-utilizing pathway may play a partial role in certain periods of DPN development. In other words, each pathway can be activated at different stages or in different situations of diabetes and DPN. Furthermore, it is also possible that synergistic effects of the concurrent activation of these pathways contribute to DPN development. In addition, given that the pathogenesis of DPN is multifactorial, it is important to consider the contribution of factors other than glucose metabolism. Factors of metabolic syndrome such as obesity, dyslipidemia, and hypertension are known risks for DPN and may trigger inflammatory reactions [6,12]. These factors can also influence the activity of each collateral glucose-utilizing pathway. Therefore, it is important to understand the overall changes in the metabolic pathways of the peripheral nerve tissues in human DPN in order to introduce therapies targeting the collateral glucose-utilizing pathways. To this end, comprehensive metabolomic analyses should be performed in the peripheral nerves of human DPN, which would improve our understanding of the overall changes in metabolism, including glucose flux into the collateral pathways. Such comprehensive analyses may indicate how to apply these therapies and further elucidate hidden changes in glucose metabolism in DPN, such as glucosamine accumulation [15].
Conclusions
Various collateral glucose-utilizing pathways of glycolysis can be differently or synergistically activated in response to increases in glucose flux in the peripheral nerves in DPN, and these pathways can contribute to the pathophysiology of DPN. It is crucial to identify the timing of activation and the selection of these pathways for therapeutic application. In order to establish a radical treatment for DPN, including treatments targeting the collateral glucose-utilizing pathways, we should also consider the overall changes of metabolism, including factors other than glucose, in the peripheral nerves in DPN. It is therefore expected that compounds with multiple and heterochronous targets will be identified to comprehensively treat DPN patients.
Figure 3.
Figure 3. PKC pathway. Glyceraldehyde 3-phosphate is converted to phosphatidic acid, resulting in the accumulation of DAG, which activates PKC. On the other hand, the polyol pathway can activate the PKC pathway by increasing the production of NADH. Glut, glucose transporter; DAG, diacylglycerol; NAD(H), nicotinamide adenine dinucleotide; PKC, protein kinase C; TCA cycle, tricarboxylic acid cycle; TGF-β, transforming growth factor-β; VEGF, vascular endothelial growth factor.
Figure 4.
Figure 4. AGEs pathway. Fructose, which is an end-product of the polyol pathway, is converted to 3-deoxyglucosone, and glyceraldehyde 3-phosphate is converted to methylglyoxal. 3-Deoxyglucosone and methylglyoxal are intermediate products of AGEs. AGEs, advanced glycation end-products; Glut, glucose transporter; TCA cycle, tricarboxylic acid cycle. | 2020-12-31T06:18:17.412Z | 2020-12-24T00:00:00.000 | {
"year": 2020,
"sha1": "ae17bbfb233f724d3147c4c436a4772a55e9ed86",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/1/94/pdf?version=1609759169",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99a54679b4b67a2bb6fc20c158c7e5dc9d293f4f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18780970 | pes2o/s2orc | v3-fos-license | Trends in Child Poverty in Sweden: Parental and Child Reports
We use several family-based indicators of household poverty as well as child-reported economic resources and problems to unravel child poverty trends in Sweden. Our results show that absolute (bread-line) household income poverty, as well as economic deprivation, increased with the recession 1991–96, then reduced and has remained largely unchanged since 2006. Relative income poverty has however increased since the mid-1990s. When we measure child poverty by young people’s own reports, we find few trends between 2000 and 2011. The material conditions appear to have improved and relative poverty has changed very little if at all, contrasting the development of household relative poverty. This contradictory pattern may be a consequence of poor parents distributing relatively more of the household income to their children in times of economic duress, but future studies should scrutinize potentially delayed negative consequences as poor children are lagging behind their non-poor peers. Our methodological conclusion is that although parental and child reports are partly substitutable, they are also complementary, and the simultaneous reporting of different measures is crucial to get a full understanding of trends in child poverty.
Child Poverty in Sweden
We are constantly reminded by internationally comparative studies that child poverty is widespread also in rich countries (EU 2008; UNICEF 2012). Moreover, there has been a fear during recent decades, following economic recessions and growing income inequality, that child poverty has increased, and some studies have also uncovered trends supporting such worries (e.g., EU 2008; UNICEF 2014).
Child poverty has also become a hot political topic in Sweden, despite low levels of poverty within an international perspective (Bradbury and Jäntti 2001; UNICEF 2012; Eurostat 2009). Trends in child poverty are closely monitored by political parties, authorities and interest organisations (Swedish Social Insurance Agency 2012; Swedish Save our Children 2012; Swedish Ministry of Health and Social Affairs 2004), and the government has identified a set of indicators for following up child policies, including several indicators of economic resources (Swedish Ministry of Health and Social Affairs 2007). The increasing interest in child poverty has come in the wake of the economic depression in Sweden at the beginning of the 1990s, and has been accentuated by reports of growing income inequality since the 1990s (Gustafsson et al. 2007), a trend shared with many Western countries (e.g., OECD 2008; Atkinson 2013). Studies of trends in child poverty are indispensable for assessing progress and setbacks of welfare states, but defining and measuring child poverty is not a trivial task, and differences in concepts and measures may result in different time trends. There are two main issues. The first one applies to poverty research in general, asking whether poverty is best measured in terms of deprivation or in terms of income, and, if the latter, whether a relative or an absolute income measure is preferred. The second issue is whether it is sufficient to measure child poverty in terms of household income or deprivation, or whether we should assess child poverty in terms of children's own conditions, ideally reported by children themselves.
There is a dearth of studies of trends in child poverty that use several definitions simultaneously. This is unfortunate because such studies can show how robust any trends are, and differences in trends across definitions can give valuable insights into the social forces behind them. Motivated by these advantages, this article aims to study poverty trends in Sweden using both income and deprivation definitions, measured at both the family and the child level, and using reports from parents, income registers, and children themselves. It is our belief that we are able to improve on earlier studies through this comprehensive approach, and add a more nuanced picture of how children's economic situation has developed across several macroeconomic changes.
Based on the empirical study of the period 1980/2000-2012, we ask: (i) whether trends are as gloomy as the present political discussion suggests; (ii) whether and how child poverty varies with economic recession and growth; (iii) whether vulnerable groups, in particular children of lone parents and immigrants, bear the burden of increasing poverty, in case such a trend could indeed be proven; and (iv) whether household- or child-reported poverty is more strongly associated with non-economic outcomes for children, where we concentrate on here-and-now outcomes, in other words outcomes when children are children. This comparison between household- and child-reported poverty indirectly enables us to evaluate the substitutability or complementarity of parental and child reports on poverty.
Our methodological conclusion is that the simultaneous reporting of different measures is crucial to get a full understanding of trends in child poverty, and any report based on only one indicator or from only one source should be interpreted carefully and regarded as incomplete. In substantive terms, we conclude that the economic situation of child households in Sweden has improved markedly since the beginning of the 2000s, but there are worrying signs that children in families at the bottom of the income distribution are falling further behind the rest. Parents' connection to the labour market has become more predictive of child poverty, leaving particularly children of lone parents and recent immigrants in a precarious situation. However, although their levels of poverty are very high, their trends are similar to those of other children. Child-reported poverty levels, both in relative and absolute terms, are largely unchanged since the beginning of the 2000s despite trends in parental absolute and relative poverty rates. This is an important insight that could imply intra-household compensation for children in poor families or delayed problems. We conclude that future studies should closely follow the development of relative deprivation experienced by children themselves.
What is Child Poverty?
According to a definition commonly adhered to, a person is poor who cannot live a life on par with others in the society in which they live (e.g., Sen 1983; Townsend 1979). Thus, poverty is not only a matter of survival (having food, clothes and shelter) but of having the economic means to participate in social life and to meet fellow citizens without shame. Following this lead, child poverty could be defined as a lack of economic resources, stemming from the household's economy or their own, that prevents children from participating as equals in social life.
Obviously, it is difficult to determine the level of poverty that leads to adverse social outcomes, this being dependent on age, neighbourhood and social network, for example, and it is neither reasonable nor possible to identify local or individual poverty limits. Instead, the normal procedure is to identify the poor indirectly by relating the economic situation to some general idea of "necessary consumption" in a given society at a given time; thus emanate measures of poverty in terms of income poverty (often categorized into absolute or relative) and measures in terms of material or economic deprivation. At a macro-level, using rates of social assistance (SA)/welfare benefits is also a common indicator of poverty, although open to political tinkering (cutbacks on benefits will register as decreased poverty, for example). Also, receipt of such benefits is not an ideal measure at the household or individual level because the benefits are tailor-made for lifting individuals out of poverty.
Measures of Family Poverty
The most common approach to child poverty defines children as poor if they live in poor families. This is not unreasonable, as the family economy, whether measured in terms of income poverty or deprivation, sets important limits for the quality of housing, the area of residence, and family activities and amenities, and is likely to affect the material conditions and quality of life of all its members. Measures of income poverty classify families as poor if their incomes fall below a pre-determined poverty threshold. In developed countries, this threshold is normally meant to approximate the income necessary for living a life on par with others, a life in "decency", as Galbraith (1958) put it, and usual practice is to use either a relative or an absolute income poverty threshold.
Relative income poverty assumes that there is a threshold in the income distribution under which living standards are not acceptable (e.g., Townsend 1979; OECD 2008). This is commonly, and rather arbitrarily, set at either 50 % (OECD) or 60 % (European Union) of the median income. Because of the relation to the median income, this indicator captures inequality in the bottom half of the income distribution. An alternative relative measure, less often used, is to define the poor as a given percentile group of the income distribution (but this can obviously not be used for studying trends).
Absolute income poverty, sometimes termed "minimum income standard", sets the threshold at the estimated cost of a given basket of necessities. Because this basket reflects what is seen as acceptable in a given society at a given time, the measure is relative in this time/place sense, and the label "absolute" refers to the assessment of income against a given level of consumption, in contrast to the relative measure's assessment against other people's incomes.
Staying closer to the theoretical definition of poverty, one strand of research, rather than taking the indirect route via income, measures living conditions directly (e.g., Townsend 1979; Ringen 1988; Nolan and Whelan 2011; Gordon et al. 2013; Whelan and Maitre 2013). This results in measures of (economic or material) deprivation such as subjective measures of economic hardship or more objective information on possessions and cash margin. A branch of this field seeks to integrate the deprivation and income aspects through survey questions on whether families lack some good or activity and, if so, whether this is because they cannot afford it. This is commonly combined with questions on whether the respondent sees the good or activity as a necessity in a given society (Mack and Lansley 1985; Lansley and Mack 2015; Gordon et al. 2013), allowing the estimation of a poverty line based on majority opinions ("socially perceived necessities"), leading to so-called "consensual poverty" estimates saying whether someone lacks a given number of items socially perceived as necessary at a given time and place. An alternative way of combining income and deprivation approaches is to simply define as poor those who fall below both an income poverty line and a deprivation poverty line, so-called "consistent poverty" (e.g., Callan et al. 1993).
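To make these definitions concrete, the following is a minimal sketch in Python (our illustration, not the authors' code); it assumes incomes are already equivalised disposable incomes and ignores survey weights, both of which real poverty statistics require:

# Minimal sketch (ours) of the poverty definitions discussed in the text.
from statistics import median

def relative_poverty_rate(incomes, share=0.6):
    """Share of people below `share` (e.g., 0.5 or 0.6) of the median income."""
    threshold = share * median(incomes)
    return sum(y < threshold for y in incomes) / len(incomes)

def absolute_poverty_rate(incomes, basket_cost):
    """Share of people below a fixed minimum income standard (basket cost)."""
    return sum(y < basket_cost for y in incomes) / len(incomes)

def consistent_poverty_rate(incomes, deprived, share=0.6):
    """Share who are both relative-income poor and materially deprived."""
    threshold = share * median(incomes)
    return sum(y < threshold and d for y, d in zip(incomes, deprived)) / len(incomes)

# Toy example with five households (hypothetical numbers):
incomes = [90, 150, 200, 260, 400]
deprived = [True, True, False, False, False]
print(relative_poverty_rate(incomes))           # threshold 0.6 * 200 = 120 -> 0.2
print(absolute_poverty_rate(incomes, 100))      # -> 0.2
print(consistent_poverty_rate(incomes, deprived))  # -> 0.2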
Measures of Child-Level Poverty
Poverty definitions based on the income or deprivation of families assume an equal distribution of resources within a household. This assumption has been criticized from a gender perspective (Millar 2003; Pahl 1990) but is equally questionable from a child perspective. Two children whose parents have equal incomes may themselves have different economic margins and material standards depending on what proportion of the household income they command, and the within-family distribution of economic resources can differ between different types of families. Young people can also have an economy that is partly independent of the parental incomes, for example through their own work for pay, and the access to own money can give a stronger sense of control and freedom than the access to a parent's money. In order to measure young people's wellbeing, or level-of-living, direct measures of their total incomes and other economic possessions are therefore necessary (Jonsson and Östberg 2010).
Findings from representative surveys and qualitative interviews suggest that within-family redistribution in families with a strained economy tends to be to the advantage of children, as parents often prioritize children's needs over their own (Ridge 2002; Main and Bradshaw 2014; Middleton et al. 1997; Gordon et al. 2013). Most of these findings are, however, based on parental reports, either from qualitative accounts or in terms of parental assessments of theirs and their children's access to necessities. Child-reported data on economic and material resources are scarce but vital, as they give direct information about poverty and deprivation of children in a way that is not filtered through the perceptions of their parents. Letting children convey information about their own situation is also an ethical question, as it is preferable not to let a group's living conditions be represented by others.
Several qualitative studies use children as informants (for reviews, see Ridge 2011; Redmond 2008a), while few studies use child-informant survey data to construct child-centred poverty measures (for exceptions, see Skevik 2008; Jonsson and Östberg 2010; Main and Bradshaw 2012; Main 2014; Gross-Manos 2015), and even fewer have data that allow for studying trends over time. Data elicited directly from children on several dimensions of economic resources are available for Sweden since 2000 (Jonsson and Östberg 2010), which means that we can now study not only levels but also trends over time in poverty as reported by children themselves.
Our Measures of Child Poverty
The measures discussed above all have their pros and cons, and our view is that there is no strong theoretical argument to prefer one over another. Using a children's rights perspective, Redmond (2008b) similarly concludes that none of the dominant theoretical frameworks on poverty suggests a clear-cut definition of child poverty. Many empirical studies also show a relatively small overlap between poverty using different measures (e.g., Mood and Jonsson 2014; Halleröd and Larsson 2008; Whelan et al. 2001), which could be interpreted as poverty being generically multi-faceted, although different definitions of poverty tend to be similarly related to sociodemographic characteristics (Jonsson and Östberg 2004). In light of all this, our preference is to show a comprehensive picture of trends in child poverty, using several indicators and reports from parents, registers and children themselves. This allows us to see how sensitive trends are to different definitions, and any observation of systematic differences in trends across definitions opens up for an increased understanding of what aspects of poverty they capture and how these different aspects vary with other societal changes. Table 1 outlines our course by showing the types of indicators we base our analyses on.
Two of our variables intend to capture the relative deprivation element of poverty by explicitly building comparisons with others into the measure: at the parental level, we use relative income poverty, while on the child level we ask children whether they can afford a social participation and consumption level on par with their friends. We also use a range of measures to capture absolute poverty, by which we mean measures that make no reference to other people's living standards: at the family level, we use a measure of absolute income poverty (minimum income standard), several indicators of parent-reported deprivation, and one measure of recipiency of social assistance. At the child-level, we use child-reported pocket money, income from work, material possessions, and cash margin. Data sources and variables are presented in sections 3 and 4.
Data Sources
We draw on several data sources, mostly recent Swedish survey data, complemented with register information on social assistance and income from tax registers. The adult surveys used have sample sizes of around 6000 to 11,000 respondents per wave/year, while the child surveys (for ages 10-18) are around 1000 respondents per wave. The data sources are listed in Table 2, with details in Appendix 1.
Variables
Our analyses focus on the following poverty indicators, which are described briefly here, with more details in Appendix 2.

Deprivation, 1980-2012

In the public debate on child poverty, the focus is often on the most recent annual ups and downs, which certainly can be felt by families living on the margin. However, more important than the temporary (often "accidental" or stochastic) bumps are the longer-term trends. Finding data that can cover long time periods is difficult, but we use two surveys on the basis of which we can study economic deprivation since 1980, although not all indicators are available for the whole period. The annual ULF surveys contribute to the information underlying the curves in Fig. 1, except for those beginning in 2004 or later, which are part of EU-SILC. The LNU survey results are shown as dots for the years the study was conducted (1981, 1991, 2000, and 2010). 1 Figure 1 shows the development of child poverty over time in terms of the economic deprivation of families with children. The statistics are based on information from parents who respond to questions about the household, but we recalculated the percentages according to the number of children in each family, thus making children the basis of the analysis. The exception is the 2004-2012 estimates of "making ends meet" from EU-SILC, which refer to the proportion of child households.
In the longer time perspective, it is not any specific trend but the fluctuations in economic deprivation that stand out. Such problems were uncommon during the economic boom of the late 1980s but increased rapidly following the economic crash that started in autumn 1991 in Sweden and had repercussions up to 1996-1997. This downturn mirrored what many other countries experienced in 2008-2009. For example, in 1990, 12 % of children (aged 0-18) lived in households with no cash margin, while the corresponding figure for 1997 was 25 %. During this period, the proportion of children living in families who had difficulties meeting their economic needs increased from 20 to 30 %. This was an unprecedented worsening of the economy of families with children.
Following the high poverty rates in 1996-1997, the recovery was slow at first, and it was not until 2005-2006 that these economic problems had returned to pre-recession levels. All in all, poverty rates in terms of deprivation remained unusually high for a good decade. For many children, this period represents much of their childhood.
On a positive note, the two surveys support the conclusion that the proportion of children in households with economic troubles has decreased rather steadily since 1996. In the ULF studies, it is unfortunately impossible, due to changes in methods and survey questions, to record comparable data after 2005, but we can analyse trends from 2008 and onwards. From 2004, EU-SILC data can be used. The overriding conclusion is that the recent period, from around 2004 to 2010-2012, is characterized by subsiding rates of economic deprivation in child households. The results from the LNU survey are important here because they add comparability in indicators of cash margin and subjective economic crisis, and they support the conclusion that the proportion of children in households facing economic problems in 2010 is somewhat lower than in 1991, before the recession. 2
Changes in Household Income Poverty and Social Assistance
While the trends in economic deprivation can be studied quite far back, it is difficult to find comparable data on income poverty predating 1991, from which year there are reliable data from the annual Household Finances Survey (HEK). We use this survey for studying income poverty and social assistance between 1991 and 2012. Again, measures are at the household level, but we express them as the proportion of poor children. Figure 2 shows both the household absolute and relative income poverty, in addition to SA and, taken from Fig. 1 as a benchmark, economic deprivation in the form of lacking cash margin. It is evident that this indicator teams up with absolute income poverty and SA recipiency in following a counter-cyclical pattern: during the recession poverty levels increased, and with the economic improvement following this they decreased. Many families with children suffered economically during the crisis: the proportion of children in families in absolute income poverty (that is, below the minimum income standard) increased from 8 to 19 % in a few years. Just as we saw in Fig. 1 for economic deprivation, the period after 2008 has not witnessed much change.
The results above contrast sharply against the trend for relative household poverty rates, which tended to fall during the recession (1991-1994), but showed, overall, increasing figures during the long recuperation. Thus, relative income poverty has been, for most of the period we study, pro-cyclical: decreasing in bad times and increasing in good. This is not a very good quality of an indicator of poverty, as it lacks face validity. 3 However, it is still an interesting measure, we believe, in conjunction with statistics on absolute income poverty. The reduction of child poverty in the 1996-2006 period in Sweden was due to rising real incomes in child households, but from 2006 real incomes stalled for those at the lower part of the income distribution at the same time as those higher up could maintain some growth. The result was a noteworthy increase in relative but no change in absolute income poverty, reflecting the important fact that children in poorer homes slid further behind other children without being "compensated" by improving purchasing power. One can hypothesize that the relative dimension becomes all the more important in a stagnating economy, but whether the increasing income inequality trickles down to children is an empirical question which we turn to when studying children's own economy.
While we find it natural to relate poverty trends to economic up- and downturns in a descriptive sense, it is obvious that the causal story behind them is more intricate and far beyond the scope of this analysis. Apart from macroeconomic factors, child poverty trends are dependent on, at the least, demographic factors (e.g., immigration, divorce rates, fertility) and policy changes (e.g., family, social, and tax policy). In Fig. 2, the pattern of increase in relative income poverty since 2006 in Sweden, for example, is likely to be a result of a rolling tax reform that had as its primary goal to reduce tax on employment at the expense of benefits of various kinds (the main source of income for the non-employed). During this period, relative household poverty increased from 60 to 90 % for children with non-employed parent(s); in contrast, it was merely 3 % for children with two employed parents (Mood and Jonsson 2014).
Because non-employment is such a major, and growing, source of child poverty in Sweden, the dependence on market rewards for the disposable household income is critical. A consequence is that children of lone parents and immigrants are particularly vulnerable groups in Sweden, just as in most studied countries (Gornick and Jäntti 2011; Smeeding et al. 2009; Eurostat 2015). Closer inspection of our data verifies that the levels of poverty are much higher for these groups, but the trends are quite similar to other groups'. Children in these at-risk categories are exceptionally sensitive to economic up- and downturns: absolute income poverty affected every third child of lone parents and every second child of immigrants in 1996, dropping to half of these discouraging figures in 2010. While the improvement was great, both the level of and amplitude in poverty rates make these groups a prime concern for policy.
Note: 1992 values are interpolated for absolute and relative income poverty.
Recent Child Poverty Trends: an International Perspective
Our results in Figs. 1 and 2 are interesting in an international macroeconomic perspective: they demonstrate how little the 2008-2009 recession hit Swedish children, especially as compared with the recession in the 1990s. The Swedish economy was restructured following the depression in the early 1990s, and although there was a temporary drop in gross domestic product (GDP) in 2008-2009, economic recovery was swift and left almost no traces in economic deprivation or income poverty. In comparison, in some other European nations, such as Ireland, Iceland and Greece, the increase in child poverty rates was substantial (UNICEF 2014).
In contrast to the most recent international recession, the one in the 1990s had both sudden and long-term negative effects on child poverty in Sweden. If this experience is anything to go by, it might take a whole decade for child poverty in the countries economically most affected in 2008-2009 to return to pre-recession levels.
Poverty Among Children
When measuring poverty directly among children, the method must be partly different from the study of families with children. It is difficult to know which economic resources children command because only a small minority of them earn their own incomes, and it is impossible for them (and probably for their parents) to estimate how large a proportion of the household income goes to them. Some get regular (weekly or monthly) allowances from their parents, or they receive the child allowance, which is a universal benefit in Sweden of around €110 per month per child. Some instead get money when they need it, some work regularly, while others hardly have any cash at all. Even if it is important to measure children's economy directly, it is not possible to do so with great precision. We will therefore use several complementary measures. Each has some weakness, but we believe that together they give insights about trends in children's economic precarity that would not be possible without reports directly from children themselves.
Relative Poverty: Consumption and Participation
We begin with indicators of relative poverty among children. Questions about whether a responding young person can join their friends in taking part in events (participation) and whether they can afford to buy things that their friends have (consumption) both relate to the economic situation of children's most tangible reference group. These indicators thus capture the social dimension of poverty.
Comparable indicators are available from Child-ULF 2002-11, studying children aged 10-18, and Fig. 3 shows that 8-12 % in this group experienced economic problems with both participation and consumption. 4 Between 2003 and 2007, this proportion decreased to 6 %, but has since increased somewhat. The trends for participation and consumption diverged between 2007 and 2011, but the sample was quite small in 2011, making the estimates for the end of the period less reliable.
The take-home message from Fig. 3 is that children's relative poverty fluctuated during the 2000s but without any discernible trend. During the same time, as we noted in Fig. 2, the relative income poverty among families with children increased rapidly. 5 The theory behind the relative poverty measure says that when low-income earners are amassed at levels far away from the median income earners, their problems with living a life on par with others will grow. The fact that this has not happened for youths is therefore notable, and raises the question why the trends are not aligned.
There are at least five possible answers. First, parents (and perhaps grandparents) may compensate their children economically, so when the family economy falls behind they allocate relatively more resources to their children. Secondly, the social consequences of increasing poverty may be muted because children's aspiration levels follow their dwindling resources. Thirdly, if the reference group for children is not the "median kids" but their equally poor schoolmates or neighbours, residential and school segregation may be a mitigating factor. Fourthly, the validity of the relative income poverty measure may be wanting, as the experience of poverty may simply be picked up better with a measure reflecting purchasing power. Fifthly, there is a risk for a lag in the social consequences of increased relative poverty, meaning that it is vital to follow child outcomes for some time after an upturn in relative income poverty rates.

[Fig. 3 legend] Consumption & participation poor. *Has several times during the last 6 months been unable to afford to do something with friends that one wanted to do. **Has several times during the last 6 months been unable to afford something that one wanted to buy and that many of the same age have. Data: Child-ULF. In 2006, there was a change in the ULF data collection method (from interview to phone), the first year only for half of the sample. This break in the curves is shown by different markers.
Material and Economic Deprivation
Another important aspect of children's economy is material possessions, though these are notoriously difficult to measure, in particular as we have no information on brands or prices, just products (e.g., a mobile phone rather than an iPhone). We choose to show the possessions by item, which helps in understanding the trends, as the necessity, price and preference for different items change over time, and not necessarily at the same pace. Because our data cover only a few items, we caution against interpreting them as indicators of a complete set of material possessions. 6 Previous research has established that Swedish children enjoy a high material wellbeing in an international perspective (UNICEF 2012; Bradshaw and Richardson 2009), and there is no sign of deterioration during the period we study (Fig. 4). Around 90 % of 10-18-year-olds have their own room, a proportion that has been constant over time. More than half have their own TV, and the technical development is reflected in the fact that the possession of a mobile phone and personal computer has grown tremendously. 7 An indicator of economic deprivation that we used at the household level is cash margin, which is also available as a child-reported measure based on a question of whether the child can raise a sum of money (around €10) on short notice if necessary. Slightly more than 10 % lack this possibility, a proportion that naturally is highly age-dependent (not shown). However, this proportion is more or less stable over time.
Access to an own room and a private TV is fairly constant over time, but there is an enormous increase in the proportion of those having their own mobile phone or their own computer. Even if conclusions must be tentative, this is in line with the growth of real incomes during the period, with a concomitant decrease of absolute poverty, but it also reflects the increasing use of and perceived need for mobile phones and computers in everyday life.
Children's Access to Own Money
Not having access to own money is an important aspect of poverty, especially for children old enough to be out on their own, without parents. Recurrent incomes most often come from parents, but older children also work during weekends, in the summer, or when school is out. For our trend analysis, we draw on questions to children about their incomes, divided into pocket money/allowances and own work income. It is difficult to know how well we cover their total net income, both because young people acquire money from other sources (e.g., grandparents) and because we do not know whether they also have to cover costs (e.g., sharing their work income with parents). Neither can we assess to which extent children who lack financial resources instead get material resources (something which would reduce their deprivation but still not be equivalent to money in terms of freedom of action). However, while conclusions about levels of income are prone to such measurement problems, the study of trends in incomes should be more reliable as measurement problems are unlikely to vary much over time.

6 The problem of finding a comprehensive set of indicators for constructing a valid index of material deprivation could be overcome by a thorough inventory of possessions (cf. Gordon et al. 2013; Gross-Manos 2015), but the multipurpose datasets that we use do not have such an inventory. The items included have been chosen because they are central in children's lives, have significant economic value, and are relatively reliably measured, but they do not form an exhaustive set of possessions.
7 The slight recent decrease in the proportion with their own TV in the Child-ULF data can probably be accounted for by the increasing availability of television via computers.
More than 80 % of 10-18-year-olds have regular incomes from their parents, a proportion that has been more or less constant during the period 2001 to 2008 (the last year the question was asked). The average sum was also relatively constant, around €40 per month. In the category without regular incomes, most claim that they get money from their parents when needed (an "on-demand economy"), but we were not able to ascertain how much money this entails. Both the regularity and size of the incomes from parents are strongly dependent on the child's age. The small group (3-5 %) that report that they never get any money from their parents, for example, is dominated by younger children. Figure 5 also reveals that while the level of income is age-specific, there is no change over time in either age group. The oldest (16-18 years old) receive around €80 per month and the youngest (10-12 years old) around €11 throughout the period. It is a characteristic of the Swedish labour market that virtually no one up to the age of 18 is gainfully employed, although, in theory, children are allowed to leave school after the age of 16. Getting money from extra work is an alternative for older children. 8 Around 16 % of 16-18-year-olds work every week during school terms, 13 % some time during the month, and 70 % hardly work at all (results not shown). It is more common to work during summer breaks: around half of all 16-18-year-olds have done so during the most recent break. Important for our purposes here, this extent of extra work for pay has not changed during the period 2001-2011.
Poverty Among Children: Vulnerable Groups
In the analyses of household poverty, we identified as particularly vulnerable groups children residing with only one parent and children of immigrants. Is this pattern reflected when measuring poverty directly at the child level? The answer is yes, but not unambiguously so.
As demonstrated in the upper graph of Fig. 6, children who have experienced a parental separation clearly have higher risks of facing problems with consumption or participation, and they also lack a cash margin somewhat more often. It is interesting to note that children in reconstituted families have the same degree of economic problems as children of single parents, despite the fact that the household economy among children with stepparents is almost identical to that in families with two original parents. This is a strong indication that household income as a liquid resource is not shared with step-children to the same extent as with biological children. When it comes to material standards, however, there is no systematic disadvantage for children in reconstituted families. The lower graph in Fig. 6 describes differences in economic hardship between immigrants' and the majority's offspring (excluding children with "mixed" origins). Both consumption and participation problems are relatively equally shared, even though children of immigrants more often have both these problems. Material standards are fairly similar too, with one striking exception: more than 30 % of children of immigrants lack their own room, while this is true for less than 4 % of children of Swedish-born parents. Another difference (not shown) pertains to the regular income from parents, in whatever form: children of immigrants receive such money flows less often. In addition, they work less often and therefore have less earned money themselves.
The trends over time in child-reported economic conditions appear to be roughly similar for children in different family types and between immigrants and natives, but our data are somewhat too sparse to draw any firm conclusion about these sub-groups.
Household Poverty and Child Economic Resources
We studied the economic situation at the household and child levels, respectively, and it is quite natural to expect a rather strong association between them. Figure 7 tests this by comparing the economic deprivation of parents and children for the period 2008-2011. Indeed, if parents lack a cash margin, their children are twice as likely as other children to do so as well (15 % compared to 7.5 %), and the differences are large for the relative child poverty measures. However, we must note that the associations between parent and child deprivation are far from perfect. In fact, a majority of children of economically deprived parents do not report problems, and as many as 85 % have a cash margin in spite of their parents' economic problems. Again, we see that the material standard is rather high, and the situation for children of poor parents does not differ much in the measured respects from other children's, with one important exception: the former less often have their own room.
Parental and Child Poverty as Predictors of Child Outcomes in Other Domains
While economic resources are by themselves an important part of the level of living, much of the research interest in child poverty is motivated by the assumption that poverty has negative consequences in other domains. Traditionally, research on the consequences of poverty focused on long-term outcomes such as educational attainment, nest-leaving and teenage pregnancy (e.g., Duncan and Brooks-Gunn 1997; Mayer 1997), but there is now a growing recognition of the importance of studying outcomes for children while they are children ("wellbeing" in contrast to the traditional "well-becoming"). If poverty impedes children's chances of making and keeping friends or taking part in social activities, or if it is detrimental to their psychological wellbeing, then this is unquestionably sufficient for child poverty to be regarded as a serious societal problem, no matter whether it has long-term consequences or not (cf. Ben Arieh et al. 2001).
Taking this "here-and-now" perspective on child poverty, an important question is which of household-level and child-level poverty is the more powerful predictor of negative outcomes in other domains. Main and Bradshaw (2012) point out that the surprisingly weak associations found between poverty and child wellbeing in the previous literature may suggest that parent-reported poverty is not a valid representation of child poverty, and their results reveal that child-reported poverty is indeed more strongly related to wellbeing outcomes. There is as yet only a small but growing literature on the relationship between child-reported poverty and wellbeing in other domains (Jonsson and Östberg 2004; Olsson 2007; Bradshaw and Main 2012; Main 2014; Gross-Manos 2015), and the findings generally show substantial associations with wellbeing in several domains of life.
In Table 3, we compare poor and non-poor children in terms of their participation in leisure time activities, relations with friends, health, health-related behaviour, safety, and crowded housing (variables described in Appendix 3). We define poverty as (i) household-level poverty, measured as parent-reported lack of cash margin 9 and (ii) child-level poverty, measured as child-reported lack of own cash margin. Note that it is only the poverty variable that differs, while the child outcome variables are identical in the two models, with all except two being child reported.
As in previous studies in the area, what we can study here are associations rather than causal effects. Nevertheless, in order to exclude some obvious alternative explanations, we control statistically for a number of background variables, and adjusted differences between poor and non-poor are shown in the rightmost column for each poverty definition in Table 3. These figures can thus not be accounted for by compositional differences between the poor and non-poor in terms of gender, age, residential region, parents' education, parents' health, family type, or immigrant background. 10 Table 3 reveals systematic differences across most domains: poor children, regardless of whether we define the group in terms of their own or their household's conditions, report less participation in sport activities, 11 perceive their neighbourhoods as more unsafe at night, live in more crowded homes, have worse health-related behaviours, and report more bullying and worse psychological and somatic health. Even if the sizes of some of these differences are not alarming, they remind us that everyday experiences of poor youth could be burdensome.
The pattern of association between poverty and child outcomes is similar regardless of whether we use parent or child reports on poverty, and associations are of roughly similar size. There are, however, some noteworthy differences such as clearly stronger associations with health problems when using child-reported poverty. This may suggest that the child's own economy is particularly important for health outcomes, but we cannot rule out the possibility that children with more health problems have a more negative outlook on life and over-report economic problems. 12 We also see a clearly higher level of reported bullying among poor children when defining poverty by their own economy, but no correspondingly large differences in the friendship variables. Parent-reported poverty is more strongly associated with overcrowding and perceived problems in the neighbourhood, which seems natural because parental economy determines housing choices (but also because these variables are the only parent-reported outcomes).
Our results for child-reported poverty are in line with those from studies from other countries, which also find clear effects on various wellbeing dimensions (Bradshaw and Main 2012; Main 2014; Gross-Manos 2015), but in contrast to Bradshaw and Main (2012) and Main (2014), we find that parent-reported poverty is also rather strongly associated with child wellbeing. This difference can potentially be explained by differences in poverty definitions and in model specification, as we see no theoretical reasons that parental poverty should be more strongly related to child outcomes in Sweden than in England.
Causal effects of economic resources on child outcomes are notoriously difficult to estimate, but previous results suggest that they may not be as severe as is often thought (Dahl and Lochner 2012; Duncan et al. 2011; Mayer 1997). Our aim here is primarily to see whether parent- and child-reported poverty are predictive of problems in different domains, and our analysis does not permit causal conclusions. We believe, however, that for housing- and neighbourhood-related problems, such as overcrowding and safety, a causal effect is plausible as economic resources are fundamental to where and how families live. Outcomes having to do with social relations and participation are also likely to be causally affected by poverty to some extent (cf. Mood and Jonsson 2015, who find a likely causal effect on such outcomes among adults). However, for health and health-related behaviour we find it more difficult to draw conclusions. Maybe children in poorer families exercise less often because it is more difficult to find an attractive form of training on a limited budget, but why do they skip (the free) lunch more often and smoke more? Here, we cannot rule out that economically vulnerable families tend to have other characteristics that we do not capture in our analyses, and which, potentially, are the real causes of behaviour that may affect children's health.

10 In our models we control for variables that are likely to be exogenous, in other words come prior to poverty, but family type and health may to some extent be endogenous (mediators), which would result in an underestimation of the effect of poverty on outcomes. The poor/non-poor differences are in most cases somewhat higher when not adjusting for these variables. We use OLS regressions for metric/index outcomes and linear probability models (LPM) for dichotomous outcomes because they, in contrast to logistic models, give estimates that are comparable across groups and interpretable in percentage-unit terms (Mood 2010).

11 However, only a small group of poor children report not being able to afford some activity that they would like to do, meaning that the economic situation may not be the direct cause for poorer children not to participate. A downward adjustment of aspiration levels among poor children may contribute to the small difference to the non-poor.

12 Although we use the most objective indicator in our set, cash margin, there is still a subjective element in the response, which may be related to some of the outcomes. When we instead use the child indicators of relative poverty (participation and consumption) we get stronger associations for some outcomes, but this may be because of reverse causality or some common underlying factor.
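As a concrete illustration of the estimation strategy described in footnote 10, the sketch below fits a linear probability model with statsmodels on synthetic data. The variable names and the data-generating numbers are invented for illustration; they are not the Child-ULF variables.

```python
# OLS on a 0/1 outcome is a linear probability model (LPM), whose
# coefficients read as percentage-point gaps between poor and non-poor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "poor": rng.integers(0, 2, n),   # e.g., child-reported lack of cash margin
    "girl": rng.integers(0, 2, n),   # gender coded 0/1
    "age": rng.integers(10, 19, n),  # age in years
})
# Hypothetical dichotomous outcome, e.g., "bullied at least once a week"
df["bullied"] = (rng.random(n) < 0.10 + 0.05 * df["poor"]).astype(int)

# LPM with controls, including the gender-by-age interaction mentioned
# in the appendix; robust (HC1) standard errors
lpm = smf.ols("bullied ~ poor + girl + age + girl:age", data=df).fit(cov_type="HC1")
print(lpm.params["poor"])  # adjusted poor/non-poor gap in probability units
```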
Conclusions and Discussion
We use family-based indicators of household poverty as well as child-reported economic resources and problems to unravel child poverty trends in Sweden based on different measures. Our results show that absolute household income poverty (minimum income standard) increased with the recession from 1991 to 1996, and that increasing real incomes reduced poverty among families with children between 2000 and 2006, after which it has remained largely unchanged. While it took around 10 years for poverty rates to return to pre-recession levels following the 1990s macroeconomic collapse, Swedish children did not suffer visibly from the international recession in 2008-2009. Material deprivation and social assistance rates followed these absolute child poverty trends fairly closely, and these are all counter-cyclical in relation to the macroeconomy. However, increasing income inequality has led to growing rates of relative income poverty since the mid-1990s. A worry is that real income growth, which for a long time offset increasing income inequality for the poorest families, has stalled; the period after roughly 2006 is therefore characterized by poor children lagging further and further behind those of more economically fortunate backgrounds, without experiencing any improvement in purchasing power.
However, when we instead turn our attention to child poverty as measured by children's own reports, where we have data for the 2000-2011 period, we could not find any increase in relative poverty that matches the pattern for household poverty. This contradictory pattern may be a consequence of poor parents distributing relatively more of the household income to their children in times of economic duress, as suggested by previous findings (Middleton et al. 1997;Ridge 2002), but it may also be a sign that relative income poverty lacks validity as an indicator of poverty. One mechanism that may slow down negative effects of relative poverty is socioeconomic segregation, which leads poor children to live near, go to school with, and compare themselves predominantly with other poor children.
All in all, we find few negative trends in child poverty as reported by children themselves for the period 2000-2011. This is true for relative poverty, for material possessions, for cash margin, and for income from pocket money and from work for pay. Actually, we find very few trends at all. It appears that, during the period we study, children's economy was hardly affected by changes in either relative or absolute income poverty trends at the household level, with the exception of an improvement in some indicators of material possessions.
The overall lack of trends in child-reported poverty may raise doubts about the validity of child reports as measures of economic hardship, but our results clearly show that child-reported poverty is associated with lower quality of life in a variety of domains, and that the pattern is overall similar to the one based on household poverty indicators. We believe that the most reasonable interpretation of our results in this respect is that household- and child-reported poverty capture mainly similar but also partly different dimensions of economic hardship, both being of relevance for children's everyday lives.
Our results suggest that the trends in both absolute and relative household poverty, and their relation to children's outcomes, should be followed particularly closely in countries that, like Sweden, have experienced macroeconomic stagnation and growing income inequality. Although we do not register any signs of increasing poverty from our child reports, it may be that negative consequences for children of increasing economic differences emerge only gradually. If real incomes around the median but not the bottom of the income distribution continue to rise, there is a risk that the normal consumption level among children escalates (e.g., in terms of leisure time activities, electronic gadgets or fashion clothing), so that the poorest can no longer keep up with their more fortunate friends and schoolmates.
In Sweden, child poverty measured at the household level is highly dependent on the extent to which a household relies on benefits rather than market incomes. This means that (especially newly arrived) immigrants and single parents are vulnerable groups, something that has been shown repeatedly in one-shot cross-sectional data (e.g., Smeeding et al. 2009; Eurostat 2015). We found that children in these groups experienced similar poverty trends as other children, but also that they are more sensitive to macroeconomic up- and downturns: every third child of a single parent and almost every second child of immigrant background fell below the (absolute) poverty line during the last great recession in Sweden, in the mid-1990s. However, tax and family policy are important in determining child poverty rates, so targeting these groups for poverty reduction is possible if the political will can be mustered.
The UN Convention on the Rights of the Child and the associated demands to monitor child wellbeing have been an important force behind putting child poverty on the agenda. Our analyses show the usefulness of studying child poverty using different poverty definitions. There will never be one single poverty measure that captures all of the dimensions that poverty entails, and trends may differ according to the measure used. The household poverty measures used here (absolute income poverty, social assistance, relative income poverty, and economic and material deprivation) are informative and relatively simple to follow over time for monitoring purposes. They are, however, not sufficient, but must be complemented with nationally representative data on children's conditions as reported by children themselves. Such information is crucial for detecting child-relevant trends, indispensable for evaluating the consequences of change in household poverty for child outcomes, and necessary for the further study of the relation between macro- and household economy and child wellbeing.
Collection and analysis of child-reported data on children's own economic resources is a field that holds great promise for developing our understanding of poverty. It can also make child poverty visible in a way that is both relevant for the target group and easy for policy makers and the general public to comprehend, as it portrays poverty in terms of the lack of tangible everyday resources and opportunities that most people can relate to.
The Household Finances Survey (HEK, Statistics Sweden) is an annual Swedish survey running since 1975. The earliest comparable information in HEK pertains to 1993. However, in some cases, one can use a specially calculated value for 1991, and in these cases the value for 1992 is interpolated as the average of 1991 and 1993. Data is collected by phone interviews covering, for example, household composition, housing, housing costs, childcare, employment, working time, occupation and medical expenses. The survey data are matched to register data on, for example, incomes, benefits and taxes. The population consists of Swedish residents 18 years or older during the survey year, excluding people in institutions or in military service. Data is collected for the sampled persons and those in his/her household, and the sample size has varied between 10,000 and 19,000 households.

The EU Statistics on Income and Living Conditions (EU-SILC) covers not only EU member states (such as Sweden, Czech Republic, Germany, Hungary and Austria) but also Iceland, Croatia, Norway, Switzerland and Turkey. EU-SILC is representative of the population in each country and collects information on the respondents' social and economic situation, such as income, deprivation, social exclusion and standard of living. The survey contains both cross-sectional and longitudinal data, and data is collected for individuals but in some cases also for households. The Swedish data is collected by Statistics Sweden.
Social Assistance
Social Assistance (SA) is a means-tested benefit given to those whose own incomes are insufficient for an adequate living standard. The income limits for SA are set for different household types each calendar year by the government, and should reflect reasonable costs for food, clothes, shoes, household items, free-time activities/equipment, health, hygiene, newspaper, TV and telephone. In addition, reasonable costs for housing, insurance, electricity, travel and union membership are assessed for each household. Costs for special needs, e.g., dental care, glasses, childcare and medicines, can also be covered. Our measure indicates whether the household has received SA some time during the year (irrespective of duration or volume) or not.
Household Income Measures
Median income is the income in the middle of the income distribution. Disposable income consists of incomes from work, capital, transfers and benefits, while taxes are subtracted.
Equivalized disposable income, or disposable income per consumption unit, adjusts the disposable income by an equivalence scale to reflect needs as estimated by household size and composition. The equivalence scale used in the Household Finances Survey (HEK) is:

1.00 one adult
1.51 two adults
0.52 first child 0-18
0.42 later children 0-18
0.60 children over 19 and other adults in the household

Real income, reflecting purchasing power, is nominal income corrected for inflation (per the consumer price index, CPI) in order to be comparable over time.
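To make the scale concrete, here is a minimal sketch of how disposable income would be converted into equivalized income with the weights above; the example household and income figure are invented.

```python
# Equivalence scale from the HEK survey (weights as listed above);
# the scale only defines weights for one or two adults.
def consumption_units(n_adults, n_children_0_18=0, n_other_adults=0):
    units = 1.00 if n_adults == 1 else 1.51
    if n_children_0_18 > 0:
        units += 0.52 + 0.42 * (n_children_0_18 - 1)  # first vs. later children
    units += 0.60 * n_other_adults  # children over 19 and other adults
    return units

def equivalized_income(disposable_income, **household):
    return disposable_income / consumption_units(**household)

# A single parent with two children aged 0-18 and €24,000 disposable income:
print(equivalized_income(24000, n_adults=1, n_children_0_18=2))  # 24000 / 1.94
```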
Household Income Poverty
Poverty measure definitions:

Low income standard (absolute poverty): The household's equivalized disposable income is below the threshold for low income standard. 13

Relative poverty: The household's equivalized disposable income is below 60 % (or 50 %) of the median income in the country.
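The two definitions translate directly into flags on equivalized income. A minimal sketch follows; the income vector and the low-income-standard threshold are placeholders, not the official Swedish thresholds.

```python
import numpy as np

def poverty_flags(eq_income, low_income_standard, rel_cutoff=0.6):
    """Absolute poverty: below the low income standard threshold.
    Relative poverty: below 60 % (or 50 %) of the median equivalized income.
    Here the median is taken over the array passed in; in practice it
    would be the national median."""
    eq_income = np.asarray(eq_income, dtype=float)
    absolute = eq_income < low_income_standard
    relative = eq_income < rel_cutoff * np.median(eq_income)
    return absolute, relative

incomes = np.array([8_000, 14_000, 21_000, 26_000, 40_000])
abs_poor, rel_poor = poverty_flags(incomes, low_income_standard=12_000)
print(abs_poor.mean(), rel_poor.mean())  # poverty rates under each definition
```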
…spend time with friends in some other place (e.g., outside)
…participate in some organized sports activity

For each item, those who report doing it at least once a week are coded 1, others are coded 0. Using the three variables measuring time with friends, we also construct one variable indicating whether one spends time with friends at all during a normal week (regardless of whether it is at home, in their home or in some other place).
The child is asked how often, during the last 6 months, they have…
…skipped breakfast. If at least once a week, it is coded as 1, otherwise 0.
…skipped lunch. If at least once a week, it is coded as 1, otherwise 0.
…exercised. If at least once a week, it is coded as 1, otherwise 0.
The parent is asked if vandalization, violence or theft is common in their neighbourhood. If yes, it is coded 1, otherwise 0.
The parent is asked how many persons there are in the household and how many rooms in the home, and this information is used to calculate the number of persons per room.
Psychological complaints is an index based on child responses to the following questions: Response options: Matches exactly, matches roughly, matches poorly, or does not match at all. Responses are coded 0 (no problem) to 3 (big problem) and are summed to an index with a scale of 0-24; minimum observed value=0; maximum observed value=23; mean=6.4; standard deviation=3.6.
Somatic complaints is an index based on child responses to the following questions: The past 6 months, how often have you had the following? Response options: Every day, several times a week, once a week, a couple of times a month, or more seldom. Responses are coded 0 (no problem) to 4 (big problem) and are summed to an index with a scale of 0-16; minimum observed value=0; maximum observed value=16; mean=4.3; standard deviation=2.9.
Bullying is an index based on child responses to the following question: How often do you usually experience the following things in school?
- Other students accuse you of things you have not done or things you cannot help
- No one wants to be with you
- Other students show they do not like you somehow, for example by teasing you or whispering or joking about you
- One or more students hit you or hurt you in some way

Response options: Almost every day, at least once a week, at least once a month, once in a while, and never. Responses are coded 0 (no problem) to 4 (big problem) and are summed to an index with a scale of 0-16; minimum observed value = 0; maximum observed value = 16; mean = 1.5; standard deviation = 2.2.
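All three indices follow the same recipe: code each item response on the stated scale and sum. A small sketch for the bullying index (0-16 scale), with an invented response pattern:

```python
# Bullying index: four items, each coded 0 ("never") to 4 ("almost every
# day"), summed to a 0-16 scale as described above.
CODES = {
    "never": 0,
    "once in a while": 1,
    "at least once a month": 2,
    "at least once a week": 3,
    "almost every day": 4,
}

def bullying_index(responses):
    """responses: list of four response strings, one per item."""
    assert len(responses) == 4
    return sum(CODES[r] for r in responses)

print(bullying_index(["never", "once in a while", "never", "at least once a week"]))  # 4
```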
Control Variables
Parental health is the self-rated health of the responding parent coded into three dummy variables (good, bad and in between).

Immigrant background is defined as having two parents born abroad (for children in two-parent families), or, if one lives with a single parent, as this parent being born abroad.
Region of residence is classified into seven areas (so-called h-regions: Stockholm; Gothenburg; Malmö; other larger cities; northern remote areas; other southern; other northern).

Parental education is the responding parent's highest out of seven levels of education.
Gender is coded 0/1, and age is coded in years. Their interaction is also included in the models.
Family type is based on information from the parent and is coded into three categories: (1) Child lives with both biological/adoptive parents, (2) Child lives primarily in single parent household, (3) Child lives primarily in reconstituted family.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-04-03T05:55:51.402Z | 2015-09-23T00:00:00.000 | {
"year": 2015,
"sha1": "378ee618b6407b85a401bb2c348299d386870d21",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12187-015-9337-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "378ee618b6407b85a401bb2c348299d386870d21",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Medicine"
]
} |
6374727 | pes2o/s2orc | v3-fos-license | Molecular reordering processes on ice (0001) surfaces from long timescale simulations
We report results of long timescale adaptive kinetic Monte Carlo simulations aimed at identifying possible molecular reordering processes on both proton-disordered and ordered (Fletcher) basal plane (0001) surfaces of hexagonal ice. The simulations are based on a force field for flexible molecules and span a time interval of up to 50 µs at a temperature of 100 K, which represents a lower bound to the temperature range of Earth's atmosphere. Additional calculations using both density functional theory and an ab initio based polarizable potential function are performed to test and refine the force field predictions. Several distinct processes are found to occur readily even at this low temperature, including concerted reorientation (flipping) of neighboring surface molecules, which changes the pattern of dangling H-atoms, and the formation of interstitial defects by the downwards motion of upper-bilayer molecules. On the proton-disordered surface, one major surface roughening process is observed that significantly disrupts the crystalline structure. Despite much longer simulation time, such roughening processes are not observed on the highly ordered Fletcher surface, which is energetically more stable because of smaller repulsive interaction between neighboring dangling H-atoms. However, a more localized process takes place on the Fletcher surface involving a surface molecule transiently leaving its lattice site. The flipping process provides a facile pathway of increasing proton order and stabilizing the surface, supporting a predominantly Fletcher-like ordering of low-temperature ice surfaces, but our simulations also show that proton-disordered patches on the surface may induce significant local reconstructions. Further, a subset of the molecules on the Fletcher surface are susceptible to forming interstitial defects which might provide active sites for various chemical reactions in the atmosphere.
I. INTRODUCTION
The surface of water ice plays a key role in atmospheric sciences. It moderates the temperature on Earth through reflection of sunlight, traps trace gases and catalyzes various chemical reactions that are inefficient in the gas phase. An important example of the latter is the production of active chlorine species that destroy ozone molecules in polar stratospheric regions 1 . The molecular-level structure and dynamics at ice surfaces are, however, difficult to investigate experimentally and a range of fundamental questions remains unanswered.
At the same time, classical dynamics simulations are challenging because of the need for an accurate description of the molecular interactions and the need to capture processes that take place on a timescale that is very long compared with vibrational motion.
Ice Ih is the most common ice phase on Earth. In this phase, the oxygen atoms of the water molecules form a hexagonal, tetrahedrally ordered lattice. The H2O molecules can have a random orientation as long as the ice rules are obeyed 2: every H2O donates and receives two hydrogen (H-) bonds, and there is one H-atom between each nearest-neighbor oxygen-oxygen pair. At moderate cooling below the freezing point, i.e., from 273 K down to about 240 K, the ice surface is characterized by a disordered quasi-liquid layer, but the exact temperature at which this layer forms and how its thickness depends on temperature remain to be determined 3. Nonetheless, timescales involved in this temperature range are within reach of conventional simulation techniques, and classical dynamics simulations using force fields have provided insights into such disordering and pre-melting phenomena [4][5][6]. At lower temperature, where polar stratospheric clouds form, below ∼200 K, the ice surface is more rigid, but molecules in the top bilayer are still significantly more mobile than molecules in the crystal. Laser-induced thermal desorption measurements on isotopically substituted ice films have indicated a mean residence time in the range of 10^-3 s to 10^-6 s at 180-210 K before surface molecules diffuse into the crystal 7. At even lower temperature, the surface structure and dynamics of ice are relevant for understanding chemical processes in polar mesospheric clouds that form at temperatures down to around 140 K. Ice is also abundant in the interstellar medium, where the temperature can be as low as ∼10 K. Although the majority of ice under such conditions is believed to be amorphous, observations suggest that crystalline ice is also present 8.
The molecular-level surface structure of ice at low temperature is still a subject of debate.
Low-energy electron diffraction (LEED) 9 and helium atom scattering experiments 10 have given strong support for a full-bilayer termination, i.e., an intact surface bilayer, of the (0001) surface below 100 K, although the LEED measurements at 90 K indicated sufficiently strong vibrational motions of the outermost molecules to render them undetectable 9. In the present work, we observe, among other processes, a transient event on the Fletcher surface in which a surface molecule leaves its lattice site before returning to its crystalline position. We then compare the energetics of the reordering processes observed using TIP4P/2005f to more advanced approaches, i.e., DFT as well as the ab initio-based single-center multipole expansion (SCME) water model 21, to test and refine the force field predictions.
This paper is organized as follows. In the next section, the ice surfaces and interaction potentials used in this study are described along with details of the AKMC method. We then present our results on both proton-disordered and ordered Fletcher surfaces, compare the force field predictions to more elaborate interaction models, and finally give concluding remarks in the final section.

II. METHODS

A. Ice surface models

The c-axis was chosen along the z-direction and periodic boundary conditions were applied in the x- and y-directions to mimic an infinite surface. The ionic and simulation cell degrees of freedom were then optimized using the TIP4P/2005f force field 20 under the constraint that the ratio between the side lengths in the x- and y-directions remained constant (a and c are allowed to vary). The resulting ratio between c and a is 1.738, which is 6.4 % larger than the ratio for a perfect hcp lattice and 6.8 % larger than the experimentally observed ratio for ice Ih 23. For the energy-minimized structure, the lowest two bilayers were then frozen in the crystal configuration and the surface was created by adding a vacuum layer in the z-direction. This method generated an ice Ih substrate with a disordered d-H pattern.
To create a Fletcher surface, the same method was applied, but with an initial crystal sample with a Fletcher-like H-bond pattern between two of the bilayers. The d-H patterns of both substrates are shown in Fig. 1. The calculated surface energy for the top bilayer is 8.40 meV/Å^2 for the Fletcher surface and 9.13 meV/Å^2 for the disordered surface, calculated as

$\gamma = \frac{1}{A} \sum_{i \in \text{top bilayer}} \left( E_\mathrm{coh} - E_{b,i} \right)$,

where the sum runs over the binding energies $E_{b,i}$ of all molecules in the top bilayer, $E_\mathrm{coh}$ is the cohesive energy of the crystal, 635 meV, and A is the surface area, 23.08 Å × 22.21 Å.
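A short numerical sketch of this bookkeeping; the per-molecule binding energies and the molecule count are placeholders chosen only to land near the Fletcher value quoted above:

```python
# Surface energy per unit area: sum of (E_coh - E_b,i) over the top
# bilayer, divided by the surface area (energies in meV, lengths in angstrom).
E_COH = 635.0                # cohesive energy of the crystal, meV
AREA = 23.08 * 22.21         # surface area, A^2

def surface_energy(binding_energies_meV):
    return sum(E_COH - eb for eb in binding_energies_meV) / AREA

# Placeholder: 96 top-bilayer molecules, each bound ~45 meV weaker than bulk
print(surface_energy([590.0] * 96))  # ~8.4 meV/A^2, close to the Fletcher value
```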
B. Adaptive kinetic Monte Carlo
To sample configuration space, the adaptive kinetic Monte Carlo method 18,24 was used as implemented in the EON software 25. This involves sampling potential energy minima by traversing through saddle points (SPs) on the potential energy surface according to the kinetic Monte Carlo (KMC) algorithm. To locate SPs, the iterative minimum-mode following method was applied [26][27][28]. This method uses the lowest-eigenvalue mode of the Hessian matrix, which was, in this case, estimated by the Lanczos method 29,30. The search for an SP starts by slightly displacing the system at random from its initial potential energy minimum, referred to here as the reactant state. Then, the force component parallel to the minimum-mode is inverted and a climb on the potential energy surface up to an SP is conducted by applying an ordinary minimization algorithm. The two minima separated by the located SP, the reactant and product states, are determined by displacing the system along and in the opposite direction of the minimum-mode at the SP, followed by minimization. Searches for SPs and minima were considered converged when the maximum force acting on any atom decreased below 1 meV/Å. Transition rates through the located SPs were estimated using harmonic transition state theory (HTST):

$k^\mathrm{HTST} = \frac{\prod_{i=1}^{D} \nu_{R,i}}{\prod_{i=1}^{D-1} \nu_{SP,i}} \exp\!\left( -\frac{E_{SP} - E_R}{k_B T} \right)$,

where $E_{SP}$ is the energy of the SP, $E_R$ is the energy of the reactant configuration, $\nu_{SP,i}$ and $\nu_{R,i}$ are the frequencies of the vibrational modes at the SP configuration (excluding the unstable mode) and at the reactant configuration, respectively, and D is the number of degrees of freedom. For a given reactant state, a successful SP search that identifies a unique SP connecting to a product state is termed a transition and is entered into the event table of that state. When sufficiently many transitions for a reactant state have been found and the event table is considered complete enough, the simulation proceeds by picking one of the transitions at random with probability proportional to their relative rates. Time is then advanced by

$\Delta t = -\frac{\ln \mu}{\sum_j k_j}$,

where µ is a random number on the interval (0, 1] and j runs over all distinct transitions in the event table, and the system is moved to the product state.
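The HTST rate and the KMC time advance combine into a simple rejection-free KMC step. The sketch below is schematic and not the EON implementation; the event list and the example numbers are assumptions for illustration.

```python
import math
import random

KB = 8.617333e-5  # Boltzmann constant, eV/K

def htst_rate(prefactor, e_sp, e_r, T):
    """HTST rate; prefactor is the ratio of the frequency products (s^-1),
    energies in eV."""
    return prefactor * math.exp(-(e_sp - e_r) / (KB * T))

def kmc_step(events, T):
    """events: list of (prefactor, E_SP, E_R); returns (chosen index, dt)."""
    rates = [htst_rate(nu, esp, er, T) for nu, esp, er in events]
    total = sum(rates)
    # pick event j with probability k_j / total
    x, acc = random.random() * total, 0.0
    for j, k in enumerate(rates):
        acc += k
        if x < acc:
            break
    mu = 1.0 - random.random()              # mu on (0, 1]
    dt = -math.log(mu) / total              # Delta t = -ln(mu) / sum_j k_j
    return j, dt

# Interstitial formation at 100 K: barrier 130 meV, prefactor 1.3e13 s^-1
print(htst_rate(1.3e13, 0.130, 0.0, 100.0))  # ~3.7e6 s^-1, sub-microsecond events
```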
The initial displacements to start the SP searches were made by rotating and translating randomly selected surface molecules. A scheme for grouping together states connected by fast transitions was applied 32,33. The temperature was set to 100 K in the AKMC simulation, but the energy barrier and pre-exponential factor for each transition were saved and could in principle be used to simulate the system at other temperatures within the HTST approximation.
C. Interaction potentials
In all the AKMC simulations, the inter-and intramolecular interactions were modeled with the TIP4P/2005f potential 20 , which is a flexible version of the TIP4P/2005 potential 34 .
The system is subject to periodic boundary conditions and all interactions are smoothly truncated at distances between 9 and 10 Å, based on the separation between the centers of mass of the molecules. SCME predicts the equilibrium density of ice to be within 2 % of experiments and, furthermore, accurately reproduces the experimental cohesive energy 21.
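A smooth truncation of this kind can be realized with a switching function that takes a pair interaction continuously to zero between 9 and 10 Å. The specific quintic polynomial below is an assumption for illustration, not the actual TIP4P/2005f truncation scheme:

```python
import numpy as np

def switching(r, r_on=9.0, r_off=10.0):
    """C2-continuous switch: 1 for r <= r_on, 0 for r >= r_off (angstrom)."""
    x = np.clip((r - r_on) / (r_off - r_on), 0.0, 1.0)
    return 1.0 - x**3 * (10.0 - 15.0 * x + 6.0 * x**2)  # quintic smoothstep

# A pair energy is multiplied by the switch so that energy and forces
# go smoothly to zero at the cutoff:
r = np.array([8.5, 9.25, 9.75, 10.5])
print(switching(r))  # [1.0, ~0.90, ~0.10, 0.0]
```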
When evaluating DFT and SCME energies of the stable structures resulting from the AKMC simulations, the size of the cell was first rescaled, since neither optPBE-vdW nor SCME predicts the same lattice parameters as TIP4P/2005f, and all ionic positions were then relaxed. The lattice parameters of SCME were reported in ref. 21. The DFT calculations were carried out with a code in which the electronic structure is described with a dual Gaussian and plane-wave basis set.
III. RESULTS
We now discuss the results of the AKMC simulations for the two models of the (0001) surface. Starting from both a disordered and a Fletcher surface, the long timescale dynamics at 100 K was simulated. For the disordered surface, the simulation was terminated after a surface roughening process occurred at 0.11 µs, by which point a total of 15330 unique transitions had been found.

A. Proton-disordered surface
Interstitial formation
The simplest type of process observed was the creation of an interstitial defect. Although several instances of this process were found in the event tables, only one was chosen as part of the simulated time evolution of the system. For the transition to take place, a barrier of 130 meV must be surpassed, and the resulting change in energy is a lowering of 60 meV. The pre-exponential factor for this process was calculated to be 1.3 · 10^13 s^-1.
It might be expected that the molecule undergoing the transition to become an interstitial is in an energetically unfavorable geometry, particularly since it has four nearest-neighbor d-H atoms.

Calculated energy differences between the reactant and product states of the interstitial formation process by DFT/optPBE-vdW and the polarizable SCME potential differ somewhat from the TIP4P/2005f value, as can be seen in Table I. SCME predicts qualitatively similar energetics with an energy lowering of 25 meV, while DFT predicts an energy increase of 64 meV. Since applying higher levels of theory for a system of this size would be prohibitively expensive, it can only be concluded at this point that the interstitial configuration is a local energy minimum with an energy close to that of the pristine surface. A more extensive discussion of the comparison between interaction potentials will be presented in section III D below.
Concerted reorientation
Another possible process on the disordered surface revealed by the AKMC simulation is a concerted reorientation of three molecules, which shifts a d-H from one surface molecule to another. This process is analogous to the "relay" mechanism observed by Bishop et al. 6 in classical dynamics simulations using the NvdE six-site potential at 230 K, where it occurred on a ns timescale. Here, we find that it can occur also at 100 K, on a µs timescale. As for the interstitial formation, several instances of this type of flipping process were found in the event tables constructed from the SP searches, but only one was chosen as part of the simulated time evolution of the system. This transition is depicted in Fig. 3. The process is exothermic by around 100 meV for SCME and by around 40 meV for DFT.
Surface roughening
The third structural change causing an energy lowering results from a series of 15 transitions starting from a surface with an interstitial defect, shown in Fig. 4, where the interstitial is marked by a white star. In a distinct difference from the previously discussed processes, this sequence of 15 transitions disrupts the crystalline order of the surface to a surprisingly large extent, and it is important to check its feasibility when the molecular interactions are described using the alternative methods, DFT/optPBE-vdW and SCME. As seen in Table I, SCME predicts energetics similar to TIP4P/2005f with an energy lowering of 177 meV, while DFT also predicts a stabilization, but of smaller magnitude, around 62 meV. This roughening of the hexagonal lattice thus appears to be a realistic process that might take place on proton-disordered ice surfaces at low temperature.
B. Fletcher surface
The AKMC simulation starting from a perfectly ordered Fletcher surface spanned a total of 49 µs and revealed two main types of processes. As in the case of the proton-disordered surface, interstitial defects can be created in single transitions, involving just one SP. Around 10% of the surface molecules shift frequently between the normal and interstitial positions.
A process where one molecule temporarily leaves its lattice site, forming a possible precursor state to a vacancy defect on the surface, was also observed.
Interstitial formation
One representative example of an interstitial formation transition is shown in Fig. 5. For the process to take place, the system must overcome an energy barrier of 160 meV, and the corresponding pre-exponential factor was calculated to be 3.0 · 10^13 s^-1. The barrier for interstitial formation is thus 30 meV larger than for the disordered surface, the pre-exponential factor is smaller by a factor of 3.7, and the stabilization of the surface is only marginal.

Considering the previously assumed stability of the Fletcher surface 12-14, the slightly lower energy of the resulting interstitial defect surface compared to the pristine surface is interesting. We find that SCME predicts a very small stabilization, -5 meV, similar to TIP4P/2005f, while DFT/optPBE-vdW predicts a destabilization of 145 meV. Taking into account also the interstitial formation on the disordered surface, there thus appears to be a consistent trend where the TIP4P/2005f force field favors the formation of interstitial defects in order to reduce the number of d-H, while DFT disfavors it and the ab initio-based SCME potential gives results that are in between the other two.
Vacancy formation
The second type of process on the Fletcher surface can be viewed as the first series of transitions towards the formation of a surface vacancy, shown in Fig. 6. This process starts with the rate-limiting transition, where a molecule (blue in the figure) rotates its d-H into the surface plane, forming an H-bond to a neighboring upper-bilayer molecule with a dangling oxygen lone-pair. The barrier for this transition is 170 meV, the pre-exponential factor is 3.6 · 10^13 s^-1, and the product state, b, is 135 meV less stable than the initial state. Through the following two transitions, b→c→d, the molecule leaves its lattice site and finds a more stable position, which is, however, 75 meV less stable than the initial state, a. In the resulting structure, one molecule has left its crystalline site, forming two heptamer rings and one pentamer ring. An analysis of the binding energy of the top-bilayer molecules reveals that the major contribution to the destabilization is the less favorable position of the moving molecule, which leaves a strongly bound site, E_b = 503 meV, to enter a less favorable site, E_b = 451 meV, in the final state. No further reordering transitions took place in the final state. Instead, the affected molecule returned to its initial crystalline position after around 10 ns.
For this process, the SCME and DFT/optPBE-vdW calculations predict a destabilization by 191 and 334 meV, respectively. Again, the more disordered nature of the final state leads to a larger energetic preference in DFT for the perfectly ordered Fletcher surface as compared to the TIP4P/2005f force field, with SCME giving results that are in between the other two.
C. Comparison of disordered and Fletcher surfaces
The reordering processes discussed above are those selected by the AKMC algorithm as part of the simulated time evolution of the two surfaces.
D. Comparison of interaction potentials
The AKMC simulations using the TIP4P/2005f force field identified several processes that can take place on the basal plane surface of ice Ih at low temperature, and the viability of these has been tested using more elaborate approaches, i.e., DFT/optPBE-vdW and the SCME ab initio-based model. Some conclusions can be drawn from this comparison. Firstly, it is worth highlighting that the product states found by AKMC with the TIP4P/2005f force field are also stable local energy minima when DFT/optPBE-vdW and the SCME potential are used to compute interatomic forces. After rescaling the simulation cell to conform to the DFT/optPBE-vdW or SCME equilibrium lattice constants and relaxing the atomic coordinates, it is found that the geometrical changes occurring in the transitions found by AKMC are similar for the three different descriptions of the molecular interactions, for instance in the two interstitial formation processes discussed above, on the disordered and Fletcher surfaces 36. In order to predict the abundance of interstitial defects on basal plane ice surfaces, it will be important to obtain more accurate theoretical values for these energy differences.
All methods applied here agree that the surface roughening process lowers the interaction energy of the proton-disordered surface.

Our observation of a series of transitions that lead to a large distortion of the hexagonal lattice of O-atoms at the proton-disordered surface is difficult to reconcile with existing helium atom scattering experimental data 10, which indicate a smooth basal plane ice surface at 90 K. The initial proton-disordered surface, constructed with a commonly used method for generating proton-disordered ice simulation cells, was metastable with respect to a disordering reconstruction which is hardly reversible due to the associated increase in structural entropy. Further, the flipping process that occurred on the proton-disordered surface provides a possible low-barrier exothermic pathway to increase proton order. Hence it appears unlikely that completely proton-disordered surfaces are representative models of real ice surfaces at low temperature. Although by no means conclusive, the lack of any processes leading to significant distortions of the lattice on the Fletcher surface, despite a ∼500 times longer simulation time than for the proton-disordered surface, suggests that proton order inhibits such processes at low temperature. Furthermore, our analysis of the event tables constructed by AKMC showed that the Fletcher surface is significantly more robust against local reordering transitions than the disordered substrate. In combination with the above-mentioned experiments, our simulation results therefore suggest that the surface is to a large extent stabilized by stripes of dangling H-atoms, as suggested by Fletcher 15. This is particularly interesting when considering the relatively small difference, less than 9 %, in the energy of the two surfaces. Further work, however, will be required to reveal in more detail what type of patterns of dangling H-atoms might emerge and how they depend on temperature.
Our results also highlight the importance of carefully considering the surface proton order in simulation studies of ice, e.g., when investigating atmospheric chemical reactions. Using proton-disordered surfaces with a large amount of repulsive interactions between dangling H-atoms may lead to unrealistic predictions of the energetics and reaction pathways.
On the other hand, the formation of proton-disordered patches on largely proton-ordered ice surfaces might be considered as rare events that can occur locally. Previous experimental measurements may have been insensitive to the type of roughening of crystalline order that we observe on the proton-disordered surface, but its possible importance for the chemical reactivity of ice surfaces motivates further experimental efforts to characterize in more detail the surface structure of ice at low temperature.
Simpler reordering processes are observed on the ordered Fletcher surface over a time interval of 49 µs, where several molecules frequently rotate around their molecular axis and shift into the surface to become interstitial defects. Some of the surface molecules are particularly susceptible to these processes and spend a significant portion of the simulated time in the defect geometry. Moreover, our observation of a transient process on the Fletcher surface, lasting about 10 ns, which leads to a possible precursor state to a surface vacancy defect, suggests that surface vacancies may form on a µs timescale even at low temperature, i.e., around 100 K. Future experimental and theoretical work would be required to quantify the propensity of defects and to elucidate how they affect the chemical reactivity of ice surfaces.
A. Acknowledgement | 2014-11-28T09:29:34.000Z | 2014-09-26T00:00:00.000 | {
"year": 2014,
"sha1": "15db74e832dc9c5ac3506d2e36504cf90305868a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1409.7553",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "15db74e832dc9c5ac3506d2e36504cf90305868a",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Chemistry",
"Medicine",
"Physics"
]
} |
229607523 | pes2o/s2orc | v3-fos-license | CREATIVE MARKETING AND INNOVATIVE BRANDING: AN EFFECTIVE WAY TO ATTRACT CUSTOMERS
Article History Received: 18 September 2020 Revised: 15 October 2020 Accepted: 2 November 2020 Published: 23 November 2020
organization's so-called dynamic capabilities (Ferreira, Coelho, & Moutinho, 2020; Lawson & Samson, 2001), which are required for competitive advantage and high performance in unstable surroundings (Teece, Pisano, & Shuen, 1997; Teece, 2017). In the hotel industry, creativity has accordingly been applied to a large extent by managers to govern hotels (Kattara & El-Said, 2013). Tourism is defined as the social, cultural and economic phenomenon which entails the movement of people to countries or places outside their usual environment for personal or business/professional purposes (UNWTO, 2008). Bangladesh is one of the few countries in South Asia that is not on the usual hunting list of tourists, unlike Nepal, India, the Maldives or Sri Lanka, but it has its own delicate and distinctive attractions to offer (Nabi & Zaman, 2014). Bangladesh Parjatan Corporation has produced a study that once again confirmed the immense prospects of the country's tourism. The study, called 'Bangladesh Tourism Vision 2020', forecast that tourist arrivals to Bangladesh were likely to exceed 1.30 million by 2020 and that Bangladesh would soon be one of the world's biggest tourist attractions (Rashid, 2016). In his study of Bangladesh, Al-Masud (2015) found the country to be a fresh attraction for travelers. To succeed, Bangladesh needs effective planning and appropriate strategies for doing the right thing at the right time. In this circumstance, service providers may adopt creative marketing and branding strategies so that they can capture and attract new customers.
The growing interest in creativity and innovation may seem new to some, but in the marketing discipline its importance has long been recognized (Eriksson & Hauer, 2004; McIntyre, Hite, & Rickard, 2003; Titus, 2000). In his classic book titled The Marketing Imagination, Levitt (1986) argued that the practice of marketing is intimately linked to creative thought and imagination, and concluded that all marketing success begins with an imaginative thought or idea. Creativity involves the creation of new and fresh ideas or plans and is characterized by the use of imagination and expression (Adams, 2005). Innovation is imagination put into practice: it includes the adoption, adaptation or use of another's innovative ideas, transforming them into reality (Priya & Vishal, 2007). The branding goal is to build an emotional connection between a company and its customers.
Branding comes from the sum of several different parts, such as brand name, logo and colors, and it helps companies differentiate themselves in an increasingly competitive market. The advent of digital technologies affects how companies interact with customers and how they use branding (Lavoie, 2015). Finally, we can describe the brand as a "perceptible sign to the human senses of the organization and its products, from which the consumer can distinguish an organization and its products from others" (Chovanová, Korshunov, & Babčanová, 2015).
According to the International Association of Scientific Experts in Tourism, "tourism is the sum of the phenomena and relationships arising from the travel and stay of non-residents, in so far as they do not lead to permanent residence and are not connected with any earning activity" (Nabi & Zaman, 2014). The hospitality industry is at the very core of tourism and includes food, drink, and lodging consumption in an environment away from the usual home base. Hospitality as a tourism category "is a fundamental part of the leisure market, both domestic and inbound. Consistent tourism demand helps the hospitality industry to predict demand and find opportunities to increase customer spending, thereby generating a surge of secondary financial impacts" (Benea, 2014). Garrido-Moreno and Lockett (2016) reported that, in recent years, the emergence of social media platforms has become one of the most important technological advances and has greatly affected the tourism industry. Hays, Page, and Buhalis (2013) pointed out that social media is gaining popularity as an aspect of the marketing strategy of Destination Marketing Organizations (DMOs). Their study explored the usage of social media among the DMOs of the top 10 countries most visited by international tourists and argued that social media usage among top DMOs is still largely experimental and that strategies vary significantly. In his research, Kang (2011) suggested some techniques to effectively develop Facebook fan pages for hotels and restaurants that can improve interactions with current customers and attract future consumers. Yap, Cheng, and Choe (2014) identified Web 2.0 applications, such as social networks, blogs, content aggregators, online forums, and user communities, that can serve as powerful marketing communication tools for disseminating product information, getting customer feedback, and building an online community. Dzhandzhugazovaa, Blinovaa, Orlovaa, and Romanovaa (2016) reveal the impact of the creative marketing mix on the success of enterprises in the hospitality industry. Sharma (2014) has shown that a business can thrive more than its rivals because of certain innovative marketing ideas; the first measure of any business, small or large, when compared to its competitors, is its uniqueness. Terkan (2014) noted that advertising has a crucial role in today's competitive marketing world; his research examined two significant persuasive methods often used in business management: creative advertising and marketing strategy. Debano (2015) described how the internet revolution has shifted business practices into a more complex and interactive mode through the development of Web 2.0 applications. As time goes by, businesses, particularly in the hospitality industry, have recognized the benefits of using social networking sites to promote their branding strategies online, providing easier access to target audiences and generating brand equity across selected channels. Fatima, Aftab, and Iqbal (2014) conducted research on the impact of branding on consumer behavior and found brand knowledge to be a very important factor: the more aware a consumer is of a brand and its price, quality and other attributes, the more he or she will be attracted to that brand. Malik et al. (2013) observed the impact of brand recognition and brand loyalty on purchase intention; both correlate closely with purchase intention.
According to the findings of Satvati, Rabie, and Rasoli (2016), there seems to be a relationship between brand equity and consumer behavior, including payment of extra costs, brand preference, and purchasing intention. Ashton (2014) mentioned that while the development of tourist destination brands is well known, there is little work defining the brand identity creation process. Kalembe (2015) demonstrated that branding has a major positive impact on tourism performance in Rwanda. Hossain (2013) investigated how the use of promotional activities would contribute to the growth of the tourism industry, with special emphasis on the Bangladesh case. According to Islam and Jubery (2016), specialized consumer techniques can help policymakers identify market visitors and customize their operations to achieve ideal promotional goals and address current downward income trends. Examining the promotional methods used by Bangladeshi tour operators, Hasan, Rahman, and Hossain (2015) and Nabi and Zaman (2014) point out that, due to a lack of knowledge, facilities, and appropriate marketing methods, the tourism industry struggles to reach its destination.
Based on this literature, the following hypotheses were formulated:

1. Social media engagement can strongly attract customers.
2. A user-friendly web page has a positive impact on customer attraction.

Figure 1 shows the framework of customer attraction through creative marketing and innovative branding tools and strategies.
Conceptual Framework
This study focused on identifying the creative marketing and innovative branding strategies and tools applied by these sectors and their effectiveness in attracting new customers, from the perspective of the tourism and hospitality industry of Bangladesh. Apart from generating new knowledge and information useful to a wide range of users, this study can add value to the tourism and hospitality industry in terms of capturing new customers, increasing revenue, and strengthening its current market position. The findings of the study will provide some useful marketing and branding strategies and tools for the tourism and hospitality industry of Bangladesh. The rest of the paper is structured as follows. The "materials and methods" section describes the study area and sampling techniques, variable descriptions, and analytical models. The "results and discussion" section details the findings of the study. The "conclusion" section provides the summary and some policy measures on related matters.
Data Sources
Purposive sampling was applied to collect data from two respondent groups, namely customers and service providers, in the tourism and hospitality industry of Bangladesh. A sample of 150 respondents was drawn, including 33 service providers and 117 customers. In this study, a structured questionnaire containing open- and close-ended questions was used as the instrument to achieve the study's objectives. The data were collected through face-to-face interviews. All participants took part willingly. All queries were in Bangla, which respondents speak fluently.
Customers and service providers were asked by the researcher to fill out the questionnaire after the purpose of the study was clearly explained. The questionnaire was piloted on a group of 5 respondents to check for language clarity, duration of administration, and overall comprehension of statements.
Response Variable
In this study, customer attraction has been used as the response variable. The variable was categorized into a binary outcome (1 = "Yes"; 0 = "No"). Customer attraction through creative marketing strategies and innovative branding tools in the tourism and hospitality industry of Bangladesh was coded as "1", and "0" was used for the rest.
Explanatory Variables
For the purpose of the study, social media, a user-friendly and interesting web page, branding such as franchising, online booking, mobile apps, establishing forward business relations (linkage with target customers) and backward business relations (linkage with suppliers and personnel), promoting Bangladesh's tourism, launching a Facebook page, search engine marketing (SEM), celebrity endorsement and evaluation sites were used as predictor variables.
Responses regarding each explanatory variable were categorized into three levels: agree, neutral, and disagree.
Statistical Analysis
In the univariate stage, we conducted a chi-square goodness-of-fit test (a single-sample nonparametric test), which allows us to test whether the observed proportions for the creative marketing strategies and innovative branding tools variables differ from hypothesized proportions. In the bivariate setup, the chi-square test of independence was considered. The test of independence assesses whether an association exists between creative marketing and innovative branding strategies and tools and drawing consumer attraction.
The calculation of the chi-square statistic is quite straightforward and intuitive:

χ² = Σ (fo − fe)² / fe

where fo is the observed frequency (the observed counts in the cells) and fe is the expected frequency if no relationship existed between the variables. As depicted in the formula, the chi-square statistic is based on the difference between what is actually observed in the data and what would be expected if there were truly no relationship between the variables.
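Both tests can be illustrated numerically. The sketch below runs a goodness-of-fit test and a test of independence with scipy; the counts and contingency table are invented for illustration only and are not the study's data (the study itself used SPSS; Python is used here purely as a worked example).

```python
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit: do Agree/Neutral/Disagree counts for one tool differ
# from equal hypothesized proportions? (Counts are illustrative only.)
observed = np.array([110, 25, 15])
expected = np.full(3, observed.sum() / 3)
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"goodness of fit: chi2 = {stat:.2f}, p = {p:.4f}")

# Test of independence: tool rating versus customer attraction (Yes/No).
table = np.array([[95, 15],   # Agree
                  [15, 10],   # Neutral
                  [5, 10]])   # Disagree
stat, p, dof, _ = chi2_contingency(table)
print(f"independence: chi2 = {stat:.2f}, dof = {dof}, p = {p:.4f}")
```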
Binary Logistic Regression Model
Logistic regression can be used to predict a categorical dependent variable based on continuous or categorical independent variables; to determine the effect of the independent variables on the dependent variable; to rank the relative importance of the independent variables; and to assess interaction effects. The impact of predictor variables is usually explained in terms of odds ratios.
Let p denote the probability of customer attraction. The binary logistic model takes the form logit(p) = ln(p / (1 − p)) = β0 + β1x. We want to choose β0 and β1 so as to maximize the log-likelihood. These choices will also maximize the likelihood.
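A minimal sketch of this estimation follows, fitting a binary logistic model by maximum likelihood on synthetic data. The coefficients, sample size, and variable coding are assumptions made for the example, not the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=150)                 # 1 = agrees tool is effective
true_logit = -0.5 + 1.2 * x                      # assumed true coefficients
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))  # 1 = attracted

X = sm.add_constant(x)                           # intercept column for beta0
fit = sm.Logit(y, X).fit(disp=False)             # maximizes the log-likelihood
print(fit.params)                                # estimates of beta0, beta1
print(np.exp(fit.params[1]))                     # odds ratio for the predictor
```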
SPSS (Statistical Package for the Social Sciences) version 25 was used for data management and analysis.
RESULTS AND DISCUSSION
At first, we examined whether the three categories (Agree, Neutral, and Disagree) for the creative marketing strategies and innovative branding tools variables were selected in equal proportions for the tourism and hospitality industry of Bangladesh. For the web page variable, 7 out of 10 respondents agreed that user-friendly and interesting web pages will attract new customers' attention (P < 0.001). More than half of the respondents agreed that branding such as franchising can expand the international market for tourism and hospitality in Bangladesh (55 percent, P < 0.001); according to Zaitseva (2013), branding has a major positive impact on tourism performance. Nearly 80 percent of respondents agreed that online booking can attract customers (P < 0.001). Gregory and Breiter (2001) likewise documented growth in the value of the Internet as a booking medium and found that half of the investigated hotels increased their occupancy and average daily rates due to online booking systems. The results are very similar for the other creative marketing and innovative branding strategies and tools (except the evaluation site), such as mobile apps, the establishment of forward business relationships (linkage with target customers) and backward business relationships (linkage with suppliers and staff), the promotion of Bangladesh's tourism, launching a Facebook page, the use of search engine marketing (SEM), and celebrity endorsement. Figure 2 indicates that the average score of the above data is 1.48, roughly equal to the agreement scale value of 1, suggesting that the majority of respondents agreed with the effectiveness of creative marketing and branding in attracting tourism and hospitality industry customers in Bangladesh.
Figure 2. Overall average score of creative marketing and innovative branding tools to attract customers of the tourism and hospitality industry of Bangladesh.
The relationship between creative marketing and innovative branding strategies and tools and drawing consumer attraction is shown in Table 2. A strong relationship exists between social media and consumer attraction in the tourism and hospitality industry of Bangladesh through creative marketing and innovative branding (p<0.001). Nearly 86 percent of respondents agree that social media is highly effective for attracting customers through creative marketing and innovative branding, while fewer than 10 respondents find social media not that effective for attracting customers. Hampton, Goulet, Rainie, and Purcell (2011) also showed that the recent emergence of social media has greatly affected the tourism and hospitality industry. There is likewise a substantial positive association between user-friendly and fascinating web pages and customer attraction through creative marketing and innovative branding in the tourism and hospitality industry of Bangladesh.
Approximately 100 percent of respondents believe new consumers will be highly attracted to user-friendly and fascinating web pages (p<0.001). Search engine marketing (SEM), evaluation sites, and celebrity endorsement also have a strong positive relationship with consumer attraction in the tourism and hospitality industry of Bangladesh (p<0.001) (Kim, 2008). According to Hotelmarketing.com (2011), 75% of hotels use social media to interact effectively with current customers and to attract future consumers. There is no significant relationship between sponsorship (for various events such as sports competitions, music festivals, and tourism occasions) and attracting consumers through creative marketing and innovative branding (p=.065).
We considered logistic regression models to determine the adjusted effects of the selected explanatory variables on consumer attraction through creative marketing and innovative branding. Table 3 presents the binary logistic regression analysis of consumer attraction through creative marketing and innovative branding. As stated at the start of the study, the sample size is small. Because of this, the outcome of the binary logistic regression is not ideal: only five variables provide results in the table below, which is less than optimal for any analysis.
Here, only launching mobile apps and promoting Bangladesh's tourism are significant for customer attraction in the tourism and hospitality industry of Bangladesh among the different creative marketing and innovative branding strategies and tools. Note: Significant at ***P<0.001.
From Figure 3 we can see that only 4.7% of respondents said "very significant" when asked about the effectiveness of traditional marketing, while 42.7% and 44.7% said "significant" and "average", respectively. On the other hand, 64.7% of respondents strongly agreed with the statement that creative marketing is more effective than traditional marketing for attracting customers in the tourism and hospitality industry, a result also supported by the study conducted by Dzhandzhugazovaa et al. (2016), while 10%, 16.9%, 5.3%, and 4% of respondents agreed, remained neutral, disagreed, and strongly disagreed, respectively, with the statement.
Innovative approach to improve information accuracy in a two-district cross-sectional study in Bihar, India
Objective: Combine Health Management Information Systems (HMIS) and probability survey data using the statistical annealing technique (AT) to produce more accurate health coverage estimates than either source of data alone, together with a measure of HMIS data error.

Setting: This study is set in Bihar, the fifth poorest state in India, where half the population lives below the poverty line. An important source of data, used by health professionals for programme decision making, is routine health facility or HMIS data. Its quality is sometimes poor or unknown, and it has no measure of its uncertainty. Using AT, we combine district-level HMIS and probability survey data (n=475) for the first time for 10 indicators assessing antenatal care, institutional delivery and neonatal care from 11 blocks of Aurangabad and 14 blocks of Gopalganj districts (N=6 253 965) in Bihar state, India.

Participants: Both districts are rural. Bihar is 82.7% Hindu and 16.9% Islamic.

Primary outcome measures: Survey prevalence measures for 10 indicators, corresponding prevalences using HMIS data, combined prevalences calculated with AT, and SEs for each type of data.

Results: The combined and survey estimates differ by <0.10. The combined and HMIS estimates differ by up to 84.2%, with the HMIS having 1.4-32.3 times larger error. Of 20 HMIS versus survey coverage estimate comparisons across the two districts, only five differed by <0.10. Of 250 subdistrict-level comparisons of HMIS versus combined estimates, only 36.4% of the HMIS estimates are within the 95% CI of the combined estimate.

Conclusions: Our statistical innovation increases the accuracy of information available for local health system decision making, allows evaluation of indicator accuracy and increases the accuracy of HMIS estimates. The combined estimates with a measure of error better inform health system professionals about their risks when using HMIS estimates, so they can reduce waste by making better decisions. Our results show that AT is an effective method ready for additional international assessment while also being used to provide affordable information to improve health services.
INTRODUCTION
The 17 Sustainable Development Goals (SDGs) were adopted by all United Nations member states in 2015 as an urgent call for action to end poverty and deprivation by following strategies that improve health and education, reduce inequality and spur economic growth. For national strategies to be effective, they must be grounded in the local context. Good-quality data to measure the prevalence of disease conditions, or the population's coverage with health services, is an indispensable resource for programme managers and health policy makers to understand their context. This point is equally true in higher-income countries as it is in low-income and middle-income countries (LMICs). In these latter settings, household surveys are generally considered resource intensive and, being carried out at 3-5 year intervals, are not sufficiently frequent for many decisions. As a result, data generated routinely through the Health Management Information Systems (HMIS) are more frequently used for decision-making and for annual reviews than household surveys. HMIS data consist of routine information reported from the health facilities to district health offices, who submit it to the Ministry of Health on a monthly, quarterly, semiannual and annual basis. A key shortcoming of HMIS data is that it is only representative of those who can access health services. This shortcoming is especially problematic in LMICs, where access to healthcare services is constrained by geographic, economic and social determinants, and in settings where there are competing services in the community that do not report into the HMIS.

Strengths and limitations of this study
► Household survey data were captured as a stratified random sample, leading to an efficient use of information.
► Administrative data comprise 100% of the available recurrent information in the two selected districts.
► The study population is very large, covering a large geographical area, reducing the likelihood that the results are pertinent only to a small group of mothers with children; results may be generalisable.
► The process for combining probability and administrative data was assessed using a statistically principled approach prior to use in this study.
► The study is confined to two districts of Bihar, India, which indicates the need to replicate the study in additional states of India and in other country settings.

Demand by global donors for timely and reliable data continues to increase.1 The 60th World Health Assembly underscored the importance of robust information to strengthen health systems and policies (resolution WHA60.27),2 and to this end established a Health Metrics Network (HMN) partnership to aid countries in improving the data quality of routine information systems to enable their use for health planning and decision-making.3-6 This was followed by other resolutions and, in 2015, by a high-level summit on Measurement and Accountability for Results, leading to the formation of the Health Data Collaborative (HDC) in March 2016, supported by global partners. HDC's mandate is more extensive than that of HMN, having an ultimate objective of aiding countries to improve the quality and availability of health data and their ability to consistently and accurately report on progress towards the health-related SDGs.
The advent of electronic health records was heralded, as studies showed their association with improved clinical care, outcomes7 and surveillance.8 Consequently, a computerised District Health Information System-2 (DHIS2) was implemented in 73+ countries to integrate data sources for more rapid analysis, dissemination and use. An evaluation in Tanzania showed it improved the timeliness and completeness of reporting.8 South Africa established a dedicated HIV/AIDS information system, recommending the use of the earlier DHIS for integrating different information sources9 and as input for designing national programme strategies.10 A 2010-2015 study in Swaziland recommended embracing electronic medical data systems to reduce the discrepancies occurring between the existing three information systems put in place for malaria elimination.11 Recently, a similar demand was issued by practitioners working in humanitarian settings for an epidemiology and demography service to develop robust and timely information from multiple sources to establish priorities that address the population's needs.12 One important use of routine information systems is to monitor a population's coverage with health services, as the Global Alliance for Vaccines and Immunisation has done for vaccination rates since the 1990s. They attempt to control for known weaknesses in administrative data by carrying out data quality audits, recounting and contrasting the number vaccinated in health centres with the number originally reported.13 Although measurement improves all facets of health management, HMIS-related studies mainly focus on improvements in obtaining numerators.1 5 14 15 The future also demands accurate and precise denominators and, more importantly, a principled method for assessing their accuracy (eg, SE) so users understand their risk when using the data. Under normal circumstances, the error present in HMIS remains unknown.
The research question we investigate in this paper: How can we measure the error in the HMIS and improve its accuracy? We address this issue by presenting an innovation called Statistical Annealing (annealing technique, AT). It builds on our earlier pioneering work in 2011 16 and refined in 2018 17 that provides a coverage estimator with a 95% CI by combining data from a probability survey and HMIS data. HMIS coverage estimates have not before had a 95% CI to inform users of their accuracy. We demonstrated the magnitude of HMIS error and how it can be improved by using AT.
By the early 20th century, Bowley18 and Neyman19 had proven the importance of sampling and of estimating the magnitude of its errors. The SE20 is a commonly used measure of error, which is an estimate of the deviation of the sample mean from the actual population mean. The SE is used in calculating a 95% CI, which relates to the accuracy of the estimate. If survey data are combined (annealed) with HMIS data appropriately, they produce a single, more accurate coverage estimator.16 However, concurrent well-designed surveys have been a surprisingly neglected source of information for improving HMIS.21 22 Recent efforts have used surveys to improve HMIS estimates,23 24 although the resulting revised estimates do not have a corresponding 95% CI.
The objective of AT is to produce a principled measure of health system performance by combining existing HMIS and survey information which also produces an accompanying measure of its accuracy. This hybrid estimate is more accurate than either data source alone. Our proof of principle study, used Child Health Day (CHD) administrative data from Benin and Madagascar provided by UNICEF country offices and household survey data collected at the same time, to verify CHD coverage and the quality of CHD administrative data. 17 This refined approach to AT resulted in the production of a 95% CI for the administrative data and for the combined result. See online supplemental information 1 for the AT formulae.
The current study assesses the transferability of AT to a new setting, HMIS data from Bihar state located in northeast India. Bihar is the fifth poorest state of India where half the population lives below the poverty line. It is one of the most densely populated states (N=110 million), and has some of the weakest maternal and child health and nutrition indicators in India. 25 26 Both districts are primarily rural with a mix of upper caste and lower caste Hindu residents, and minority Muslim communities.
Also, this study is the first use of AT with true HMIS data, which in the future will be a more typical use for AT, and informs our conclusions about its global applicability. We apply AT to 10 indicators related to antenatal care (ANC), institutional delivery and neonatal care using data from two districts, Aurangabad (N=3 695 928) and Gopalganj (N=2 558 037), and two sources of data collected for purposes other than this study: HMIS data from the two districts, and probability survey data collected as part of an earlier assessment of maternal and child healthcare coverage in Bihar.27 This secondary data analysis uses datasets without personal identifiers.
HMIS system in Bihar
The HMIS data in Bihar consists of 318 data elements. These are reported monthly by health subcentres (HSCs) and aggregated at the block primary health centre (BPHC). The block is the subdistrict administrative unit in India. Staff at the HSCs tally information from paper registers and send a paper report to BPHCs where it is entered into a computerised HMIS system, and subsequently maintained electronically. District-level indicators comprise information aggregated across the block as well as data from district and referral hospitals.
Data
To develop AT in Bihar, we used the health facility-based HMIS and probability data from a household survey in two districts, Aurangabad and Gopalganj. From the Bihar State Health Society, we obtained two sources of HMIS data: the HMIS data reported by all health facilities in the two districts, at district level and disaggregated to the block-level; and the Expected Level of Achievement (ELA), which is projected data. They calculated the ELA data using the latest 2011 census and applied an average annual population growth rate using parameter values reported in the Annual Health Survey 2012-2013 (eg, crude birth rate) and in the HMIS data from previous years (eg, percentage of pregnancies leading to a stillbirth in 2015-16). We averaged these two data sources, the HMIS and the ELA, to obtain the administrative estimate.
A household probability survey was conducted during 7-25 September 2016 in each district in a standardised manner28 29 using Lot Quality Assurance Sampling (LQAS). The LQAS survey is a stratified random sample used to assess health service coverage. The strata are administrative blocks (11 in Aurangabad, 14 in Gopalganj), which are subdistrict units in India. The strata were weighted by their population size. The first-stage sample uses probability proportional to size to randomly sample villages (typically, n=19 per stratum).28 In each random location, a second-stage sample identifies an index household using segmentation sampling30 31 from locally constructed hand-drawn maps. The next closest household is then selected for interview, so that households not included in the map do not have zero probability of selection. In the selected household, individuals in three target groups (women with children 0-2, 3-5 and 0-5 months of age)32 are listed, with one selected randomly using a random number table. The remaining target group is selected in the next closest house using the same protocol.33 The resulting sample of individuals is random and provides its own SE estimator used for calculating a 95% CI. Only one member of each target group was selected in a sampled village. The total number of random samples for each indicator is 209 (Aurangabad) and 266 (Gopalganj). LQAS gives reliable estimates for indicators at the district level34 and allows the classification of blocks as 'performing adequately' or not. However, in this paper, we do not use the data for classification, but rather for estimation.
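To make the first-stage sampling concrete, the sketch below draws a systematic probability-proportional-to-size sample of villages within one hypothetical stratum; the village names and population sizes are invented for illustration, and Python is used only as a worked example (the study's analyses used Stata and R).

```python
import numpy as np

rng = np.random.default_rng(42)
villages = [f"village_{i}" for i in range(60)]   # hypothetical names
pop = rng.integers(200, 5000, size=60)           # hypothetical populations

n = 19                                           # sample locations per stratum
cum = np.cumsum(pop)                             # cumulative population sizes
step = cum[-1] / n
start = rng.uniform(0, step)
hits = start + step * np.arange(n)               # systematic PPS selection
chosen = np.searchsorted(cum, hits)              # village containing each hit
print([villages[i] for i in chosen][:5])
```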
Indicators for annealing
We based the selection of the 10 indicators for annealing on the ease of combining HMIS data temporally with the survey data (table 1). For the LQAS survey, we measured ANC, institutional delivery and neonatal care outcomes in the population of women with children 0-2, 3-5 and 0-5 months of age,32 resulting in a total of 627 (Aurangabad) and 798 (Gopalganj) interviews. We used the HMIS data from the last 5 months of reports (April-August 2016) preceding the LQAS survey (tables 1 and 2), so that they matched temporally.
The AT design
As in earlier work, 17 we apply AT in each sub-district for each indicator. The district-level estimators are an aggregation of appropriate block-level coverage data, weighted by the population sizes. Data analysis was done using Stata SE V.15 and R V.3.4.1.
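As an illustration of the population-weighted aggregation just described, the following sketch computes a district estimate from hypothetical block-level coverages and population sizes.

```python
import numpy as np

# District coverage as a population-weighted average of block coverage.
p_block = np.array([0.61, 0.58, 0.72, 0.49, 0.66])            # illustrative
pop_block = np.array([210_000, 180_000, 150_000, 240_000, 200_000])
p_district = np.average(p_block, weights=pop_block)
print(round(float(p_district), 3))
```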
Survey coverage estimator
For all but two indicators, the numerator of the LQAS coverage is the sum of correct responses in each of two target groups (mothers of children 0-2 months and 0-5 months) in each block. The denominator is the sum of the denominators of both groups (19 × 2 = 38). For the two remaining indicators, we used additional data from a third target group, mothers of children 3-5 months, where the denominator is the sum of the denominators of each target group (table 2). A step in the annealing process requires a weighted average of component parts; the LQAS estimator is one such component. For each component, the weight used is inversely proportional to its variance. When the LQAS coverage, p, is away from the extremes of zero or one, we can use the standard binomial variance formula, p(1 − p)/n. When the LQAS coverage is zero (or one), we calculate the weight by using a surrogate 'SE' from a 95% CI using Louis' approach,35 dividing its length (1 − 0.05^(1/n)) by the appropriate normal quantile, and squaring the resultant. To assess any block-level clustering effect between target groups sampled in parallel, we use the two-tailed McNemar's test for correlated proportions for each indicator,36 across all blocks sampled in each district.
HMIS coverage estimator
The numerator of the HMIS coverage is reported in table 1 (column 4). The denominator is the average between the total number of ANC registrations per month calculated from the ELA projected data and that reported in the HMIS data (averaged over April-August 2016) (table 2).
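A minimal sketch of this denominator construction follows; the monthly registration counts and the numerator are invented for illustration, with the averaging of the reported and projected series following the description above.

```python
import numpy as np

# Monthly ANC registrations, April-August 2016 (illustrative numbers only).
hmis_reported = np.array([410, 395, 430, 405, 420])   # HMIS-reported series
ela_projected = np.array([455, 455, 455, 455, 455])   # ELA projected series

numerator = 1630                          # eg, reported TT1 doses, 5 months
denominator = (hmis_reported.sum() + ela_projected.sum()) / 2
p_hmis = numerator / denominator          # HMIS coverage estimate
print(round(p_hmis, 3))
```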
To calculate the weight to use for the HMIS estimator, as we did for the LQAS estimator, we use the fact that both estimators are estimating the same quantity, so their difference should be estimating zero. To construct a variance for the HMIS coverage estimator, we use the average mean square error (MSE), which is the average of the squared difference between the HMIS and LQAS coverage estimates over all the blocks composing one district. The MSE is calculated for the whole district, and can be decomposed in two ways:

MSE = (1/K) Σ_{k=1..K} (p_HMIS,k − p_LQAS,k)² = σ²_HMIS + σ²_LQAS

where:
► K is the total number of blocks in the district;
► k is the index for the block in the concerned district, and ranges from 1 to K;
► σ_HMIS is the SE of the HMIS coverage p_HMIS;
► σ_LQAS is the SE of the LQAS coverage p_LQAS.
The first decomposition measures how much the HMIS and LQAS coverage estimators differ across the blocks composing the district. The second decomposition shows the MSE is also the sum of the two variances of the HMIS and LQAS estimators, assuming each source of data was collected independently of the other. This decomposition also assumes that both HMIS and LQAS are unbiased (that is, that the expectation of both estimates is the same). We calculate the MSE using the first decomposition and subtract the LQAS variance to obtain the HMIS variance (and thus σ_HMIS). This can then be used to calculate the weight, as we show in reference 17 and in online supplemental information 1 and 2, tables S1-S9.
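The full AT formulae appear in the paper's online supplemental information; the sketch below assumes a standard inverse-variance weighting in which the HMIS variance is recovered as the district MSE minus the mean LQAS variance, with a small floor added only to keep the toy example numerically stable. All coverage values and the sample size are invented.

```python
import numpy as np

# Illustrative block-level coverages for one district (made-up values).
p_lqas = np.array([0.62, 0.55, 0.70, 0.48, 0.66])     # survey estimates
p_hmis = np.array([0.80, 0.41, 0.75, 0.62, 0.90])     # HMIS estimates
n = 38                                                # survey denominator

var_lqas = p_lqas * (1.0 - p_lqas) / n                # binomial variance
mse = np.mean((p_hmis - p_lqas) ** 2)                 # first decomposition
var_hmis = max(mse - var_lqas.mean(), 1e-9)           # second decomposition

# Inverse-variance weights give the combined estimate and its SE.
w = (1.0 / var_hmis) / (1.0 / var_hmis + 1.0 / var_lqas)
p_combined = w * p_hmis + (1.0 - w) * p_lqas
se_combined = np.sqrt(1.0 / (1.0 / var_hmis + 1.0 / var_lqas))
print(np.round(p_combined, 3))
print(np.round(se_combined, 3))
```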
Assessing variation in local-level data quality
To advance the use of AT in multiple settings, we developed a tool for subdistrict-level analysis. We assess the variation in the quality of the HMIS indicators in 250 block-level comparisons of the 10 indicators (10 × (11 Aurangabad + 14 Gopalganj)) and 20 district-level comparisons. We classify the value of the HMIS coverage relative to the CI around the combined estimate. If the HMIS coverage is within the 95% CI of p_combined, the cell is coloured green. If the HMIS estimate is within a reasonable percentage difference of either boundary of the CI, for example 10%, we are assured its value is not far from the survey estimate (we colour those cells light red if it is above the upper boundary or light blue if it is below the lower boundary). When the HMIS coverage is beyond the reasonable percentage difference of either boundary, the discrepancy might be too large to recommend combining the two estimates, and the HMIS estimate should not be used (we colour these cells dark red or dark blue).
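A minimal sketch of this five-colour classification follows; it assumes the "reasonable percentage difference" is taken relative to each CI boundary, which is one possible reading of the rule above.

```python
def classify_hmis(p_hmis: float, ci_low: float, ci_high: float,
                  tol: float = 0.10) -> str:
    """Classify an HMIS coverage value against the combined estimate's CI.

    tol is the 'reasonable percentage difference' beyond each CI boundary;
    the five colour categories follow the paper's visual tool.
    """
    if ci_low <= p_hmis <= ci_high:
        return "green"                       # inside the 95% CI
    if p_hmis > ci_high:
        return "light red" if p_hmis <= ci_high * (1 + tol) else "dark red"
    return "light blue" if p_hmis >= ci_low * (1 - tol) else "dark blue"

print(classify_hmis(0.72, ci_low=0.55, ci_high=0.68))  # prints "light red"
```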
The survey data were collected originally for the assessment of a maternal and child healthcare programme in Bihar. We obtained oral rather than written informed consent from all respondents because of the high illiteracy rate. We later treated the survey data as secondary data for the current study, which was supported by our donor and the Bureau of Statistics of the Ministry of Health for the State of Bihar. After giving permission to access the State HMIS, they facilitated the collection of the HMIS data.
Patient and public involvement
This study does not involve patients. Also, the public were not involved in the design, conduct and reporting of the research. The public was engaged as interviewees. To ensure local engagement all data capture was carried out in close coordination with the State Ministry of Health of Bihar. We also shared the results with them and offered further dissemination of results, and engaged them for data use and action planning activities.
RESULTS
Data characteristics and AT characteristics

Among the 10 indicators, the MSE between the two sources of data ranges from 0.013 to 0.193 in Aurangabad, and between 0.005 and 0.391 in Gopalganj (see table 3 for the indicators). Three blocks in Gopalganj have an HMIS coverage larger than 100% (two or more tetanus toxoid vaccinations (TT2): 104% in Barauli and 103% in Kateya; iron folic acid (IFA) tablet distribution: 147% in Pach Deuri). The HMIS coverage averaged over blocks ranges between 12% and 80% in Aurangabad, and between 10% and 84% in Gopalganj. The LQAS coverage averaged over the blocks ranges between 3% and 98% in Aurangabad, and between 5% and 97% in Gopalganj. The McNemar tests did not detect block-level clustering, except for the four tests with indicators measuring institutional delivery in all facilities or in public facilities.
The weighting factor used in the calculation of the combined estimator varies for each indicator in each district (figure 1). For six indicators (registration during the first trimester of pregnancy, receiving TT1, receiving >100 IFA tablets, institutional delivery in any facility, deliveries in public facilities, baby weighed within 1 hour of birth), all block-level weighting factors, w, are below 0.10 in both districts, indicating the contribution of the HMIS estimator to the combined estimator is no higher than 10% (table 3 and online supplemental tables S1-S9). We use a one-tailed test to check for a difference of the weights between the two districts, based on the observed mean difference. The weights for Gopalganj are higher than Aurangabad's for three indicators: first trimester registration, receiving TT1 and having a postpartum visit (for all three, one-sided t-test p<0.01). The Gopalganj weighting factors are lower than Aurangabad's for the other seven indicators: >3 ANC visits, receiving TT2, receiving >100 IFA tablets, institutional delivery in any facility, deliveries in public facilities, early initiation of breastfeeding (EIBF), and baby weighed at delivery (all seven, one-sided t-test p<0.05). This result indicates that there are more discrepancies between the HMIS estimates and the LQAS estimates in Aurangabad compared with Gopalganj for the first three indicators, while the discrepancies are greater in Gopalganj for the latter seven indicators. All but one of the weights for the indicator >3 ANC visits in Aurangabad are between 0.4 and 0.5, indicating an almost equal contribution of the HMIS and LQAS estimators to the resulting combined estimator. Alternatively, there are large variations in the weights for the indicator measuring one visit by any front-line worker (FLW) within 24 hours of delivery in Gopalganj, although the MSE is small in this instance. Comparing the 20 p_HMIS vs p_LQAS indicator estimates across the two districts, five differed by <0.10 and six others by <0.20. The remaining nine indicator estimates show larger differences (table 3; online supplemental tables S1-S9).
ANC services
All estimates, SEs and CIs are calculated at the block and district levels (as an illustration, see the detailed results for first trimester ANC registration in table 4; see online supplemental tables S1-S4 for other indicators). For the five indicators measuring coverage related to ANC and birth preparedness, the combined block-level estimate differs from the LQAS estimate by 1% (received >100 IFA tablets) to 10% (>3 ANC visits); the block-level combined estimate differs from the HMIS estimate by at most 28% (TT1) to 133% (received >100 IFA tablets). SEs for the HMIS data are 1.4 to 32.3 times larger than those calculated for the combined estimates.
Institutional delivery
For the two indicators measuring the proportion of infants institutionally delivered (online supplemental tables S5 and S6), the combined block-level estimate differs from the LQAS estimate at most by 1% (all facilities) and 4% (public facilities only); the combined estimate differs from the HMIS estimate at most by 84% (all facilities) and 58% (public facilities only). SEs for the HMIS data are 3.7-14.6 times larger than those calculated for the combined estimates.
Neonatal health
For the three indicators measuring coverages related to neonatal health (online supplemental tables S7-S9), the combined block-level estimate differs from the LQAS estimate at most by 3% (newborns who were visited by any FLW within 24 hours of home delivery) to 8% (EIBF); the combined estimate differs from the HMIS estimate at most by 36% (newborns who were visited by any FLW within 24 hours of home delivery) to 70% (EIBF). SEs for the HMIS data are 1.4 to 6.2 times larger than those calculated for the combined estimates.
Local level variation
We first look at the two rows of figure 2 that summarise the district-level measures and see that, on average, Aurangabad's HMIS measures (row 13) behave slightly better than Gopalganj's (row 28). For two indicators both districts are red; for three they are both blue; for two they are both light red; for two Aurangabad's are light blue versus Gopalganj's dark blue; and for one Aurangabad's is green while Gopalganj's is light red. We get a more detailed contrast when investigating the block-level results. Of the 250 comparisons, 36.4% of the HMIS estimates are within the CI of the combined estimate, with Aurangabad displaying greater accuracy (44.5%) than Gopalganj (30%). For two indicators (100 IFA tablets received and facility-based delivery (all)) the results are consistently poor across blocks. The same can be said of TT1 in Aurangabad, but not in Gopalganj. These indicators warrant further investigation into why the two sources of data have such discrepancies. The 3+ ANC indicator displays good agreement in Aurangabad, but a mixed evaluation in Gopalganj. Similarly, with the remaining indicators, we can detect subtly different results within the two districts. These results suggest substantial variation across blocks, districts and indicators in HMIS accuracy.
DISCUSSION
The results of our study show numerous discrepancies in block-level HMIS coverage estimates in both districts compared with the probability samples. However, the extent of these discrepancies varies across the blocks. We would expect consistency among block-level results given their geographical and political proximity within these two districts. Taking the combined estimates as our guides, they differ from the respective probability estimates by at most 10%. In contrast, the combined estimates differ from the HMIS estimates by up to 84.2% (facility-based delivery (all) in Aurangabad's block Sadar), except for one instance where the difference is 133% due to the administrative coverage being >100% (distribution of IFA during pregnancy in Pach Deuri). SEs for the HMIS estimates are between 1.4 and 32.3 times larger than for the combined estimates (online supplemental tables S1, S9 and S4).
The HMIS data provides two sources of information to calculate the denominator (reported HMIS and ELA). Across the 14 blocks of Gopalganj, the two values were very close. In Aurangabad, we observe a linear relationship between the two sets of values, with the ELA values being slightly higher than the HMIS recorded values, suggesting the projected data overestimates the numbers reported. Since there was no evidence that one was better than the other, we chose to take the average of both values to define the denominator of the HMIS estimator. This step can be used in future applications.
A general attraction of the HMIS results for management purposes is that they apply at the block level, and we do not wish to lose this property. We are fortunate because this level also corresponds to the strata in the LQAS surveys, and thus this is the information level provided by the combined estimates. For greater granularity, LQAS is a favourable method to use for AT. In this study, we were also able to increase the sample size by pooling the data from two or three target populations surveyed in parallel within a block.
Our approach allows us to contrast the two districts of Bihar by studying the block-level (subdistrict-level) weighting factors w. For example, six indicators across both districts have weights lower than 0.10, highlighting a large SE in the HMIS estimates versus the probability sample estimates, which leads to a wide 95% CI in the HMIS estimates. The contribution of the HMIS estimator to the combined estimator for the indicator measuring visits by any FLW within 24 hours of birth is higher in Gopalganj; the weights range between 0.15 and 0.54, indicating a more balanced contribution between the convenience and the probability samples, meaning that some indicators have similar estimates while others do not. In Aurangabad the only indicator with a contribution weight of the HMIS estimator over 0.35 in all blocks is three ANC visits. Hence, we are not observing a systematic discrepancy across indicators and districts. Some HMIS estimates are similar to the probability survey estimates for specific indicators, other indicators display similarity in only certain subdistricts or districts, while other indicators show consistent discrepancies between HMIS and survey estimates. Ex post, we can learn from these concordance and discordance patterns how well some indicators are defined and measured across districts. These Bihar-HMIS AT results differ from the Benin and Madagascar-CHD AT results: in the latter study, the administrative data consistently over-estimated prevalence.17 Such is not the case here, as 25.6% of indicators overestimate and 38% underestimate coverage. Future research should map the variation in data quality by country and indicator, as well as subnationally, and seek to understand the reasons for the discrepancies between the HMIS and the probability sample. These discrepancies might be due to one or more reasons: poor-quality denominators in the HMIS in the state or in specific districts; inaccurate or incomplete recording of the HMIS numerator data in one district or at the block level; or incomplete transmission of HMIS records from a health centre to the block level for aggregation.37 It is also possible that the survey and HMIS indicators are not measuring quite the same health system product. The perception of a respondent to a survey question in their own home and that of the health worker in a health centre may differ. Such indicators may need replacement as well. Related to this issue is a question of a more general nature: in situations when two estimates differ substantially, is it advisable to combine conflicting estimates? We think this issue can be answered by the data: as the estimates are different, a compromise suggested by the data is to give appropriate weight to the disparate estimates when combining them, a principle which we demonstrate. Our AT method is grounded on the average squared difference between the HMIS and LQAS estimates of the same quantity (see the formula in online supplemental file 1). Two assumptions are essential for applying the formula: (1) the two estimates have the same expectation, that is, they measure the same indicator; (2) the two estimates are uncorrelated, which is guaranteed by their independent sources of data collection. The first assumption requires that the target population, time period and geographical areas are the same in each source of information. The resulting weights assigned to each estimate illustrate the reliance on the quality of the other source of data in terms of variability.
The more variable the survey estimate is, the higher the weight for the HMIS contribution to the combined estimate, and vice versa. As to the prescription for future strengthening of the HMIS and DHIS2, this is a separate issue, the solution of which depends on information obtained from principled future research.

Figure 2. HMIS coverage value compared with the 95% CI of the p_combined estimate for 10 indicators. Five categories: (dark blue) HMIS coverage is more than 10% below the CI lower bound; (light blue) HMIS coverage is less than 10% below the CI lower bound; (green) HMIS coverage is within the CI; (light red) HMIS coverage is less than 10% above the CI upper bound; (dark red) HMIS coverage is more than 10% above the CI upper bound. ANC, antenatal care; EIBF, early initiation of breastfeeding; FBD, facility-based delivery; HMIS, Health Management Information Systems; TT2, two or more tetanus toxoid vaccinations.
Our AT method uses statistical principles to combine data coming from different sources and calculates error for all sources as well as for the resulting combined estimate. This innovative product is valuable. The AT approach is different from WHO's well-intentioned attempt to improve HMIS estimates with Computation Logic; its estimates rely on professional judgements and do not produce error terms or CIs, a limitation noted by the authors.38 Therefore, the risk associated with their result is unknown. The Institute for Health Metrics and Evaluation applies more complex quantitative analyses rather than just human judgements. It produces an estimate of the number of additional children covered or overestimated.39 However, like computational logic, it does not report a measure of its error.
We are living in an information-conscious era with continual demand for, and the ability to produce, more and better data1 40 obtained from multiple sources. Using accurate information can improve a population's health and reduce waste. We guide the integration of existing data sources to provide a more complete assessment of national health and to reduce the cost to the health system. While we consider numerous data sources, HMIS, which generally do not produce SEs, are touted as having increasing importance for improving the coverage and quality of health services.41 42 Yet, HMIS are not accurate. Improving and measuring the quality of health information systems is greatly needed.43 Our examples exemplify a ubiquitous situation, namely, the difficulty of having a good estimator of the denominator. These problems occur in developed and low-resource countries,15 leading some policy-makers to call for a restructuring of information systems.43 The current strategy for improving information involves rolling out a computerised DHIS2. Though promising, whether the DHIS2 improves data quality is yet to be determined. Nevertheless, its concern is the production of quality numerators. Our study shows that AT detected variation in the quality of the HMIS by indicator and subdistrict location, and can be used to strengthen systems like the DHIS2 by identifying indicators, and their locations, where improvement is needed. While it does not tell us the reasons for the errors, it shows where they exist and their magnitude. This information can be used to understand and correct the sources of errors, which can improve health programmes and reduce waste.
Although we have discussed the statistical principles of our approach in detail, we should make clear its limitations, which the global health community can address. The availability of survey data across multiple districts and subdistricts (blocks in the case of India) is an issue. Survey data are not always widely available or conducted at the frequency needed for timely monitoring of health programmes. Concomitant survey data are essential for using AT to improve routine data. However, a single survey is not necessarily what is needed; multiple surveys carried out at approximately the same time can be used. What is needed is a central data warehouse where surveys sponsored by government, international and civil society organisations are accessible for use in AT, along with clear descriptions of sampling procedures and data codebooks. We are early in this era of improving data quality. While we have focused in this paper on developing appropriate statistical solutions, other health systems strengthening innovations are also needed. The international community needs to consider survey data as a public good. While their primary use may be for programme strengthening in a small catchment area, when considered together with other databases, survey data can have an enormous impact on improving data quality in the health system as a whole.
These results indicate the need to establish a service working side by side with the DHIS2 to inculcate hybrid estimation using AT as a standard component of district information systems. It should be an independent partner of the DHIS2, under the stewardship of lead international agencies such as UNICEF, the Global Fund to Fight AIDS, Tuberculosis and Malaria, the World Bank and bilateral organisations (USAID, DFID). It would work independently and transparently to advance hybrid estimation not only in support of the DHIS2 but also, and possibly more importantly, to support local public health practitioners and national policy-makers in focusing their programmes to address priorities.
CONCLUSION
Our statistical innovation, hybrid prevalence estimation using statistical AT, requires concomitant data from a random survey sample44 conducted at the subdistrict level and subdistrict-level HMIS data. The LQAS survey in Bihar was undertaken for another purpose, so its use in AT came at no additional expense. Other statistical surveys, including the Demographic and Health Survey, are potential candidates. Provided both datasets are available in multiple subdistricts, we can estimate the variability of each indicator between subdistricts, and thus construct not only a combined estimate but also its 95% CI at both subdistrict and district levels, together with a 95% CI for the HMIS estimator. With the new visual tool presented here, AT results can be quickly interpreted.
AT is intended to improve the quality and use of the HMIS while reducing waste due to using inaccurate information. This study shows that bringing data from existing household surveys together with HMIS data permits the calculation of more accurate and precise decentralised prevalence measures by combining HMIS and probability samples at very little cost. In addition to measuring coverage, it also allows us to evaluate the HMIS in practice and point to corrective measures. Such results lead to better systems for tracking the public's health.
Twitter Joseph J Valadez @JValadez

Acknowledgements We thank all who supported the LQAS survey and the HMIS data retrieval. We thank Prof Imelda Bates, Prof Brian Faragher and Nancy Vollmer for their valuable feedback on an earlier version of this manuscript.
Contributors JJV and MP developed the research question; MP and CJ led the development of the mathematical statistics for the annealing technique; JJV and BD developed the survey design; JJV developed the survey methodology; BD managed the survey and data quality; CJ carried out the statistical analyses; JJV obtained the funding and donor support for the research; JJV, MP and CJ interpreted the data; CJ was responsible for data curation; all authors wrote and reviewed the paper; JJV acted as guarantor.
Funding This research was funded by the Bill & Melinda Gates Foundation Investment ID OPP1142889.
Competing interests None declared.
Patient consent for publication Not applicable. Provenance and peer review Not commissioned; externally peer reviewed.
Ethics approval
Data availability statement Data are available on reasonable request. Household survey data are available on reasonable request. The recurrent data belong to the Ministry of Health of the State of Bihar and requests for data must be made to them.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
Bicyclist Stress Perceptions and Heart Rate Variability
Findings: This paper describes the relationship between a physiological marker of stress (heart rate variability) and survey-based stress responses from a cross-over real-world bicycling experiment. The analysis shows that, while heart rate variability was inversely associated with survey-based estimates of stress, large uncertainty in the relationship indicates carefully controlled experiments are still needed before we can be confident that bicyclist stress can be measured through heart rate variability.
QUESTIONS
In this article I examine the relationship between people's stated perceptions of bicycling stress and their heart rate variability (HRV). Objective psychological measures of stress have the potential to aid identification of road environment factors that affect bicyclist comfort and safety, both key barriers to bicycling (Buehler and Dill 2016; Heinen, van Wee, and Maat 2010). I hypothesized that heart rate variability (low heart rate variability is a marker for a stressed psychological state) would be inversely associated with survey-based measures of stress from bicycling experiences. I focus on evaluating this hypothesis and exploring the following research questions:

1. Does heart rate variability have a consistent relationship with survey responses indicative of "stressful" bicycling?
2. Do heart rate variability and survey response relationships vary by road environments and type of stressor?
METHODS
Using data from a prior cross-over bicycling field experiment (see Fitch et al. (2020) for details), I examine data from 20 female college undergraduates who bicycled on five road conditions, answering eight survey items about stress with ordered categorical responses for each road condition (Table 1). Each participant rode at a comfortable pace with their usual bicycle on five flat road conditions, with rests in-between, during times of low heat and low wind (to avoid physical exertion). Because the survey data was collected after each participant's ride on each road condition, HRV is aggregated over the duration of each road condition (i.e., not as a metric for detecting "moments of stress"). The limitation of this approach is that a participant's HRV is reflecting stimuli of all kinds, not just threat-type stressors, adding to measurement noise.
Table 1. Survey items include: "How would you classify your bicycling ability? By ability we mean your balance, steering, and general technical control of your bicycle."; "I am a cautious bicyclist"; "When bicycling, I always keep a watchful eye on cars".
The data at the road condition level have a total sample size of 792 ratings (one participant failed to ride the correct route for one condition, which resulted in missing data for 8 ratings). The primary predictor of interest is the standard deviation of high-frequency filtered inter-heart-beat intervals, also called high-frequency heart rate variability (HF-HRV). I extracted the HF-HRV from inter-heart-beat intervals using the maximal overlap discrete wavelet transform (MODWT), where frequency filter parameters vary based on person-specific inter-heart-beat time series, making the metric a person-scaled, standardized measure of HF-HRV (see Fitch et al. (2020) for details about signal processing). HF-HRV is a commonly used marker for psychological stress due to its close connection to vagal tone, the physiological basis of all current theories linking HRV to psychology (Laborde, Mosley, and Thayer 2017). The important moderators of interest are the road condition levels and the survey items. I selected four additional covariates in this analysis based on the prior study; they include average speed, to adjust for physical exertion's influence on the heart, and survey measures of bicycling vigilance, ability, and desire, which adjust for some person-level differences in stress response to on-road bicycling.
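The sketch below computes an HF-HRV-style summary, the SD of band-pass filtered inter-beat intervals, using the conventional 0.15-0.40 Hz high-frequency band. It is a simplified stand-in for the paper's MODWT-based, person-specific filtering, not a reimplementation of it, and assumes a recording long enough for stable filtering.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import butter, sosfiltfilt

def hf_hrv(rr_ms: np.ndarray, fs: float = 4.0) -> float:
    """SD of high-frequency (0.15-0.40 Hz) filtered inter-beat intervals.

    rr_ms: successive inter-heart-beat intervals in milliseconds.
    A simplified band-pass stand-in for the paper's MODWT extraction.
    """
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # evenly resample at fs Hz
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
    sos = butter(4, [0.15, 0.40], btype="bandpass", fs=fs, output="sos")
    hf = sosfiltfilt(sos, rr_even - rr_even.mean())
    return float(np.std(hf))
```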
To examine how HF-HRV influenced post-ride stress ratings, I estimated a Bayesian multilevel (by person and survey item) ordered logistic regression model using the R package brms (Bürkner 2017), an interface to the Stan computing language (Stan Development Team 2018) (see the supplemental material for model details). I ensured that the model converged, that no Stan diagnostic warnings occurred, and that the model was regularized to guard against overfitting by using "weakly informative" priors on all parameters (McElreath 2020), selected from prior predictive visual plots.
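For readers without the Stan/brms toolchain, the sketch below fits a plain single-level, frequentist ordered logistic regression to synthetic data. It deliberately ignores the multilevel structure and priors of the actual model, and the data-generating slope and thresholds are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 792
hf_hrv = rng.normal(0.0, 1.0, n)                     # person-scaled HF-HRV
latent = -0.4 * hf_hrv + rng.logistic(size=n)        # assumed negative slope
rating = np.digitize(latent, bins=[-2.0, -0.5, 0.5, 2.0])  # 5 ordered levels

df = pd.DataFrame({"hf_hrv": hf_hrv})
endog = pd.Series(pd.Categorical(rating, ordered=True))
fit = OrderedModel(endog, df[["hf_hrv"]], distr="logit").fit(
    method="bfgs", disp=False)
print(fit.params)        # slope on HF-HRV followed by threshold parameters
```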
FINDINGS
The model results are compatible with my original hypothesis: people who had high HF-HRV while bicycling were less likely to agree with statements of stressful experiences on average (Table 2, and negative slopes in Figure 1), but the uncertainty in the mean effect makes the evidence unconvincing. Person-level variation in this relationship is small (Table 2, slopes of thin lines in Figure 1), especially compared to person-level variation in ratings on average (Table 2).
The variation in the relationship between HF-HRV and survey-based stress ratings by road condition and survey item is also small (Figure 1), and most of that variation is imprecisely estimated by the model ( , Table 2). As with the person-level variation, the moderating effects of road condition and item on ratings ( , Table 2) are much greater than their moderation of the relationship between HF-HRV and ratings. The lack of item-level moderation of the relationship between HF-HRV and ratings suggests HF-HRV did not help distinguish types of stressors or types of stress perceptions.
The moderating effects of the interaction between road condition and survey item on the relationship between HF-HRV and ratings are, again, small ( , Table 2). Overall, the general trend of low HF-HRV predicting greater agreement with the stress statements is mostly consistent across road conditions and survey items (negative slopes in all panels of Figure 1), albeit with great uncertainty.

human-environment interactions? While physiological markers attempt to measure pre-cognitive (or sub-conscious) stress responses, they probably should be strongly associated with conscious evaluations of stress to be deemed valid stress markers. Given existing bicyclist stress studies, including this one, evidence of this connection is weak. While this study attempted to control (both experimentally (see Fitch et al. (2020)) and statistically) for important confounds, the potential confounds to consider in such experiments are far more numerous (Laborde, Mosley, and Thayer 2017; Ausri and Bigazzi 2021). The differences in HF-HRV found between road conditions in Fitch et al. (2020) may have less to do with psychological stress and more to do with other factors. These findings suggest that HRV and other physiological markers of bicyclist stress need more measurement validation, by including survey-based measures of stress in tandem with physiological variables in experiments with stronger controls for potential confounds.
supplementary materials | 2021-10-19T15:47:38.828Z | 2021-09-24T00:00:00.000 | {
"year": 2021,
"sha1": "9cc7de1b518b7b44b2cded7ed5bf3e150b96f64e",
"oa_license": "CCBYSA",
"oa_url": "https://findingspress.org/article/28138.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b35382158efc745670d9afe65d9a13f9a0bc0de7",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236933335 | pes2o/s2orc | v3-fos-license | Horses show individual level lateralisation when inspecting an unfamiliar and unexpected stimulus.
Animals must attend to a diverse array of stimuli in their environments. The emotional valence and salience of a stimulus can affect how this information is processed in the brain. Many species preferentially attend to negatively valent stimuli using the sensory organs on the left side of their body and hence the right hemisphere of their brain. Here, we investigated the lateralisation of visual attention to the rapid appearance of a stimulus (an inflated balloon) designed to induce an avoidance reaction and a negatively valent emotional state in 77 Italian saddle horses. Horses' eyes are laterally positioned on the head, and each eye projects primarily to the contralateral hemisphere, allowing eye use to serve as a proxy for preferential processing in one hemisphere of the brain. We predicted that horses would inspect the novel and unexpected stimulus with their left eye and hence right hemisphere. We found that horses primarily inspected the balloon with one eye, and most horses had a preferred eye for doing so; however, we did not find a population level tendency for this to be the left or the right eye. The strength of this preference tended to decrease over time, with the horses increasingly using their non-preferred eye to inspect the balloon as the trial progressed. Our results confirm a lateralised eye use tendency when viewing negatively emotionally valent stimuli in horses, in agreement with previous findings. However, there was no alignment of lateralisation at the group level in our sample, suggesting that the expression of lateralisation in horses depends on the sample population and testing context.
Introduction
Animal brains exhibit anatomical and functional asymmetries [1,2]. This feature is widespread among both vertebrate and invertebrate taxa, suggesting adaptive significance [3]. The specialisation of the right and left hemispheres for different tasks is especially evident in the lateralisation of the sensory systems, including sight, audition, and olfaction [4][5][6][7]. The left hemisphere has been implicated in the analytic categorization of stimuli, while the right hemisphere often takes precedence when responding to threats, during the detection of and escape from predators, and for emotional evaluation [8][9][10]. Thus, the right hemisphere seems to dominate in emotional attention [11], wherein a strong reaction is triggered by the stimulus which attracted the animal's attention [12].
Historically, research on lateralisation has focused on humans [13] and non-human primates [14,15], partly due to the mistaken belief that other animals did not show lateralised functioning of the brain [16]. Over the past three decades, studies on functional lateralisation have been extended to other species [17][18][19][20]. These studies have largely focused on bird [21] and fish models [22] because these animals have laterally placed eyes with monocular fields that project primarily to the contralateral hemisphere, allowing hemisphere use to be estimated by measuring eye use [23]. More recently, there has been a significant increase in studies on lateralisation in ungulates, in part because of the presence of similar anatomical advantages [24]. Among ungulates, the species that has undoubtedly been most closely examined is the horse (Equus caballus), likely owing to its recreational and social role in human society [24]. As with other ungulates, horses' eyes are positioned on the sides of the head and horses see the world primarily through monocular vision. Their monocular visual field has a mean range of 190˚-195˚ [25,26]; therefore, horses must be able to process visual information, including monitoring for potential threats, from each monocular field. The visual system of the horse shows extensive decussation of the afferent sensory fibres to the contralateral hemisphere [27,28]. Collectively, these characteristics make the horse a good model for investigating sensory lateralisation in mammals.
Threatening stimuli are primarily attended to with the left eye in many species of fishes, amphibians, reptiles, birds, and mammals [29,30]. In accordance with this, horses show a tendency to use the left eye when examining unfamiliar objects. In contrast to the aforementioned support for the predominance of the left eye and right hemisphere for processing negatively emotionally valent stimuli, there have been several reports of contradictory results [24] in a diversity of animal species [20,37-39], including horses [4,33,34,40-47]. These apparently incongruous findings may be attributable to variation in motivation or attention in the animals, which may affect the expression of sensory laterality [8,48]. In horses, aspects of management, such as training and handling history, may also influence the lateralisation of behaviour [31,35].
Horses have a wide monocular field of vision, and evaluating their current visual focus is not straightforward [26]. The scoring of eye use, whether in person or on video, may be susceptible to perspective errors due to the positioning of the camera or observer, which could affect the attribution of the eye used to examine the stimulus. If the observer is not directly in line with the stimulus and the body axis of the animal, the apparent eye use may be difficult to assign, potentially increasing variation [49][50][51]. Many published studies on horses do not provide a clear description of how such perspective errors were avoided.
Another variable which may affect the degree to which the right hemisphere appears to control the response to stressful stimuli is the nature of the stimulus used to induce a negative emotional state. Many studies assume a stress response is induced following the exposure to a noxious stimulus [52] or define a stimulus as aversive a priori and interpret the response as a stress reaction [53]. Each of these approaches could incorrectly classify a stimulus as a stressor and therefore, it has been suggested that the word "stressful" should be reserved for uncontrollable and/or unpredictable stimuli [53].
In the current study, we sought to examine lateralised visual inspection in horses confronted with an emotionally arousing stimulus. We examined whether horses inspected a novel, uncontrollable, and unpredictable stimulus appearing in a familiar environment preferentially with one eye over the other. Since the left eye/right hemisphere may be specialized for responding to novel or threatening stimuli, and for expressing intense emotions, we expected the horses would display a left eye (and hence right hemisphere) bias for inspecting the stimulus [30].
Animals
Ninety-eight Italian saddle horses were exposed to a novel visual stimulus in a familiar stall [54,55]. The horses came from seven different stables in the Piemonte Region of Italy. Based on the quality of the video recordings, 77 horses (30 geldings and 47 intact females, aged 4 to 24 years) from the total sample were selected for the present study. The horses were stabled in individual stalls (ranging in size from 9 to 12 m²) and had paddock turnout at least three times per week. They were fed hay twice per day and concentrates three to four times per day. Water was available ad libitum. The horses were not provided with toys (plastic bottles, balls, etc.) in the stall [54].
Stimulus presentation
In animals, an acute emotional response and avoidance reaction can be induced by the sudden appearance of an unfamiliar stimulus [56]. In the current study, a yellow balloon (13 cm when uninflated and about 25 cm Ø when inflated, see S1 Fig) was remotely inflated near the horse inside of a familiar stall to introduce a mild, acute, unpredictable, and uncontrollable stimulus. The stimulus presentation apparatus consisted of a metal panel (40×40 cm) with a hole (3 cm Ø) in the centre, concealed by two small flaps. One end of a compressed air hose (10 m long) was inserted into this hole and an inflatable rubber balloon was attached to it, while the other end was connected to a canister of compressed air. The apparatus allowed us to rapidly inflate the balloon which the horses were not accustomed to and could not predict or control.
A webcam (QuickCam Pro 9000, Logitech, Lausanne, Switzerland) was positioned inside a second opening in the balloon presentation apparatus, 15 cm above the balloon release opening, and connected to a laptop computer (K42J, ASUS, Beitou District, Taipei, Taiwan) positioned 10 m away, out of sight of the focal animal. The webcam was directly aligned along the vertical axis of the inflated balloon. Therefore, the horse was recorded from the viewpoint of the balloon, avoiding any perspective errors that could be introduced by filming from an off-centre vantage point. We analysed the video recordings using Kinovea 0.8.15 (https://www.kinovea.org/).
Each horse was acclimated to the presence of the apparatus in the stall for 5 minutes before the onset of the trial. The experimenter then opened the compressed air valve from 10 m away, causing the balloon to inflate and suddenly appear. Each balloon was inflated with 2 bar of air pressure over 6 s of inflation and remained inflated for 5 minutes. We conducted trials in the morning between 09:00h and 12:00h, at least 1 h after feeding (for further details see references [54] and [55]). The horses were unrestrained and free to move around the stable during the testing period. No persons were in view of the horses for the duration of the trial (10 minutes: 5-minute acclimatization plus 5-minute test).
These videos have previously been used to analyse how age affects the reactivity toward an uncontrollable and/or unpredictable stimulus [54] and to examine displacement behaviours [55] in horses. Baragli et al. [54] found an age effect on the reactivity of horses towards the stimulus, with older horses showing less behavioural reactivity. Scopa et al. [55] found that behaviours linked with arousal decreased over the 5 minutes of observation, suggesting habituation to the stimulus. Neither of these previous studies considered lateralisation of behaviour or sensory perception.
Behavioural measures
We measured the amount of time each horse spent looking at the balloon exclusively with one eye (monocular investigation, the time the left or right eye was visible from the balloon's perspective while the contralateral eye was not) over the 5-minute trial. We considered a horse to be engaged in monocular investigation of the stimulus (actively looking) when the auricle was directed toward the balloon (Fig 1), which was positioned in the visual hemifield [57,58]. We also recorded eye use separately for each minute-long interval during the trial, allowing us to investigate whether eye use preferences changed over time.
For each horse, we calculated a laterality index (LI = [right - left]/[right + left]) for the duration of time spent investigating the balloon with each eye. Horses that spent more than 75% of their monocular inspection time observing the balloon with one eye over the other (LI > 0.5 = right eye preferred; LI < -0.5 = left eye preferred) over the course of the 5-minute trial were considered to show an eye use preference [59]; a small code sketch of this classification follows below.
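To make the classification rule concrete, the sketch below computes the laterality index and the preference categories in R. The paper's analysis used SPSS, so this is only an illustrative translation; the data frame horses and its columns right_s and left_s (durations in seconds per eye) are hypothetical.

```r
# Illustrative laterality index computation; column names are hypothetical.
horses$li <- (horses$right_s - horses$left_s) /
             (horses$right_s + horses$left_s)

# LI > 0.5 (i.e., >75% of monocular time on the right) = right eye preferred;
# LI < -0.5 = left eye preferred; otherwise no clear preference.
horses$pref <- ifelse(horses$li > 0.5, "right",
               ifelse(horses$li < -0.5, "left", "none"))
```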
Horses were free to move in their stall prior to and during the trial. Consequently, the stimulus could initially appear on either side of the horse, depending on which direction the horse was facing when the balloon was inflated. We classified the starting position of each horse based on which side of the animal was directed towards the balloon apparatus at the onset of inflation.
Data analysis
After classifying the horses as right or left lateralised based on the laterality index, we compared the number of left lateralised to the number of right lateralised horses using a binomial test. For each horse, we examined the tendency to use each eye to inspect the balloon using a linear mixed model. Eye use was treated as a within-subjects repeated measure (duration of monocular inspection on the right, duration of monocular inspection on the left). We included the 2-way interactions between sex, age, and the starting position of the horse at the beginning of the trial with eye use as fixed factors in the model to examine whether these factors affected the tendency to use the left or right eye. We included subject identity and home stable as random factors. The duration of inspection with each eye was Log10 transformed prior to analysis to account for the positive skew of the data. We inspected the model residuals for departures from normality using a Q-Q plot. To examine the effect of trial time on eye use preferences, we used a linear mixed model including horse as a random factor and the minute of observation (1-5) as a repeated measure. We examined both the directional preference (laterality index = [right - left]/[right + left]) and the absolute strength of that preference (the absolute value of the laterality index) across time. We carried out the analysis using SPSS 26 (IBM, USA) for Macintosh (MacOS version 10.15.7); an approximately equivalent model specification in R is sketched below.
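For readers working outside SPSS, the first mixed model could be approximated as follows in R's lme4. This is an assumed translation, not the authors' analysis: the long-format data frame eye_long and its columns (duration_s, eye, sex, age, start_side, horse, stable) are hypothetical, the +1 offset is an assumption added to handle zero durations, and lme4's treatment of repeated measures differs in detail from SPSS's MIXED procedure.

```r
# Hedged lme4 analogue of the eye-use mixed model described above.
library(lme4)

m <- lmer(
  log10(duration_s + 1) ~ eye * (sex + age + start_side) +  # eye-by-covariate interactions
    (1 | horse) + (1 | stable),                             # random intercepts
  data = eye_long   # one row per horse x eye; hypothetical long format
)
summary(m)
```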
Ethical note
This study was carried out in accordance with the EU Directive 2010/63/EU for animal experiments (adopted by the Italian Animal Care Act, decree Law 26/2014). The Ethical Committee on Animal Experimentation of the University of Pisa approved the experimental design (Prot. N. 0033937/2018). Consent to participation in the test was signed by the owner of each horse.
Results
Horses spent a mean (± SEM) of 48.27 ± 4.38 s with a single eye visible and 9.36 ± 1.56 s with both of their eyes visible from the perspective of the balloon. While inspecting the balloon with one eye, 56 out of 77 horses preferred to use one eye over the other (Fig 2). Of those 56 horses, 30 showed a preference for the right eye and 26 for the left eye, which did not significantly differ from chance (two-tailed binomial p = 0.69).
Across the horses, there was no significant effect of eye (right versus left) on the amount of time spent examining the balloon with one eye (F(1, 148.66) = 0.53, p = 0.47; Fig 3). Age (F(1, 73) = 0.26, p = 0.77), sex (F(1, 73) = 0.24, p = 0.79), and the starting position of the horse (F(1, 73) = 0.43, p = 0.65) did not significantly interact with eye preference, and therefore did not predict the tendency to use the right versus the left eye to inspect the balloon.
There was a significant effect of time on eye use preference (F(4, 90.78) = 3.34, p = 0.012; Fig 4A), suggesting the horses' response to the balloon changed across the trial, although there was no consistent pattern across individuals. Looking at the absolute magnitude of the preference for one eye over the other, irrespective of direction, horses inspected the balloon significantly more with their non-preferred eye as the trial progressed (F(4, 134.20) = 6.86, p < 0.001; Fig 4B).
Discussion
We found that horses spent most of their time inspecting an unexpected novel stimulus (an inflated balloon) with one eye at a time rather than using their binocular visual field. Most horses showed a clear preference for one eye over the other; however, contrary to prediction, the left eye (projecting predominantly to the right hemisphere) was not more likely to be used, and similar numbers of horses showed a preference for the left and right eyes. These preferences changed over the course of the trial, with the preference for one eye over the other being strongest in the first minute of inspection and a gradual increase in the use of the non-preferred eye over the course of the 5 minutes of observation.
Lateralisation of neural function appears to be a universal feature of nervous systems [60,61]; however, the tendency for the direction of lateralised function to be aligned at the population level is highly variable between species and even populations [62][63][64][65]. The benefits of lateralised processing in the brain do not depend on population level consistency in the direction of asymmetry, and in fact consistent alignment at the population level may have costs in terms of behavioural predictability or exploitable sensory biases [66]. Social species may still be under selection for population level lateralisation if that consistency increases coordination between social fellows [67], which in turn may improve socially dependent antipredator strategies such as herding or schooling [62]. Theory predicts that social living prey species ought to show population level alignment for the detection of threats, while less social species tend to show individual level lateralisation only [68,69]. Some previous data on horses support this prediction [33,35,45]; however, our results suggest that this may depend on the context, methodology, or even the specific population under study [see also: 31,32,34,36,43,47,70].
Stress response tests have been widely used to evaluate emotional responses in animals, since the expression of negative emotions (when a stimulus induces a negative and unpleasant state) [71] is usually well-defined and hence easier to observe than positive emotional expression [72]. Stress response tests have been widely employed in the study of functional lateralisation [31,44,73]; however, these tests have yielded conflicting results [74]. This lack of consistency is a general phenomenon in the investigation of emotional states and is not unique to studies of sensory laterality [72]. Emotion is a subjective experience, and therefore, individual variation in the reaction to a particular stimulus is to be expected [75]. The subjective perception of a stimulus will determine the degree to which it acts as a stressor [76] and how it is approached by the animal. Individual variation in the evoked motivation or emotional response towards a given stimulus could explain variation in the lateralisation of visual investigation of that stimulus [77,78]. For example, if some horses found the balloon frightening, while others found it interesting in a more neutral or even positive way, then we may expect to see individual variation in the direction and strength of laterality among animals, fitting with the patterns we observed, even in the presence of a strong underlying population level lateralisation for investigating negatively valent stimuli. The horses in our study spent approximately 20% of the trial visually inspecting the balloon on average, although this was highly variable between individuals. This relatively brief period of visual inspection could suggest that at least some individuals did not find the balloon to be as emotionally arousing as we had expected. Just over half of our sample showed avoidance behaviours, moving rapidly away from the stressor [54]. In the concept of stress proposed by Koolhaas et al. [53], a stimulus can be defined as a "stressor" when it is uncontrollable and unexpected, thus causing arousal which remains even when the stimulus is removed, suggesting that the animal may have perceived the stimulus as a threat. In our study, the inflated balloon meets these a priori requirements (unpredictability and uncontrollability) of a stressor. However, we did not have an
independent measure of the stress response in tested horses (e.g., a physiological measure), and therefore it is possible that the balloon did not induce stress, or was perceived as stressful by some but not all horses.
Our current results are not necessarily at odds with previous studies that have found population level consistency in the direction of laterality among horses. Our sample size was large and heterogeneous, and despite rigorous control of experimental procedures, the differing backgrounds of the animals included in our study may have led to greater variation in the direction of lateralisation exhibited. Studies carried out with a smaller number of subjects drawn from a homogenous population may be more likely to show lateralisation at the population level due to a more uniform response to the stimulus. Consistent handling, management, and/or training history could increase consistency in sensory laterality at the population level.
Horses are conventionally trained and handled from their left side, including leading, saddling, percutaneous injections, and other common procedures [35,79]. This prior experience could lead horses to react differently when a stimulus appears on their left side compared to their right side. This domestication bias has been suggested as a possible explanation for some population level asymmetries in horses [31]. Recently, however, it has been shown that initial training does not affect sensory lateralisation [79]. The lack of population level consistency that we observed supports this contention. Furthermore, it has been suggested that sensory and motor laterality may be strengthened with age because of training [80], while other authors suggest that training may reduce the natural emotional asymmetry to human approach shown by horses [35]. The horses tested in our study were all trained from the left side, per conventional stable management practice, and we did not detect any effect of age across a range of 4-24 years, suggesting that training history did not have a major effect in our sample. We tested each horse only once, raising the question of how repeatable the eye use biases we observed would be within animals across time [81][82][83]. We found significant changes in the pattern of eye use across the 5 minutes of observation; notably, the horses used their non-preferred eye to inspect the stimulus more often as the trial progressed. Whether this trend would continue as the balloon became increasingly familiar is an open question, but we predict that eventually the horse would largely cease to attend to the balloon as it became clear that it was not a threat, and any further visual inspection would not be biased to one eye over the other. Due to the importance of emotional state in lateralised processing, and the effects of habituation, a direct retest using the same setup is unlikely to be especially informative. Future work should aim to test eye use preferences across multiple contexts using different emotionally valent stimuli to determine the within-individual consistency of these characteristics across contexts and over time.
We tested the horses in their home stalls, which were between 9 and 12 square metres. Although offering plenty of space to move around, this confinement may have affected our results by limiting the ability of horses to escape from the stimulus, something that 57% of the animals nevertheless attempted to do [54]. To our knowledge, only one paper has examined the influence of space constraints on laterality in mammals: Zucca et al. [84] found that space availability affected variation in laterality strength and direction in donkeys. When coping with a potential threat in a restricted space, animals should quickly evaluate the stimulus to determine the potential danger, regardless of the side of inspection, which may suppress population level lateralised tendencies.
Conclusions
In the current study, we found that individual horses responded to a novel and unexpected stimulus by examining it primarily with one eye. Across horses, the preferred eye (right or left) was not consistent, nor did sex or age affect the preferred eye. This individual variation suggests that training history is not the predominant causal factor in determining visual laterality in the horse. Horses showed a decreasing preference for using one eye over the other to inspect the stimulus as the trial progressed, suggesting that the horses may have habituated to the stimulus. Our study suggests that in at least some testing contexts, horses show individual but not population level laterality. We highlight the necessity for carefully designed observation methods in which data are recorded from a vantage point in line with the stimulus and the midline of the animal to avoid perspective errors in eye use data collection. We also suggest that stimuli assumed to act as stressors may not always do so for all sampled individuals, and ideally should be verified with physiological data in future work. | 2021-08-07T06:18:10.724Z | 2021-08-05T00:00:00.000 | {
"year": 2021,
"sha1": "a981a8918a6ffec022552407fbc48d44f5d37f54",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0255688&type=printable",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d63f160622580dda175a5f316a0febe535356a1c",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265585717 | pes2o/s2orc | v3-fos-license | Be sealed with the Holy Spirit: Behind the metaphor in Ephesians 1:13
God's chosen people. Contribution: This article seeks to contribute to the ongoing challenges that Indonesian Christians face in manifesting their unity because of their diverse cultural or historical backgrounds as part of a formerly colonialised nation, especially those who are underprivileged and live in rural areas. By recognising that God has redeemed them, endowed them with the Holy Spirit, and united them with each other, they are freed from various status bondages, especially as a minority group among the largest Muslim population in the world.
Introduction
The Epistle to the Ephesians is one of the most significant documents ever written. As O'Brien states, according to Coleridge the Epistle is 'the most divine composition of humans', while Robinson mentions it as 'the crown of Pauline writings' (O'Brien 2004:1).
Secondly, the study explores the meaning of 'seal' as a metaphor with the Cognitive-Linguistic Metaphor Theory as the framework, which explains that metaphor is a language form that people use to convey meanings by connecting two separate domains into one relationship. The first domain is a concrete, material, or shared experience as its foundation, while the second is an abstract concept. Certain aspects of the metaphor's concrete domain point out the main concepts in the target one. Thus, metaphor is a method to help people understand abstract ideas based on their concrete experiences. The latest studies explore the sentimental, emotional, and individual experiences that the recipients of a metaphor are familiar with. Thus, the study explores how the metaphor 'seal' gives clues about the recipients' life.
Thirdly, based on the two previous steps, the study continues to delve into the metaphor 'seal with the Spirit' of Ephesians 1:13. It explores the relationship between the term 'seal' and 'with the Spirit', as well as other teachings of the Epistle, primarily in relation to unity and the features of the diverse recipients. In particular, the study explores the life of slaves and the Gentile Christians and whether the metaphor 'seal' evokes a certain emotion for them. Thereafter, this study combines the results of the analyses in relation to the hypothesis.
Results
The nature of the Epistle to the Ephesians Many studies have delved into the nature of the Epistle, its authorship, the context of the recipients, and the main messages of Ephesians.In the early church era, the Christians were familiar with this Epistle and accepted its authority as they knew the author and accepted his or her authority (Wolfe 2006).O'Brien also states that the Church Fathers such as Clement of Rome (95 CE) pointed out the writer.Ignatius, Polycarp, and Irenaeus also stated that the letter is the Epistle of Paul.Even Marcion mentioned the Epistle as related to Paul, but he writes it for the Christians in Laodicea.Then, in the Canon Muratorian in 180 CE, the church leaders classified it as one of Paul's epistles (O'Brien 2004:4).
The first verse of this Epistle is Παῦλος ἀπόστολος Χριστοῦ Ἰησοῦ διὰ θελήματος θεοῦ τοῖς ἁγίοις τοῖς οὖσιν ἐν Ἐφέσῳ καὶ πιστοῖς ἐν Χριστῷ Ἰησοῦ [King James Version: Paul, an apostle of Jesus Christ by the will of God, to the saints which are at Ephesus, and to the faithful in Christ Jesus], which contains the phrase ἐν Ἐφέσῳ [meaning 'in Ephesus']. Historically, there are more than 16 textual variances in the manuscripts of Ephesians 1:1-12. The phrase 'in Ephesus' is absent from at least five important Alexandrian texts and the manuscripts mentioned by Origen and Basil, an issue that might cause people to doubt that the Epistle is addressed to the Christians in Ephesus. However, many experts such as Hoehner state that 'transcriptionally the omission of the phrase "in Ephesus" creates a grammatical anomaly; since the use of οὖσιν has no predicate' (Hoehner 2002:140). Wintle and Gnanakan also state that Irenaeus, Tertullian, and Clement of Alexandria regarded the letter as addressed to the Ephesian church, whatever the content of its first verse (Wintle & Gnanakan 2006:2).
One probability is that the recipients of the Epistle are the Christians in Ephesus who lived among the non-Christian majority of the city with their Greco-Roman culture. Paul spent 2 years ministering there and established a church after he left Troas in Asia Minor, as observed in Acts 18-20. In Acts 19, Paul had encounters with the Jewish as well as the Greeks after 3 months of being there. Ephesus was the second largest centre of civilisation in the Roman era and was known as one of the prominent harbours, with 2.5 million people (Mark 2009). The city was a cosmopolitan hub that functioned as a significant trade centre, a centre of god and goddess temples, a destination for learning philosophy and culture, and a society where magic, superstition, and Gnosticism flourished (Arnold 2001:4, 167).
Other studies view the Epistle as a circular letter, written around the same time as Colossians and Philemon for a wider group of churches, probably in the vicinity of Ephesus, possibly including those in the Lycus valley areas (Lightfoot 1993). Scholars also use Ephesians 6:21 to state that it is a circular letter written by Paul for the churches in Ephesus, Laodicea, Colossae, and the surrounding regions, and that one version of the circular was sent especially for the Christians in Ephesus who were not Jewish in their background (Conybeare & Howson 2012). Then, there is also a study that mentions that this letter is a farewell message of Paul to many Christians (Wintle & Gnanakan 2006:3).
Whatever theory is plausible, the studies agree that the author of this Epistle addresses both the Jewish as well as the Gentile Christians who lived in the middle of a gentile culture. Furthermore, the author of the Epistle also pays attention to the differences in the social classes of those Christians, such as free persons and slaves.
Since the 17th century, some scholars have been doubting the identity of Paul as the writer of this Epistle (Mollica 2007:4). They point out that the writer of the Epistle does not show any close relationship with the recipients even though he had lived in that area for at least 2 years (Conybeare & Howson 2012). Others who doubt Paul as the writer of this Epistle base their argument on four factors. Firstly, there is no word 'Ephesians' in the manuscript (Lincoln 1990). Secondly, the theological concepts do not fit with those of Paul, but more with the later writers. Thirdly, the theology is more abstract and repetitive (Wolfe 2006). Lastly, the language expression is more like the Epistle to the Colossians, which proposes a universal ecclesiology (Lincoln 1990:Lx-Lxviii).
Today, modern studies of the Epistle point out that it may have been composed by Paul's friends, disciples, or collaborators; thus, it can be called a pseudo-epigraphical Pauline letter. However, sufficient continuity with the messages in the authentic Pauline letters is evident to warrant a conclusion that some of the followers of Paul who wanted to continue his work and spread the messages wrote the Epistle (Talbert 2007:11). There is also an interesting view that states the Epistle is the work of Paul together with his friends (Johnson 1999:409).
O'Brien summarises that there are at least two streams of views concerning the Epistle's purpose. The first one agrees that the writer of this Epistle provides answers to specific problems of the recipients, while the second one states that the Epistle conveys no specific or single issue (O'Brien 2004:77).
In the first group, for example, a book written by Lindemann states that their main problem is persecution (Lindemann 1985). Then, Ralph Martin mentions that the recipients have to deal with the problem of the interrelationship between the Gentile Christians who hate the Jewish Christians (Martin 1978:224). Weedman also mentions that the Epistle is to help the early Christians abandon the view which separates the Jewish from the Gentile Christians (Weedman 2006:81). Furthermore, Arnold states that the Epistle teaches the recipients to deal with the worldly and supernatural powers that surround them (Arnold 1989). In short, many of them might have lived with various feelings of insecurity, as magic and witchcraft, including astrology and goddess worship, bothered them as new Christians (Arnold 2001:188).
In the second group, the scholars agree that the Epistle does not explicitly indicate whether there is a specific or single issue. For example, Mollica, with many newer scholars, also points out that the Epistle deals with a general purpose such as identity formation rather than correcting or addressing a specific situation or problem (Talbert 2007:14).
With the two possibilities, this study delves into the structure and contents of this Epistle to obtain a coherent interpretation in the process of exploring the meaning of the metaphor 'sealed with the Spirit', its recipients, and the author's reason for using the metaphor.
Seal as metaphor
Why does Ephesians 1:13 contain the metaphor instead of direct language? Metaphor derives from two words: meta, meaning a change, and pherein, which means to carry or to bear. According to Aristotle, the most remarkable thing is to master metaphors. It signifies our ingenuity. Good metaphors show an intuitive perception of similarities in different things (Barnes 2009:424).
In the 20th century, George Lakoff and Mark Johnson promoted a new framework to understand metaphors called the Cognitive-Linguistic Metaphor Approach. It proposes that each metaphor consists of two domains. The first domain is named the concrete or source domain. It consists of something that most people know. The second domain is called the abstract or target domain. The concrete domain highlights features or aspects of the abstract domain to convey the concept emphasised by the metaphor. Thus, the concept is hidden or embedded (Lakoff & Johnson 1980).
The theory also indicates that if one person uses a metaphor, the recipients have a certain space to respond to the embedded message or concept based on their own concrete experience in daily life or background. Based on the familiarity with the recipients' concrete experience, someone purposely uses the metaphor to allow them to give a deeper response as they interpret the concept based on both their knowledge as well as feelings. Metaphor can trigger deeper understanding while at the same time giving space for the recipients to develop their own comprehension and emotional response. Therefore, metaphor as a communication tool is very powerful as it touches the recipients deeply both in their cognitive and affective dimensions.
The functions and significance of metaphor show in its capabilities. Firstly, a metaphor either isolates certain characteristics, emphasises specific features, or conceals certain aspects of something known to convey the embedded meaning to the abstract domain (Mayer-Schoenberger & Oberlechner 2002).
Secondly, the study of Mohammad, Shutova, and Turney emphasises that metaphor has a stronger emotional impact than literal language. A word used as a metaphor carries more emotion than the literal sense of the same word (Mohammad, Shutova & Turney 2016:31). Li, Guerin, and Lin also agree with this finding (2022). Gibbs also points to the same view (Gibbs 2008). For example, a shield as a metaphor evokes more emotional response in recipients who are soldiers, as their lives are closely related to the instrument especially during the battles that they are engaged in, compared with recipients who are painters. The metaphor 'shepherd' will affect people in farming contexts more than urban ones. Therefore, the metaphor indicates the author's intention and the main characteristics of the recipients' social status, psychology, or religious background.
The use of metaphor in the Epistle shows that the author realises that direct language is not sufficiently strong to convey the message. It might also indicate that there are many and diverse groups of recipients. As the metaphor is intended to evoke certain kinds of sentiments, with high probability, its use in the Epistle indicates the writer's knowledge of the recipients' stronger emotional ties to it compared with others. Who are they? To answer the question, the concrete and abstract domains of the metaphor 'seal' need analysis.
A seal as a metaphor has an embedded teaching or message based on the concrete experiences of people.
Today, in general, the concrete domain of a seal is as follows:
• A seal can be made to mark various elements or substances such as skin, paper, wood, clay, and others.
• There are various forms of seals such as circles, hexagons, or others.
• There are varieties in the colour of the seal.
• A seal is used by the owner or the representatives.
• No two seals are the same; each is unique or even exquisite.
• A seal warns people not to open a door or enter a territory, and keeps the content of a bottle unspoiled, such as with wine. It is a protection.

Based on such a concrete domain, the abstract domain consists of:
• Regardless of the many forms and colours of the seal, all have the same embedded legal or social power.
• A seal points out the ownership of an animal, building, vehicle, territory, or slaves, while the owners view them as property.
• A seal indicates protection by the owner, as she or he protects the asset.
• The identity of the recipients that have been marked is fixed.
The metaphor 'seal' in the Bible
In The Dictionary of Classical Hebrew, the word 'seal' (חתם) is found some 42 times in the Old Testament (Clines 1996:3, 43-44). Pope states that the word is a loan from Egyptian culture. There, a seal could take the form of a signet ring or could be worn around the neck or wrist. Seals were made of precious and semi-precious metals and stones, elaborately and exquisitely engraved, and therefore became a person's most valuable possession (Pope 1977:666). The owner used the seal to create a lasting mark on clay, skin, wax, or other substances.
One of the meanings of being sealed is found in the Book of Daniel. The sentence in Daniel 6:17, 'A stone was brought and placed over the mouth of the den, and the king sealed it with his own signet ring and with the rings of his nobles', shows that Daniel's situation might not be changed (Murdy 2008:1-2). It indicates that 'sealed' means a way to lock or close and prevent a change from taking place (Arndt & Gingrich 1957:140). Here, sealed can mean prevention of the cancellation of a decision.
Then, Goodwin points out that a seal often relates to inheritance, as it has at least two functions. Firstly, it gives assurance that the heritage is authentic and legally true. Secondly, it gives affirmation that the heritage is truly given to the person (Goodwin 1958:231).
Furthermore, Ferda states that the rabbinic literature makes clear that 'seal' was a common metaphoric description of circumcision (Ferda 2012:558). The Jews claim that they are the chosen people with circumcision as the mark or seal. Thus, for the recipients of the Epistle who are Jewish Christians, the metaphor 'sealed' would have brought circumcision into their minds or even evoked emotional responses such as pride and a positive self-image. Contrarily, for the Gentile Christians, as they did not experience circumcision, the metaphor might trigger dislike towards the Jewish Christians.
Another group of Christians who can have a particular response to the metaphor 'sealed' are slaves. In the Roman Empire, the slave population exploded as Rome quickly conquered its neighbours in the Mediterranean basin (Hopkins 1981:102). In the New Testament, the term used for slave is δοῦλος, while for master it is κύριος. These terms are used most often in the New Testament in describing the relationship of the believer to Christ and vice versa. This fact indicates that slavery was common at that time and that the Christians were familiar with the presence of slaves.
Cognitive Metaphor Theory analysis shows that a seal indicates property, and thus slaves were viewed and treated as property. Meltzer also states that slaves were simply the property of owners who considered them not as fellow humans. There was no legal recourse for a slave when beaten by the owner. Furthermore, slaves could own nothing and inherit nothing (Meltzer 1993:101). Logically, slaves knew that they no longer had a future hope, although abolition might rarely take place.
The identity of slaves could be recognised, as in the Roman Empire inflicting marks on their bodies was a common practice (Kamen 2010). Jones describes that one of the methods was branding the slave's forehead for those who tried to steal or run away (Jones 1987). Another practice was tattooing the slave's body to place a stigma on the individual and claim ownership. Lastly, slaves were forced to wear special collars on their necks (Thompson 2003:238). It is logical that, for slaves, the metaphor 'seal' could trigger negative feelings or hopelessness related to their identity, especially if they experienced the cruelty of their master, and the seals were a constant reminder of their status.
Thus far, the use of the metaphor 'seal' indicates that the intended recipients could be both slaves and people who were familiar with circumcision practices. How does the Epistle use the metaphor?
In the New Testament, the word sphragizo or sphragis (σφραγίζω or σφραγίς) is the translation of the word 'seal'. Lampe in the Patristic Greek Lexicon states that the term sphragizo means 'binding' and 'stating'. Moulton points out that, as a figure of speech or reality, sphragizo or seal can mean 'sealing' or 'covering with a seal' (Moulton 1978). To conclude, Murdy writes that the word 'be sealed' or sphragizo has meanings that most biblical scholars agree on. Firstly, a seal is to authenticate something. Secondly, it is a sign to affirm ownership. The seal then will prevent others from stealing. The seal is also a mark that conveys a warranty (Murdy 2008:1-2). In short, the metaphor 'seal' or 'be sealed' points out the abstract concepts of identification, authentication, ownership, and security or warranty that prevents any cancellation (Philippa 2012:115).
To sum up, in relation to slaves, they have close personal experience with seals, which indicate their low social status identity and lifetime bondage. The Jewish Christians also have a concrete experience with the metaphor 'seal' in relation to circumcision, as it gives a positive feeling because the metaphor strengthens their identity as God's chosen people.
Metaphor of 'sealed with the Spirit' in Ephesian 1:13
The familiarity of the early Christians with the term 'sealed with the Spirit' is evident. A Church Father, Chrysostomus, who lived between 347 and 407 AD, wrote many homilies that contain the term 'sealed with the Spirit'. Ferda mentions that a cardinal of Constantinople related the term 'be sealed' to the word 'circumcised' (Ferda 2012:557).
Modern scholars such as Barth (1974:135-144) and Gnilka (1971:86) point out that the term is related to baptism. Like baptism, the seal indicates a public sign of one's turning-point experience of receiving God's grace in Christ.
In 'sealed with the Spirit' of Ephesians 1:13, the metaphor signifies a couple of things. Firstly, the seal is related to the Holy Spirit. Then, the word 'with' instead of 'by' or 'for' is evident. Perhaps, to understand the meaning deeply, an analysis of the main theological themes and structure of this Epistle is needed.
Concerning the themes, O'Brien states that the Epistle repeatedly attracts attention to the contrast between the former way of life of the Christians and their new life in Christ, as conveyed by the author's repeated use of once-now forms (O'Brien 2004:2). The author describes the former condition of the new Christians by using various terms, among others relating to the concept of bondage to evil. The emphasis is on the contrast between their dark former condition and the unconditional grace of God. In Ephesians 2:4, although there is no explicit use of the word 'now', the author emphasised the contrast: 'But God who is rich in mercy' has acted decisively on behalf of those who were objects of wrath; he has made them alive with Christ, raised them up and seated them with him in the heavenly places (Eph 2:5-6) (O'Brien 2023:190). Based on such analysis, O'Brien proposes that the first half of the Epistle describes the solemnity and the broad sweep of God's majestic saving purposes, while chapters 4-6 as the second part give guidance to live as true believers (O'Brien 2023:70).
Other studies show that Ephesians consists of three parts, each having its own theme: Chapters 1-3 form the first part, and Chapters 4-6 the second and third parts (Merida, Platt & Akin 2014). The first part consists of the teaching of God's grace in Christ. The second part teaches about the Holy Spirit in believers' lives. The third part is about the practice of such grace by manifesting unity in the daily life of the believers.
Then, Lau states that Ephesians describes the ἐκκλησία, the vehicle through which cosmic reconciliation in Christ is manifested. It should appear as a body (σῶμα) united under Christ (1:23; 2:16; 4:4, 12, 16; 5:23, 30), as a biological organism (4:16), as a unified building or temple (2:19-22), and as the bride of Christ (5:25-30). Furthermore, he mentions that Ephesians is rich in σύν- and μετά-prefixed words, indicating union with Christ or other believers (Lau 2009:35-47). Thus, the main theme of this Epistle is unity with God and church unity as the result of the redemption in Christ.
Concerning specifically the metaphor 'be sealed with the Holy Spirit', other scholars such as Nyamiwa point out that Ephesians 1:1-12 can convey a perspective that gives a backbone to the whole of Ephesians (Nyamiwa 2016:1).
As a benediction or berakhah, the beginning is as follows: Verse 1: Παῦλος ἀπόστολος Χριστοῦ Ἰησοῦ διὰ θελήματος θεοῦ τοῖς ἁγίοις τοῖς οὖσιν ἐν Ἐφέσῳ καὶ πίστοις ἐν Χριστῷ Ἰησοῦ. The King James Version with Strong's translation is: Paul, an apostle of Jesus Christ by the will of God, to the saints which are at Ephesus, and to the faithful in Christ Jesus.
Verse 2: Grace be to you and peace from God our Father and from the Lord Jesus Christ.
According to Greever, after the opening of the Epistle, three sections follow it. The first, which is verses 4-6, emphasises the electing love and grace of God through or in Christ.
From eternity past, God chose to set the divine covenant affection on believers. The description of verse 4 indicates that the election grants holiness and blamelessness in the believers, which will find its completion in the presence of God at the last day. Verses 11-14, as the final section of Ephesians 1, speak about the assurance or inheritance believers enjoy, which the Spirit's presence guarantees. However, in this part, there is a second person pronoun in Greek, ὑμεῖς, which means 'you also'. Why does the author use the word 'also' here (Greever 2014:75)?
Weedman states that although the recipients of the Epistle are the Gentile Christians, the words 'us' or 'our' refer to the recipients who are Jews and Jewish Christians.
When he states God's choice for us, the statement echoes the Jewish background of the author (Weedman 2006:83). Then, the phrase 'you also are sealed with the Holy Spirit' is addressed to the Gentile Christians. It is a rhetorical surprise that the author places at the end of a greeting. The recipients, mainly Gentile Christians, are led to understand that while they live in God's grace because of Jesus Christ, they do so by standing on the shoulders of Israel (Weedman 2006:88). This finding is parallel with the probability that the author uses the metaphor because the recipients have personal experience of, or even certain kinds of sentiment about, circumcision.
In relation to such a background as mentioned here, 'have been sealed' also means freedom from the past identity. Then, if this verse is related to Ephesians 6:8, εἰδότες ὅτι ἕκαστος ἐάν τι ποιήσῃ ἀγαθόν, τοῦτο κομίσεται παρὰ κυρίου εἴτε δοῦλος εἴτε ἐλεύθερος (King James Version: knowing that whatsoever good thing any man doeth, the same shall he receive from the Lord, whether he be bond or free), it might indicate that the metaphor is conveyed also for the doulos or slaves. They also need to be freed (ἐλεύθερος) from the identity that has become bondage and receive a freedom to enter a new life with the Spirit who unites them. The baptism that they have received also points to the same concept. It is their inheritance from God. In the present time the inheritance has not been manifested thoroughly, but in the future its manifestation will be complete.
Another term in Ephesians 1:14, arrabōn, supports this view. Arrabōn can mean earnest money, a deposit on future purchases, or a down payment. Thus, the author of the Epistle refers to 'sealed with the Holy Spirit' as a future completion, a sign of commitment or assurance, and a spiritual inheritance (Woodcock 1996:150). The Jewish Christians have had the inheritance all along as the covenant people, and the Gentile believers have also received the down payment of it along with the original inheritors.
Thus, to conclude, the author distinguishes two groups, the Jewish and Gentile Christians, by clarifying the role of the Jews and the covenant in bringing the Gentiles into a relationship with God. The author does not look down on the Israelites and their law (Weedman 2006:84). Furthermore, the author might relate the metaphor to the status of slaves, as they are part of the church. However, it means that freedom from status for them is not immediately manifested in their world at that time, but in their soul. Therefore, the author also gives guidance about the slaves' conduct in their master's house.
Overall, Murdy states that at the beginning of the Epistle to the Ephesians, the author teaches about forgiveness from sin, redemption in the blood of Christ, and the richness of His gifts. It is part of God's plan that began in the past and will be completed in the future (Murdy 2008:1). The term 'be sealed with the Holy Spirit' indicates that the Holy Spirit who dwells in us is the assurance of God's grace. Thus, being sealed with the Spirit also means being freed from the bondage of sin and various social or historical bondages.
Based on the combination of such interpretation with the Cognitive Metaphor Theory in analysing the contents of the concrete domain, an analysis of the concepts in the abstract domain will point out the embedded message:
• The seal represents the identity of God as the owner of the Christians, whether they are Jewish or Gentile Christians, and whether they are free persons or slaves.
• Being 'sealed with the Spirit' means that the recipients no longer live alone but with the Holy Spirit, which means they will have protection and a guarantee of their special identity.
• Legally, a seal has a spiritual impact for each recipient.
• 'Sealed with the Spirit' means that transformation becomes part of the recipients' life; especially, they need to leave their identities that are or were based on historical or social status.
Discussion
The use of metaphor helps to ease the understanding of the teachings and concepts presented in the Epistle, especially for slaves or people who are new believers. The metaphor provides vivid imagery that helps the recipients to visualise the teachings, especially as it connects with their personal social status and concrete experience, and can even trigger emotional responses. The metaphor also reinforces key themes in the Epistle, such as the unity of the church and the new identity of believers in Christ. For the slaves, 'sealed with the Spirit' means that although in the world they are still slaves, in their spirit they are free persons. For the non-Jewish Christians, although they did not undergo circumcision as their fellow Jewish Christians did, their status and identity as the chosen people is equal. The Jewish Christians view the seal metaphor as the assurance of their continuing role as God's elect. Together, the new believers learn about their roles and place as the united chosen people in their society.
Thus, with high probability, the writer chose such a metaphor because it affects the recipients both in mind and heart, as the metaphor is rooted in their concrete life.
Although the metaphor 'seal' was known by many Gentile and Jewish Christians, the author purposely also uses it to address the slaves, who are more emotionally or deeply touched by the word 'seal' or 'being sealed'.
This study can contribute significant meaning and implications for Indonesian Christians. In the past, European missionaries managed to introduce the Gospel to many tribes. However, they separated each of the newly converted tribes. In a certain case, in a city where a Christian church developed and consisted of more than one tribe, the missionary persuaded the church leaders to separate them based on their tribal backgrounds. Such an approach caused many churches to distance themselves from each other, as their tribes are very different from each other. Understanding and feeling secure based on this metaphor can provide spiritual encouragement, guidance, and a unique perspective for their daily lives and mutual appreciation among fellow Christians in society.
Many large churches in Indonesia minister in urban settings, while others minister in rural contexts. The differences in access to good education, health, and economic life between the contexts are quite evident. Denominational origins, added to tribal-cultural backgrounds and differences in the social status of the members, create many difficulties in uniting them. For example, at the national organisational level, instead of having one Church body, besides the Communion of Churches in Indonesia, which is the largest, there are the Fellowship of Chinese Church, which consists of the Indonesian Chinese Christians, the Fellowship of the Pentecostal Churches in Indonesia, and the Fellowship of the Evangelical Churches and Organisation. All together are members of a minority group.
Being 'sealed with the Spirit' serves as a reminder of their shared and equal identity as children of God, regardless of differences in size, wealth, and heritage. Such a view is especially needed by small, rural, and poor churches to assure them of their worthiness.
The metaphor also underscores the importance of their roles in a pluralistic society with the largest Muslim population in the world. They should engage adherents of other religions peacefully while inspiring them by living as a united minority who feel secure in the Holy Spirit.
Conclusion
Analyses based on the exegesis of Ephesians and Cognitive Metaphor Analysis show how the embedded teaching of the metaphor signifies the worthiness of those who are sealed. Being 'sealed with the Spirit' means that the diversified believers are united in the Spirit to undergo transformation and to continue sharing God's love and grace.
This study supports the hypothesis that the use of the metaphor 'sealed with the Spirit' indicates that the author knew the recipients' experiences were closely related to a 'seal' at a personal level, whether as slaves who at that time were branded or as Christians who were familiar with circumcision. The embedded teaching of the metaphor is, then, that living with the Holy Spirit means the recipients are united and assured of their worthiness, as Christ has liberated them from various divisive social or historical bondages. This study can contribute to Christians who live in the pluralistic Indonesian society today as a minority group and who are still captured in the bondage of their past tribal, denominational, and social status, which prevents them from manifesting an inspiring Christian unity. | 2023-12-04T16:40:37.781Z | 2023-11-30T00:00:00.000 | {
"year": 2023,
"sha1": "5b1e4d9e0d3b41453a4c07769a01c557660d5c96",
"oa_license": "CCBY",
"oa_url": "https://hts.org.za/index.php/hts/article/download/9308/26048",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ed3e2b1e31a9940c7384df369add81fe8c817b6d",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": []
} |
158291244 | pes2o/s2orc | v3-fos-license | THE EFFECTIVENESS OF CRITICAL THINKING IN HIGHER EDUCATION
This paper aims to explore the role of the university as an educational institution in encouraging young people to express critical thinking. Critical thinking should become part of the teaching process so that students engage more in analyzing social problems that exist in society. Often, we see young people in lecture halls who discuss and interpret different social problems on the basis of personal judgment or personal experience rather than facts or arguments. At this point, there is a need for students to develop critical thinking skills as a necessity for understanding and identifying phenomena occurring in social reality. The focus of the paper is on the questions: How much does the European University of Tirana enable the student to think critically? What are some of the basic skills students need to be taught to develop critical thinking, and how much space for expressing critical thinking is part of the curriculum? If critical thinking were to become more integrated in the teaching process, this would help students engage more in understanding knowledge, identifying social problems, and developing problem-solving abilities. The paper focuses on the university as the main institution that should foster the power of critical thinking in students.
Introduction
Facione argues: 'teach people to make good decisions and you equip them to improve their own futures and become contributing members of society, rather than burdens to society. Becoming educated and practicing good judgment does not absolutely guarantee a life of happiness, virtue, or economic success, but it surely offers a better chance at those things. And it is clearly better than enduring the consequences of making bad decisions and better than burdening friends, family, and all the rest of us with the unwanted and avoidable consequences of those poor choices' (Facione, 2015, p. 2).
The study focuses on the effectiveness and the role of students in developing and expressing critical thinking as a necessary aspect of understanding our individual and social experiences. Critical thinking should become more a part of the teaching process, where students can engage in the identification, understanding and analysis of various social problems. But what are some of the basic skills that school should teach students to promote critical thinking? How much space do curricula give students to express critical thinking? What are some of the methods used in teaching that stimulate critical thinking?
This paper focuses on the university as the basic educational institution that should prepare students with the ability to express and apply critical thinking at every stage of their future student, professional, or academic performance.
The paper will first address some of the key definitions of critical thinking, ranging from classical thinkers and philosophers, such as Bacon, Glaser, Paul, Scriven, and Siegel, to contemporary authors such as Crebert, Gardner and Marzano, Facione, and McKeachie, who through their studies have identified new ways of understanding and defining the concept of critical thinking.
Second, we will discuss some of the critical thinking skills that a student must learn in order to manifest them, not only in educational processes but also in everyday life. Some of the teaching strategies that promote critical thinking are case studies, focus groups, various discussions, problem-solving, and reflections on a phenomenon in the form of essays, commentaries, reportages, etc. How does the school promote these strategies or teaching methods among students?
Third, the paper tries to answer the question: why does the development of critical thinking in students play an important role in their personal and social life? Here, the focus is on the impact and efficiency of critical thinking in students and the role it plays in society. Among other things, some teaching strategies and practices will be presented on how the concept of critical thinking can be developed with students in everyday academic life.
Classical definitions of 'Critical Thinking'
Critical thinking has been defined in many different ways. Very broad definitions include 'thinking which has a purpose' or 'reflective judgement'. Basically, the term 'critical' is related to the Greek word 'criterion', a standard by which to judge. The term 'critical' is essentially related to thinking, judgment and appreciation as forms of thinking. The main object of critical thinking is determining the quality and value of your beliefs. Thinking critically has nothing to do with what you think, but with how you think. 'Critical thinking does not focus on the causes of your conviction, but on whether this conviction is worth it. A conviction is worthy and should be kept if we have solid reasons to accept it. Critical thinking offers a whole set of compelling criteria in the techniques, attitudes, and principles that we use to evaluate beliefs and determine whether they are based on sustainable reasons' (Vaughn & MacDonald, 2010, p. 3). 'Francis Bacon (1561-1626), founder of modern science, articulated the basic principles and methods of science and propagated their use in the prudent acquisition of accurate knowledge. He also warned of the risk of common mistakes in thinking, which could ruin all the efforts of science and lead to deformed perceptions and heavy mistakes' (Vaughn & MacDonald, 2010, p. 35). According to Bacon, 'scientific thinking' is based on facts, or what we now call critical thinking, which is a very important tool in seeking truth. 'He called these mistakes the "idols of the mind", because according to him, people not only make mistakes, but also fetishise them, just as we fetishise false gods' (Ibid.).
Edward Glaser in 1941 argued that critical thinking is a human ability to maintain 'a strong persistence in the search for data that support any beliefs or assumptions we have' (Fisher, 2001, p. 3). Richard Paul in 1993 gives an interpretation different from Glaser's. According to him, critical thinking is a way of thinking about 'any subject, context, or problem, in which the critical thinker proves his or her qualities of thinking, masterfully builds the natural structures of thinking, and sets intellectual standards on them' (Fisher, 2001, p. 4).
Michael Scriven argued that 'critical thinking is an academic competency akin to reading and writing. He defines it thus: critical thinking is skilled and active interpretation and evaluation of observations and communications, information and argumentation. He defines critical thinking as a "skilled" activity for reasons similar to those mentioned above. To be critical, thinking has to meet certain standards - of clarity, relevance, reasonableness' (Ibid.). Siegel defined critical thinking as an 'education in knowledge of rationality'; McPeck in 1981 described it with the term 'skepticism or reflective doubt'; Barnett in 1997 used the term 'critical self-reflection'; while Toulmin, Rieke and Janik in 1984 linked critical thinking to reasoning, later using it as a 'central activity in introducing reasons in support of a particular issue or argument' (Vyncke, 2012, pp. 9-10). According to Ennis in 1985, critical thinking is 'reflective and reasonable thinking that is focused on deciding what to believe or do' (Lai, 2011, p. 6).
Contemporary definitions of 'Critical Thinking'
Gardner and Marzano are two researchers who, in their studies, have observed the role played by the creation of an interactive lesson in successful student learning. According to them, 'studies show that learning in an active, organized and well-thought-out way is often far more complete and fruitful. Learning fruitfully means that you can think about what you learn, apply it to life situations, use it as a basis for further learning, and continue to learn independently' (Temple, Crawford, Saul, Mathews & Makinster, 2006, p. 1).
Other authors have seen critical thinking as a cognitive ability related to rational judgment (Vyncke, 2012). Critical thinking is reasoning, reflection, responsiveness, and the ability to focus thinking on what to believe or do. 'We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based' (Facione, 2015). 'The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fair-minded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit' (Ibid).
Tara DeLecce, a psychologist, argued in an online lecture on critical thinking that 'critical thinking can be divided into the following three core skills: Curiosity is the desire to learn more information and seek evidence, as well as being open to new ideas. Skepticism involves having a healthy questioning attitude about new information that you are exposed to and not blindly believing everything everyone tells you.
Finally, humility is the ability to admit that your opinions and ideas are wrong when faced with new convincing evidence that states otherwise' (DeLecce, 2015).
Thinking is defined in different ways; some authors define it 'as a psychic process, as a form of general reflection of reality in human consciousness through notions, judgments and reasoning'. These authors call this 'conscious thinking' because it is affected by the pedagogical work of educators (Jashari & Ballhysa, 2005, p. 23). Kec and Ara (2005) provide another kind of definition of critical thinking: 'Critical thinking is a process that brings a result. It is part of the thinking process, by which every person thinks critically, as a natural path of interaction with ideas and information; it is an active process that develops with a certain purpose or happens by chance, which permanently makes students control the information and meet the challenges; include, adapt or disseminate information. Critical thinking occurs when students begin to reflect on what they read, start asking questions, start linking knowledge that they already have with new knowledge they learn, select information, analyze behavior, actions and situations, and argue and maintain the attitudes assigned to them' (Gjokutaj, Shahini, Markja, Zisi & Muça, p. 38).
The concept of critical thinking in the age of modernity and in the history of philosophical thought of the 15th to 16th centuries is mostly related to the birth of the philosophy of science from reflection on essence. Descartes (1981) is known for the formula 'I doubt, I think, therefore I am'. Hence, he integrated into the history of philosophy the notion of 'methodical doubt'. According to him, doubt allows him to find the required certainty. Descartes required the analysis of all elements of thought down to the simplest ones, and then the ordering of these simple elements in an increasingly complex order, so that complex terms can be clearly understood (Hersh, 1981, p. 95).
The features of a 'critical thinker' according to Crebert (2011) are listed as follows:
• Inquisitiveness about a wide range of issues;
• Desire to become and remain well-informed;
• Alertness to opportunities to use critical thinking;
• Trust in the processes of reasoned inquiry;
• Self-confidence in one's own abilities to reason;
• Open-mindedness towards divergent world views;
• Flexibility in considering alternatives and opinions;
• Understanding of the opinions of other people;
• Fair-mindedness in appraising reasoning;
• Honesty in facing one's own biases, prejudices, stereotypes, etc.;
• Discretion in suspending, making or altering judgments.
The basic skills of a critical thinker
Almost everyone who has worked in the critical thinking tradition has produced a list of thinking skills which they see as basic to critical thinking. For example, Edward Glaser (1941) listed the abilities:
• to recognize problems;
• to find workable means for meeting those problems;
• to gather and marshal pertinent information;
• to recognize unstated assumptions and values;
• to comprehend and use language with accuracy, clarity, and discrimination;
• to interpret data;
• to appraise evidence and evaluate statements;
• to recognise the existence of logical relationships between propositions;
• to draw warranted conclusions and generalisations (Fisher, 2001, p. 6).
Willison & O'Reagan (2006) list these abilities:
• determine the need for knowledge;
• find and generate the information;
• critically evaluate the information;
• organise the information;
• synthesise, analyse and apply the new knowledge; and
• communicate the knowledge (Crebert et al., 2011, p. 13).
Teaching methods that promote the development of critical thinking
There are several methods that encourage critical thinking among students. One of the first activities, known as the three phases or PNP model (Prediction, Knowledge Building, and Reinforcement), was created by Joseph Vaughn and Thomas Estes in 1986; this framework later became popular in pedagogical practice as the ERR structure (Evocation, Realization of Meaning and Reflection) (Temple, Crawford, Saul, Mathews & Makinster, 2006, p. 2). According to Crebert (2011), using concept maps in planning a curriculum or instruction on a specific topic helps to make the instruction conceptually transparent to students. Many students have difficulty identifying and constructing powerful concepts and propositional frameworks, leading them to see science learning as a blur of myriad facts or equations to be memorized. If concept maps are used in planning instruction and students are required to construct concept maps as they are learning, previously unsuccessful students can become successful in making sense of science and acquiring a feeling of control over the subject matter. The benefits of concept maps are that they enable students to:
• establish connections between ideas they already have;
• connect new ideas to existing knowledge; and
• organise ideas in a logical, but not rigid, structure that can be updated (Crebert et al., 2011, pp. 10-11).
Reading
Three important purposes of reading critically are:
♦ to provide evidence to back up or challenge a point of view;
♦ to evaluate the validity and importance of a text/position;
♦ to develop reflective thought and a tolerance for ambiguity.
Strategies for reading critically
Ask questions about, for example:
• Your purpose: Why?
• The context of the text: Why was it written? Where? When? By whom? How relevant is it?
• The structure of the text: Is there a clear argument? Do the parts fit together logically?
• The arguments: Are they fair? Do they leave out the perspectives of certain groups?
• The evidence used: Is evidence given to support the point of view from an authority in this field? Is the evidence evaluated from different perspectives?
• The language used: Is the language coloured to present some things as more positive than others? Are claims attributed clearly?
If we want to generalize, critical thinking has three parts:
1. First, critical thinking involves asking questions.
2. Second, critical thinking involves trying to answer those questions by reasoning them out.
3. Third, critical thinking involves believing the results of our reasoning.
Thinking critically about solving a problem, on the other hand, begins with asking questions about the problem and about ways to address it:
1. What is the purpose behind the problem?
2. What is a good way to begin?
3. Do I have all the information I need to start solving the problem?
4. What are some alternative ways of solving the problem assigned?
Impact of expressing critical thinking in society
According to the authors Lau and Chan, qualitative knowledge is the student's ability to express critical and creative thinking, intellectual flexibility, and the competence to analyse information and integrate various sources of knowledge in problem solving. According to them, critical thinking is the basis of 'science' and 'democracy'. Science requires the use of 'critical reasoning' in different experiments and theoretical confirmations, and the functioning of liberal democracies requires citizens who think critically about social issues in order to inform their judgments about proper governance and overcome the various prejudices and disagreements that occur (Lau & Chan, 2016). So, a major part of learning how to think critically is learning to ask the questions - to pose the problems - yourself. That means noticing that there are questions that need to be addressed; admitting that there are problems. Often, this is the hardest part of critical thinking. This is true not just in school, but in daily life as well. People often do not ask themselves, 'How can I best get along with my parents (my partner, my co-workers, my friends) in this situation?' Instead, they continue relating to them in habitual and unexamined ways. If your goal is to improve some aspect of your daily life, begin by asking yourself some questions: What are some concrete things I can do to get better grades? To meet new people? To read more effectively? To make the subject matter of this course meaningful in my life? To be effective, you need to really ask these questions. It is not enough just to say the words ('What is critical thinking', 2017, pp. 5-6).
Data and Methods
The purpose of this study is to explore the role of the European University of Tirana, as an educational institution, in promoting the ability of young people to express critical thinking. Critical thinking should become part of the teaching process so that students engage more in analysing social problems that exist in society. Often, we see young people in lecture halls who discuss and interpret different social problems on the basis of personal judgment or personal experience rather than facts or arguments. At this point, the need arises for students to develop the ability to think critically as a means of understanding and identifying phenomena occurring in social reality.
The objectives of this study are to:
• identify key definitions of the critical thinking concept;
• analyse some of the basic skills of being a critical thinker;
• introduce key methods that promote critical thinking in education;
• analyse the importance of student learning with critical thinking in relation to life and society.
The research questions follow from the purpose and objectives of the study.
Research question 1: How much does the European University of Tirana enable the student to think critically?
Research question 2: What are the skills and methods that pedagogues use in the classroom to encourage critical thinking in students?
This work tests two hypotheses:
Hypothesis 1: The concept of thinking critically is not sufficiently cultivated at UET students.
Hypothesis 2: If critical thinking were to become more integrated in the teaching process, this would help students engage more in understanding knowledge, identifying social problems, and problem-solving skills.
In this study, a random sample was used. The sample consisted of n = 200 students.
Questionnaires were distributed to Bachelor students at the European University of Tirana. The level at which professors clarify a question, problem, or particular subject during teaching was rated as follows: 32% usually or very often, 26.5% several times but not often, 19% rarely, 18% neutral, and 4.5% never.
The level of understanding of the key concepts of a topic in an organized way was rated: 30% several times but not often, 26% usually or very often, 20.5% rarely, 19% neutral, and 4.5% never.
The level of the students' ability to draw conclusions based on data or information collected was, according to the findings: 34.5% rarely, 25% neutral, 18.5% usually or very often, 18% a few times but not often, and 4% never.
The level of student training to distinguish between the assumptions, conclusions and consequences of a particular phenomenon was: 27% several times but not often, 26% neutral, 21.5% usually or very often, 20% rarely, and 10.5% never.
The level of students' ability to think logically was, according to the findings: 31% several times but not often, 26.5% usually or very often, 20.5% neutral, 18.5% rarely, and 3.5% never.
The level of student training to maintain a personal attitude towards the arguments that arise was: 36% several times but not often, 27% neutral, 16.5% rarely, 15.5% usually or very often, and 5% never.
Conclusions
This study was designed to investigate students' perceptions of how the university, as an educational institution, encourages them to express critical thinking. The study shows, however, that critical thinking faces difficulties in its implementation in teaching and learning.
The difficulties in the implementation of critical thinking while teaching UET students are:
1. Regarding the explanation of the critical thinking concept to students by their professors, a majority of 62% think that this occurs several times but not often, 25% of students state that the concept is rarely explained, and 13% are neutral. This indicates that the concept of critical thinking should be given clear definitions and students should be taught how to use it.
2. Regarding the definition of critical thinking during course teaching, 58% of students said that the definition was explained sometimes but not often, which can greatly affect the non-recognition of this concept. Only 22% said that this concept was developed usually or very often during teaching.
3. Increasing the level of organized understanding is key to developing structured critical abilities, such as: the student's ability to draw conclusions based on collected data or information; the student's ability to distinguish the conclusions and consequences of a particular phenomenon; students' ability to think logically; and the student's ability to maintain a personal attitude towards the arguments that arise.
4. Encouragement of critical thinking in the learning process, according to the findings, occurs several times but not often for 59% of students, while 40% are neutral. The data show that this encouragement indicates a positive path towards strengthening critical thinking in students, but more work is needed to reduce the level of neutrality.
Critical reflection helps us to stay active in our individual and social experiences. It enables us to reconsider our previous judgments and assessments and to complement them on the basis of what new facts suggest. 'Critical reflection makes each of us, at school or after its end, consistently remain an active student. By critically reflecting upon institutional organization, economic and cultural conditions, and on the problems of society, we reconsider our previous trials and assessments, adapt them, complement them, and enrich them on the basis of the new facts we discover through our careful observations and through critical reflection on them' (Tarifa, 2014, p. 202).
[Table fragment on presentation skills: the ability to define the topic, terms and premise; stating what is to be proved, providing supporting evidence and examples; anticipating what the audience already knows/expects to hear on the topic; tailoring the content, pace and tone of a presentation to the audience; presenting information in an engaging and entertaining style.]
Critical thinking should receive more focus in university curricula. Yanklowitz (2013) said that the goal of an argument curriculum is to enhance the development of responsible citizens, and that the pedagogical methodology consists of cultivating argument skills, epistemic development, and moral development. Calfee and Chambliss (1987) also argue that 'students are unlikely to develop critical thinking skills naturally when their class reading assignments consist only of narrative and explanatory texts, as opposed to argumentative texts'. If the transmission of the knowledge of a lecture, text, or information of any kind were to take place through a dialogue-questioning process, students would be more involved in the learning process and would begin to develop their own arguments on the themes discussed by the lecturer (Yanklowitz, 2013). | 2019-05-20T13:05:12.230Z | 2018-06-15T00:00:00.000 | {
"year": 2018,
"sha1": "35478eb0336a2a7dd30bbeef20cf087c90bb8ea6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24193/ojmne.2018.26.13",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5bde7838da8a4c745c2cda1a66b2b5f5a522ab65",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
54685964 | pes2o/s2orc | v3-fos-license | Simulating maize yield in sub tropical conditions of southern Brazil using Glam model
The objective of this work was to evaluate the feasibility of simulating maize yield in a sub‐tropical region of southern Brazil using the general large area model (Glam). A 16‐year time series of daily weather data were used. The model was adjusted and tested as an alternative for simulating maize yield at small and large spatial scales. Simulated and observed grain yields were highly correlated (r above 0.8; p<0.01) at large scales (greater than 100,000 km2), with variable and mostly lower correlations (r from 0.65 to 0.87; p<0.1) at small spatial scales (lower than 10,000 km2). Large area models can contribute to monitoring or forecasting regional patterns of variability in maize production in the region, providing a basis for agricultural decision making, and Glam‐Maize is one of the alternatives.
Introduction
The sub-tropical region of South America is responsible for the majority of the soybean, wheat, maize, rice, coffee, and sugarcane production in Latin America. With the exception of rice, these crops are generally grown under rainfed conditions, and productivity usually oscillates strongly due to irregular rainfall distribution. Therefore, simulation can be a useful tool for predicting crop productivity a season or more ahead (Coelho & Costa, 2010) and for investigating options for crop management (Challinor et al., 2005).
Many simulation studies of grain crop yield have been carried out in different sub-tropical regions of South America. The crop models that integrate DSSAT (including Ceres for maize) have been largely tested for this purpose (Cardoso et al., 2004; Travasso et al., 2006; Mercau et al., 2007; Tojo Soler et al., 2007). The Wang & Engel (1998) model was used by Streck et al. (2012) to simulate the developmental cycle of maize crops. In southern Brazil, the penalization model of Jensen (1968), based on empirical relationships between grain production and water conditions, was also tested for maize crops (Matzenauer et al., 1995; Mello et al., 2003). The Doorenbos & Kassam (1979) crop yield models have been used to estimate potential and actual yields, as well as to test the sensitivity of maize genotypes to water deficits (Andriolli & Sentelhas, 2009). Several observational studies on soil-plant-atmosphere relations have also been carried out in recent decades in southern Brazil (Müller et al., 2005; Bergamaschi et al., 2006). These modeling and observational studies have focused on spatial scales ranging from the field (smaller than 10,000 km2) to larger areas (greater than 100,000 km2).
The general large area model (Glam) is a large-area, process-based crop model used for simulating the yield of annual crops (Challinor et al., 2004). It has a low input data requirement for large-scale applications and can directly accept large-scale climate information from global and regional climate models. The model has previously been used to simulate groundnut (Challinor et al., 2004; Osborne et al., 2007) and wheat production (Sanai et al., 2007). The main hypothesis of this study is that the Glam model is robust enough to simulate grain yields over large areas, for different crops and climate conditions.
The objective of this work was to evaluate the feasibility of simulating maize yields in a sub-tropical region of Southern Brazil using Glam with a 16-year time series of daily weather data.
Materials and Methods
The study was carried out in the south of Brazil, in the state of Rio Grande do Sul (27.2° to 29.8°S and 51.2° to 56.0°W, at an altitude range of 0-1,380 m) (Figure 1). The state is one of the main maize producers in the country. During the study period (1998-2005), it was ranked as the third main producer, but dropped to sixth position in 2004-2006 (Companhia Nacional de Abastecimento, 2006). Maize (Zea mays L.) yield was simulated at three spatial scales: the main producer region (the northern-northwestern zone of the state), which accounted for 83% of the state maize production during the studied period (Instituto Brasileiro de Geografia e Estatística, 2006); 11 micro-regions within the producer region; and one municipality within each micro-region (Figure 1; Table 1).
Maize yield data from 1990 to 2005 (16 years) were collected from the Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística, IBGE). Most maize crops in the state of Rio Grande do Sul are sown from September to the beginning of November and harvested from January to the beginning of March. The average sowing date for each municipality, obtained from the official extension service in the state (Emater, RS), represents the average date by which at least 50% of the maize crops were sown between 2002 and 2005. Once the sowing date was known, the mean dates of tasseling, milky maturity and physiological maturity were obtained from a set of experimental data (Matzenauer et al., 1995; Müller et al., 2005; Bergamaschi et al., 2006) for short-season maize hybrids (Table 1).
Daily rainfall, global solar radiation, relative humidity, and minimum and maximum air temperatures were taken in each of the 11 municipalities from 1990 to 2005. The same database was used for each municipality and its respective micro-region (Figure 1). Weather data for the main producer region were the mean values of the 11 weather stations. Weather stations belonged to the following official networks: Fundação Estadual de Pesquisa Agropecuária, Fepagro (in the municipalities of Cruz Alta, Erechim, Ijuí, Júlio de Castilhos, Santa Rosa, São Borja, Taquari, and Veranópolis) and Instituto Nacional de Meteorologia, Inmet (in the municipalities of Iraí, Passo Fundo, and São Luiz Gonzaga). Water retention data were taken from soil surveys (Dedececk, 197; Beltrame et al., 1979) of spatial units normally larger than one municipality. The prevailing soil type was used as the representative soil for each micro-region. Average values of the 11 soil coefficients were calculated in order to provide a mean soil water condition for the whole producer region for saturation, field capacity, and permanent wilting point. The Glam processes and methodology were kept similar to the original framework, described by Challinor et al. (2004). A field experimental dataset for maize was used to define new parameter values for Glam-Maize in the region (Table 2). A great part of the data was obtained from field experiments carried out in the state of Rio Grande do Sul in the 1980s and 1990s. Other parameters were obtained from the literature, with a preference for those most suitable to the regional cropping system (Table 2). [Table notes: (1) dates of at least 50% of the maize crops, according to the official extension service in the state (Emater, RS); (2) mean dates of tasseling, milky and physiological maturity for short-season maize hybrids (Matzenauer et al., 1995; Müller et al., 2005; Bergamaschi et al., 2006); (1) data in parentheses represent the range among authors, and the parameter used in modeling was 32°C; (2) field capacity at a matrix potential of -0.006 MPa (72 hours of drainage after saturation), with data in parentheses representing the variability among different soil types in the region under study.] All parameter values were kept constant across the study region, except for sowing dates, the yield gap parameter, and soil water storage capacities. Sowing dates are given in Table 1. The yield gap parameter (YGP) accounts for the impacts on yield of factors other than weather (i.e., pests, diseases, and management factors, which reduce yield by an amount referred to as the yield gap).
The YGP was calculated by minimizing the root mean square error between the simulated and observed datasets, as proposed by Challinor et al. (2004).
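As a rough illustration of this calibration step, the sketch below (Python, not taken from the Glam source code) scans a grid of candidate YGP values and keeps the one minimising the RMSE between YGP-scaled simulated yields and observations. The multiplicative use of YGP, the grid resolution, and the function name are assumptions for the example only.

```python
import numpy as np

def calibrate_ygp(simulated_potential, observed, grid=None):
    """Return the yield gap parameter (YGP) minimising the root mean
    square error between YGP-scaled simulated and observed yields."""
    sim = np.asarray(simulated_potential, dtype=float)
    obs = np.asarray(observed, dtype=float)
    if grid is None:
        grid = np.linspace(0.05, 1.0, 20)  # candidate YGP values in (0, 1]
    rmses = [np.sqrt(np.mean((g * sim - obs) ** 2)) for g in grid]
    return grid[int(np.argmin(rmses))]

# Example with illustrative yields (kg ha-1):
# calibrate_ygp([4800, 5200, 3900], [2700, 2900, 1250])
```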
The maize crop cycle was divided into four stages (Table 2): stage 0, the juvenile stage, from sowing to four fully expanded leaves; stage 1, rapid vegetative growth, from the end of stage 0 to tasseling; stage 2, flowering, from tasseling to the beginning of grain-filling; and stage 3, grain-filling, from the end of stage 2 to physiological maturity. Plant emergence was assumed to occur eight days after sowing. The duration of all other stages was determined using thermal time (degree-days), calculated from the mean air temperature and the cardinal temperatures (base, maximum, and optimum temperatures, Table 2). The thermal time chosen for each crop stage was in accordance with the experimental results for short-season maize hybrids in the state. Those hybrids were assumed to be photoperiod insensitive in the region.
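The thermal-time calculation can be sketched as below. The piecewise-linear response between cardinal temperatures is a common formulation, and the numeric defaults are placeholders rather than the values from Table 2.

```python
def daily_thermal_time(t_mean, t_base=8.0, t_opt=30.0, t_max=40.0):
    """Degree-days accrued in one day, rising linearly from the base to
    the optimum temperature and falling linearly to zero at the maximum."""
    if t_mean <= t_base or t_mean >= t_max:
        return 0.0
    if t_mean <= t_opt:
        return t_mean - t_base
    return (t_opt - t_base) * (t_max - t_mean) / (t_max - t_opt)

def days_to_reach(mean_temps, target_degree_days):
    """Number of days until the accumulated thermal time hits the stage
    target, or None if the temperature series ends first."""
    total = 0.0
    for day, t in enumerate(mean_temps, start=1):
        total += daily_thermal_time(t)
        if total >= target_degree_days:
            return day
    return None
```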
The leaf area index growth rate (dL/dt) was based on a segmented linear model and varied for each stage (Table 2). The rate dL/dt increases from plant emergence to tasseling (stages 0 and 1) and decreases during stages 2 and 3. These values were also taken from field experiments with short-season maize hybrids, according to Müller et al. (2005).
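A segmented-linear LAI model of this kind can be written as a constant dL/dt per stage, as in the minimal sketch below; the example stage durations and rates are placeholders for the Table 2 values, not measured data.

```python
def lai_trajectory(stage_days, stage_rates, lai0=0.0):
    """Daily leaf area index under a segmented-linear model: dL/dt is
    positive through tasseling (stages 0-1) and negative afterwards."""
    lai, series = lai0, []
    for days, rate in zip(stage_days, stage_rates):
        for _ in range(days):
            lai = max(0.0, lai + rate)  # LAI cannot fall below zero
            series.append(lai)
    return series

# e.g. lai_trajectory(stage_days=[20, 55, 10, 45],
#                     stage_rates=[0.02, 0.09, -0.01, -0.05])
```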
Grain yield was calculated in Glam from the above-ground biomass and the harvest index (HI). The latter was simulated using a mean rate of change in the harvest index (dHI/dt) during the reproductive period, from tasseling to physiological maturity (stages 2 and 3), based on a six-year series of field experiments. Crop transpiration efficiency (TE) was calculated from the mean value of maximum evapotranspiration (ETm), measured in a weighing lysimeter over five crop cycles (Bergamaschi et al., 2001; Radin et al., 2003). The partitioning of ETm into transpiration and soil evaporation was obtained around the maximum leaf area index by measuring plant transpiration through sap flow using a heat pulse tracer (Santos et al., 2000).
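Under these definitions, yield can be sketched as biomass times a harvest index that accumulates at a constant daily rate through the reproductive period. The cap on HI and all numeric values in the example are illustrative assumptions.

```python
def grain_yield(biomass, reproductive_days, dhi_dt, hi_max=0.5):
    """Yield = above-ground biomass x harvest index, with HI growing at
    a constant daily rate (dHI/dt) from tasseling to maturity."""
    hi = min(hi_max, dhi_dt * reproductive_days)
    return biomass * hi

# e.g. grain_yield(biomass=9000.0, reproductive_days=55, dhi_dt=0.008)
```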
Regression analysis of simulated yields against observations was performed after any technology trend had been removed, in order to test Glam-Maize performance in simulating grain yield at the scale of municipalities, micro-regions, and the main producer region.
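One way to remove a technology trend before the regression is to fit a linear trend over the years and keep the residuals as the weather-driven signal, as sketched below. The linear form is an assumption, since the paper does not state the functional form of the trend used.

```python
import numpy as np

def detrend_then_correlate(years, observed, simulated):
    """Subtract a linear trend from observed yields, then correlate the
    residual variability with the simulated series."""
    years = np.asarray(years, dtype=float)
    obs = np.asarray(observed, dtype=float)
    slope, intercept = np.polyfit(years, obs, 1)
    residual = obs - (slope * years + intercept)
    r = np.corrcoef(residual, np.asarray(simulated, dtype=float))[0, 1]
    return residual, r
```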
Results and Discussion
Variability indices of both grain yield and rainfall were calculated by dividing their anomalies (with respect to the historical mean) by the respective standard deviations (Figure 2). This standardized grain yield anomaly tended to be larger for municipalities and micro-regions than for the main producer region (Table 2). Rainfall during the crop cycle explained (R2) more than one-half of the detrended yield variance, whereas rainfall during the 0-30 days after tasseling explained around 70% of the yield variance.
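The standardized anomalies in Figure 2 follow directly from this definition; a minimal sketch is shown below. The use of the sample standard deviation (ddof=1) is an assumption, as the paper does not specify the estimator.

```python
import numpy as np

def standardized_anomaly(series):
    """(value - historical mean) / standard deviation of the series."""
    x = np.asarray(series, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# A value of -1.25, as for rainfall 0-30 days after tasseling in
# 1990/1991, is 1.25 standard deviations below the historical mean.
```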
The effect of severe drought on grain yield is clearly shown in the 2004/2005 crop season, which was very dry; consequently, the harvest was less than half of that of the previous crop season (1,250 against 2,700 kg ha-1 in 2003/2004). Soil water availability is particularly important for maize crops during the tasseling and silking stages, when the leaf area index and the atmospheric evaporative demand are high (Bergamaschi et al., 2001, 2006; Radin et al., 2003). Tasseling occurred between December and January (Table 1). Therefore, short dry spells or isolated rainfall events during this period can cause high yield variability in maize crops in the region (Bergamaschi et al., 2006; Andriolli & Sentelhas, 2009). For example, the 1990/1991 season had a small rainfall anomaly for the entire crop cycle (i.e., a standardized anomaly of -0.5, Figure 2), but a dry spell in the critical period of maize crops caused a strong negative effect in that season (i.e., a standardized anomaly of -1.25 for 0-30 days from tasseling). Since this short dry spell occurred in the critical period (tasseling to silking), a strong grain yield reduction was observed. Some simulated and observed phenological events, as well as the leaf area index, are shown in Figure 3. The simulated data correspond to the municipality of Santa Rosa, whereas the observed data were measured at the Experimental Station of the Universidade Federal do Rio Grande do Sul (UFRGS), in the municipality of Eldorado do Sul, just outside the main producer region, close to the Taquari micro-region (Figure 1). The two places have similar thermal conditions and a difference of about two degrees in latitude. However, limitations caused by water deficits in maize crops are greater in Eldorado do Sul than in Santa Rosa.
Glam-Maize simulations of the timing of the developmental stages were in good agreement with the observed data. Glam-Maize estimated the maximum leaf area index at 74 days after sowing, two days earlier than observed. The occurrence of milky maturity (102 days) and physiological maturity (132 days) was predicted five days and one day later, respectively, than observed. The agreement in timing (days after sowing) between estimated and observed phenological stages can be explained by the high dependence of maize phenology on degree-day accumulation, as observed by Streck et al. (2008). However, leaf area expansion showed a consistent lag between the simulated and observed data, which may be attributed to other factors. Since the observed data were obtained from irrigated experiments, it is possible that water limitations in Santa Rosa (under rainfed conditions) caused some delay in leaf area expansion. Because of the importance of leaf area for solar radiation interception in crop modeling and the complexity of genotype x environment interactions on maize development in subtropical conditions (Streck et al., 2012), this aspect must be considered with extreme caution when adjusting and testing maize crop models.
Glam-Maize simulations of soil evaporation (E), crop evapotranspiration (ET) and transpiration (T) for the entire main producer region are presented in Figure 4. The total observed ET of irrigated maize, measured with a weighing lysimeter during five years, is also shown (Bergamaschi et al., 2001; Radin et al., 2003). Estimated ET was in good agreement with the experimental results during the three rainiest seasons (1994, 1995, and 1998). In contrast, the observed crop ET was higher than the simulated values in the two years with substantial dry periods (1996 and 1997). This difference between observed and simulated ET can mainly be associated with the irrigation supply, which was not considered in the simulation. Simulated and observed maize yield time-series for the municipalities of Santa Rosa and Passo Fundo and their respective micro-regions are shown in Figure 5. These two micro-regions showed different correlations between observed yield and rainfall (Figure 3). Time-series of observed and simulated crop yield for the main maize producer region are presented in Figure 6. Glam-Maize simulated well the inter-annual variability in maize yield for these time-series.
The comparison between observed and simulated yields is subject to constraints from both data and modeling.First, because a discrepancy (around 10%) between the two observed data sources (Conab and IBGE) for the same period (for the entire state) indicates a possible imprecision in the yield estimation methods of those official Brazilian agencies (Companhia Nacional de Abastecimento, 2006;Instituto Brasileiro de Geografia e Estatística, 2006).It is reasonable to expect that Glam-Maize performance could be better analyzed with more accurate yield observations.Second, the wide range of sowing dates adopted by farmers (Table 2) is problematic for large-area modeling.In the warmest zones, high thermal availability may sometimes permit two maize cycles in a single cropping season using very-short season hybrids.Similarly, maize is sometimes sown on small farms just after the harvest of other spring crops, such as dry beans and tobacco, that delay the sowing of maize and reduce its potentiality.However, neither of these processes has been simulated in the present study.Third, regional variability in soil and topography may influence the spatial variability of hydrological conditions, and some regions have higher spatial variability in soils than others.For instance, the Santa Rosa micro-region (site 5, Figure 1) has predominantly small farms and a mostly irregular pattern of soils and topography.In contrast, the Passo Fundo micro-region (site 6, Figure 1), located in the northern plateau of the state, consists of medium to large farms with more uniform soils, topography, and technological levels.In this context, improvements in simulating water dynamics in the soil-plant-atmosphere system can be performed by matching the soil-water storage to the observed conditions for each particular prevailing soil type.
The comparison between observed and simulated yields is subject to constraints from both the data and the modeling. First, a discrepancy (around 10%) between the two observed data sources (Conab and IBGE) for the same period (for the entire state) indicates possible imprecision in the yield estimation methods of these official Brazilian agencies (Companhia Nacional de Abastecimento, 2006; Instituto Brasileiro de Geografia e Estatística, 2006). It is reasonable to expect that Glam-Maize performance could be better assessed with more accurate yield observations. Second, the wide range of sowing dates adopted by farmers (Table 2) is problematic for large-area modeling. In the warmest zones, high thermal availability may sometimes permit two maize cycles in a single cropping season using very-short-season hybrids. Similarly, maize is sometimes sown on small farms just after the harvest of other spring crops, such as dry beans and tobacco, which delays the sowing of maize and reduces its potential. However, neither of these processes was simulated in the present study. Third, regional variability in soil and topography may influence the spatial variability of hydrological conditions, and some regions have higher spatial variability in soils than others. For instance, the Santa Rosa micro-region (site 5, Figure 1) has predominantly small farms and a mostly irregular pattern of soils and topography. In contrast, the Passo Fundo micro-region (site 6, Figure 1), located on the northern plateau of the state, consists of medium to large farms with more uniform soils, topography, and technological levels. In this context, improvements in simulating water dynamics in the soil-plant-atmosphere system can be achieved by matching the soil-water storage to the observed conditions for each prevailing soil type.
Modeling maize yields in southern Brazil, even at large spatial scales, presents a great challenge, but can be very important for several applications, such as evaluating the impacts of climatic change on crops or estimating grain yield using seasonal climatic information as input. In this sense, Coelho & Costa (2010) observed a good performance of the Glam-Maize model (as parametrized in the present study) in simulating maize yields in southern Brazil using seasonal climate forecasts. This study also shows that a process-based model can simulate maize crop yield variability. A well-designed model may produce accurate results when fitted and parametrized to regional cropping systems. [Table note: (1) the correlation coefficients were significant at 5% probability.] For this purpose, a well-established experimental basis seems to be indispensable, considering the complexity of the interactions among environmental factors affecting maize crops, particularly under sub-tropical conditions (Streck et al., 2012), and the specificity of cropping systems. Further development of the Glam-Maize model may improve its performance by introducing new adjustments that take into account short-term water stresses around flowering time, when the number of grains is established (e.g., as in the Ceres-Maize model).
Conclusions
1. The general large area model for annual crops (Glam) captures the high inter-annual variability of maize yield under the sub-tropical conditions of the state of Rio Grande do Sul, Brazil, when parametrized for regional conditions and cropping systems.
2. Simulations of grain yields by the Glam-Maize model are highly correlated with observed yields at large spatial scales, with variable correlations at smaller spatial scales.
3. Large area crop simulations by processed-based models can contribute to monitoring or forecasting regional patterns of variability in maize yield in sub-tropical conditions, providing a basis for agricultural decision making, and the Glam model is one of the alternatives.
Figure 1. Map of the state of Rio Grande do Sul, Brazil, showing the main maize producer region, the 11 municipalities, and their respective micro-regions.
Figure 2. Variability of maize grain yield and rainfall at three spatial scales: municipality (county), micro-region, and main producer region, in the state of Rio Grande do Sul, Brazil. Standardized anomalies were calculated by dividing each value by the standard deviation in the time-series.
Figure 3. Maize leaf area index averaged over six cropping seasons (1993/1994 to 1998/1999) from experimental data in Eldorado do Sul (solid lines) and simulated by the Glam model for the municipality of Santa Rosa (dashed lines), in the state of Rio Grande do Sul, Brazil. Vertical lines show the observed (solid lines) and simulated (dashed lines) dates of maximum leaf area index, milk maturity, and physiological maturity.
Figure 4. Actual evapotranspiration (ET), plant transpiration (T), and evaporation from the soil surface (E) estimated by the Glam model for maize crops over the main producer region of the state of Rio Grande do Sul, Brazil. Solid triangles are the maximum ET of maize measured in a weighing lysimeter in Eldorado do Sul, in the state of Rio Grande do Sul, Brazil (Bergamaschi et al., 2001; Radin et al., 2003).
Figure 5. Observed (■) and simulated (Δ) maize grain yields for the municipalities of Santa Rosa (A) and Passo Fundo (C) and their respective micro-regions (B, D), in the state of Rio Grande do Sul, Brazil, from 1990 to 2005.
Figure 6. Relationships between the observed (■) and simulated (Δ) maize grain yields for the main maize producer region of the state of Rio Grande do Sul, Brazil, from 1990 to 2005.
Table 1. Area of maize cultivation, average dates of sowing and phenological stages for each municipality, micro-region, and main producer region.
Table 2. Parameters and specific values used in the Glam model for simulating maize growth and yield in sub-tropical conditions of southern Brazil.
Table 3. Correlation coefficients between rainfall and maize grain yields at three spatial scales: municipality, micro-region, and main producer region, in the state of Rio Grande do Sul, Brazil.
Table 4. Correlation coefficients between simulated and observed yields of maize, for small and large spatial scales, with different sowing dates around the basic sowing date (BSD), from the 1989/1990 to the 2004/2005 cropping season, in the state of Rio Grande do Sul, Brazil. | 2018-12-06T12:13:33.611Z | 2013-04-18T00:00:00.000 | {
"year": 2013,
"sha1": "8ff3cfd3927609b7af138e18a2b6e8cd3b2090f7",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/pab/a/Xv3rP9bFqn8KJWr5bj4fRcT/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "88b0d810d8ac037c610f2e83ee4cba7bc05457dd",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
254965217 | pes2o/s2orc | v3-fos-license | Implementation and perceived impact of the SWAN model of end-of-life and bereavement care: a realist evaluation
Objectives: To evaluate the End-of-Life and Bereavement Care model (SWAN) from conception to current use.
Design: A realist evaluation was conducted to understand what works for whom and in what circumstances. The programme theory, derived from a scoping review, comprised: person- and family-centred care, institutional approaches and infrastructure. Data were collected across three stages (May 2021 to December 2021): semi-structured, online interviews and analysis of routinely collected local and national data.
Setting: Stage 1: Greater Manchester area of England, where the SWAN model was developed and implemented. Stage 2: Midlands. Stage 3: National data.
Participants: Twenty-three participants were interviewed: Trust SWAN leads, end-of-life care nurses, board members, bereavement services, faith leadership, quality improvement, medicine, nursing, patient transport, mortuary, police and coroners.
Results: Results from all three stages were integrated within themes, linked to the mechanisms, context and outcomes for the SWAN model. The mechanisms are: SWAN is a values-based model, promoting person/family-centred care and emphasising personhood after death. Key features are memory-making, normalisation of death and 'one chance' to get things right. SWAN is an enablement and empowerment model for all involved. The branding is recognisable and raises the profile of end-of-life and bereavement care. The contextual factors for successful implementation and sustainability include leadership, organisational support, teamwork and integrated working, education and engagement, and investment in resources and facilities. The outcomes are perceived to be: a consistent approach to end-of-life and bereavement care; a person/family-centred approach to care; empowered and creative staff; and an organisational culture that prioritises end-of-life and bereavement care.
Conclusion: The SWAN model is agile and has transferred to different settings and circumstances. This realist evaluation revealed the mechanisms of the SWAN model, the contextual factors supporting implementation and the perceived outcomes for patients, families, staff and the organisation.
INTRODUCTION
Death, dying and bereavement are a natural part of life, and yet in the technologically developed countries of the world, death and dying increasingly take place in clinical or institutional settings. Experiences of death, dying and bereavement have far-reaching consequences for individuals and society.1 Many people have rarely, if ever, seen a dead person until it is somebody close to them, and Walter2 suggests that this has led to support for the dying and bereaved being delegated to professionals. Up to one-third of non-elective hospital inpatients at any one time may be in the last year of life,3 and in the UK, hospital remains the most common place of death.4 However, providing care to the dying and bereaved poses multiple challenges to health professionals, including complex attitudes to death and dying, uncertainty about clinical roles and responsibilities, and difficulties in coordination and communication, which can all impact the quality of care provided to patients.5 Institutional and organisational cultures are also thought to influence end-of-life care, with some acute clinical settings being more equipped for curative than palliative care.6 Improving care of people at the end-of-life and those who are bereaved is a key policy priority within the UK7 and worldwide,8 9 and this includes those bereaved by both expected and unexpected deaths. Palliative care has recently been integrated into the WHO's definition of Universal Health Coverage.10
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ Purposive sampling was used to include staff from different disciplines and settings.
⇒ Data were triangulated from different sources across the three project stages.
⇒ The interviews were conducted at only two different organisations in England, but contextual information is provided to aid transferability.
⇒ This study only included organisations where the SWAN model has been successfully implemented.
⇒ The study did not include data directly collected from patients and families.
Care of the dying and grieving is complex and does not fall under the remit of any single profession. Research examining the changing needs for end-of-life care in an ageing population recognises that a paradigm shift is required, from the provision of palliative services to a palliative approach to care, operationalised through integration into systems and models of care across institutional and organisational boundaries.11

The SWAN model development and context
From the early 2000s, an integrated care pathway, the Liverpool Care of the Dying Pathway, was embedded in UK policy but then withdrawn in 2014 after mounting controversy,12 with recommendations for personalised end-of-life care plans instead.13 Since then, alternative guidelines have been published.7 14 15 In addition, there is the Gold Standards Framework for training primary healthcare providers to identify and plan care for people at the end-of-life.16 However, in many care settings, the lack of an overall model led to a vacuum in end-of-life care. To enhance the provision of end-of-life and bereavement care, the SWAN model of care (see definition, table 1) was developed in Greater Manchester, in the north of England, in 2012. This values-based model of care incorporates both expected and unexpected deaths, and focuses on enabling a flexible, compassionate workforce to provide end-of-life and bereavement care across a variety of settings. The SWAN model meets the criteria for a complex intervention, as it comprises several interlinked components, is dependent on the personnel delivering the model, and its delivery is adjusted to different contexts and settings.17

The model's development was based at one National Health Service (NHS) healthcare organisation ('Trust'), comprising several hospitals, and was implemented across the local community too, involving services such as the coroners and the police force for unexpected deaths. The model's development and implementation were led by a designated senior nurse with a multidisciplinary team.
A small team of end-of-life care and bereavement nurses were employed to enable the wider workforce to deliver the SWAN model of care across the hospitals and community. Since inception of the SWAN model, it has been expanded and adopted by other NHS Trusts in the UK. However, despite anecdotal support for the model, there has been no formal evaluation of its use in clinical practice. In general, evaluation of end-of-life and bereavement care is considered ethically challenging18 and robust evaluations of hospital-based end-of-life and bereavement services are scarce.19 This paper presents findings from a realist evaluation of the SWAN model.
Aim and objectives
The study aim was to evaluate the End-of-Life and Bereavement Care model (SWAN) from conception to current use. The study objectives were:
1. To explore the conception, development and implementation of the SWAN model.
2. To investigate how the model was introduced across an organisation and community, and the wider implementation into other organisations.
3. To evaluate perceived impact of the SWAN model in different circumstances from the perspective of those using the model (including the COVID-19 pandemic).
4. To explore the emotional, personal, professional and practical experiences of staff using the model.

Table 1 The SWAN model of care

For individuals expected to die:
Aim of the swan: to promote dignity, respect and compassion at the end-of-life.
Sign: is the patient believed to be entering the dying phase of life? Start the individual plan of care and support for the dying person.
Words: sensitively communicate with the patient and those important to the patient and family.
Needs: are the needs of the patient and family being met, documented and reviewed regularly?
Actions: step outside the box and facilitate what is important to the patient and family.

For individuals who have a sudden/unexpected death:
Aim of the swan: to promote dignity, respect and compassion following death.
Sign: ensure the provision of private space is identified.
Words: sensitively communicate with the family.
Needs: are the needs of the family being met, documented and reviewed regularly?
Actions: step outside the box and facilitate what is important to the family.

In both pathways, the Swan symbol is placed on the door or curtain of the bay/room, swan room, swan suite or mortuary in which the patient/family are being cared for or supported, and staff have 'permission to act and break the rules that do not exist'.
The features of the evaluation were: the programme (the SWAN model); the theory (the existing end-of-life/bereavement care literature); and the mechanisms for evaluation.
Mechanisms for evaluation
The mechanisms for evaluation were derived from the literature during a scoping review at the design stage of the study. The scope included research that explores the mechanisms of bereavement interventions, the impact of care before and at the time of death on outcomes related to family and staff, and the contextual conditions that facilitate or impede the provision of such care. Outcomes from the review are published elsewhere.19 This review highlighted limitations in the quality and quantity of research available in relation to evaluating services and interventions at the end-of-life. However, from the literature identified, there are key mechanisms that appear to be of value, with three themes identified: person and family centred care, institutional approaches and infrastructure; these formed the programme theory for the evaluation.
Patient and public involvement
A Steering Group was appointed to provide governance to the research project, which included patient and public involvement (PPI), providing independent advice, subject and lived experience/expertise, oversight, monitoring of progress and supporting the investigators in the delivery of the project. The PPI representative had access to all project progress documents and input into the design of the study, data analysis and dissemination strategy. The PPI representative reviewed and commented on the final report prior to release.
Evaluation design
Data were collected across three stages from May 2021 to December 2021; see table 2 for the stages, settings, purpose, data collection methods and participants.
Settings
The Stage 1 data collection took place at an NHS organisation with several hospital sites ('Trust') in an urban area in the north of England (Manchester), where the SWAN model was originally developed and implemented. The Stage 2 setting was purposively selected as a Trust in a different area of England, where the SWAN model had been implemented for a minimum of 4 years, and where there was an emergency department, so that there were deaths in different circumstances.
Data collection
At Stages 1 and 2, data were collected through online semi-structured interviews with participants, purposively selected for their experience with the SWAN model, and to ensure inclusion of varied disciplines. Participants were approached by email, participant information sheets were provided and all participants signed written consent forms. Stage 1 topic guides (see online supplemental material 1) were developed by the project team,21 based on the scoping review and mechanisms identified, and piloted prior to use. The Stage 2 topic guide was adapted from the Stage 1 guides, to reflect initial themes from the Stage 1 data analysis, with the aim of investigating whether the mechanisms, context and outcomes were apparent at the Stage 2 site. Interviews were audio-recorded and professionally transcribed.

Stage 3 comprised analysis of routinely collected data, as listed in table 2. The Care Quality Commission (CQC) monitors care standards in England, producing periodic reports. The Stage 1 Trust local audit data pertained to best-practice end-of-life targets: the presence of end-of-life care plans, and mortuary transfer time within a 6-hour maximum time frame. Previous delays to mortuary transfer reduced opportunities for tissue donation due to non-compliance with national guidelines, which removed choice from families about this option. Families may perceive consenting to tissue and organ donation as fulfilling their loved one's known preferences and giving meaning to their life.22
Data analysis and synthesis of findings
The interview data were analysed thematically, using Braun and Clarke's23 six-phase method. Following familiarisation with all the Stage 1, Phase 1 interview data (Phase 1 'familiarisation'), coding commenced (Phase 2 'generating initial codes') by developing a deductive coding framework, drawing from the scoping review. As this framework was applied to each transcript, the framework was further developed by adding inductive codes from the data. This iterative process continued throughout the coding process. Two researchers reviewed the data set and codes to identify candidate themes and subthemes, with their contributing codes (Phase 3 'searching for themes'). At Stage 2, the coding framework was used to code the transcripts; while most Stage 2 data could be coded with existing codes, some additional inductive codes were added. The coding was then considered against the Stage 1 themes, which were reviewed and refined (Phase 4 'reviewing themes' and Phase 5 'defining and naming themes'). Credibility of the results was enhanced by reviewing the original interview transcripts to make sure that all themes were grounded in the data or explained by the researchers' interpretive scheme.
The Stage 3 CQC reports for England (2015-2020) were searched for the term SWAN; there were 142 citations, relating to six NHS Trusts across England with multiple hospital sites. The qualitative comments related to end-of-life care in these Trusts' reports were extracted and content analysed, and then merged with the themes from the Braun and Clarke analysis of the Stages 1 and 2 interview data. During report production (Phase 6), all these data contributed to the final version of the themes and subthemes.
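As a rough illustration of this search step, a minimal sketch is given below, assuming the CQC reports were available as plain-text files; the directory layout and file naming are hypothetical, not part of the study:

```python
import re
from pathlib import Path

def count_swan_citations(report_dir):
    """Count occurrences of the term 'SWAN' in each report file."""
    counts = {}
    for path in sorted(Path(report_dir).glob("*.txt")):  # hypothetical layout
        hits = re.findall(r"\bSWAN\b", path.read_text(encoding="utf-8"))
        if hits:
            counts[path.name] = len(hits)
    return counts

# The study reported 142 citations relating to six NHS Trusts
counts = count_swan_citations("cqc_reports/")
print(sum(counts.values()), "citations across", len(counts), "reports")
```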
The Stage 3 quantitative data (Healthcare Quality Improvement Partnership (HQIP) and local audit data) were analysed using descriptive statistics, such as difference in time elapsed, frequency and performance against national benchmarks, for example the best-practice indicator for tissue donation. Data synthesis was achieved through the convergence of the data sets (qualitative and quantitative).24 To enhance the quality of the findings,25 integration of findings was applied through triangulation, whereby the Stage 3 quantitative results were reviewed against the themes from the qualitative data sets (Stage 1, Stage 2, Stage 3 CQC reports), to produce the final, integrated report, with overall themes aligned with mechanisms, context and outcomes for the SWAN model.
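As one concrete example of the descriptive benchmark analysis, the sketch below computes performance against the 6-hour mortuary-transfer indicator from paired timestamps; the records shown are invented and the data layout is an assumption, not the study's actual audit format:

```python
from datetime import datetime, timedelta

BENCHMARK = timedelta(hours=6)  # local best-practice maximum transfer time

# Hypothetical (time of death, time of mortuary transfer) pairs
records = [
    (datetime(2020, 1, 5, 14, 0), datetime(2020, 1, 5, 18, 30)),
    (datetime(2020, 1, 7, 2, 15), datetime(2020, 1, 7, 9, 45)),
    (datetime(2020, 1, 9, 11, 0), datetime(2020, 1, 9, 15, 10)),
]

elapsed = [transfer - death for death, transfer in records]
within = sum(e <= BENCHMARK for e in elapsed)
print(f"{within}/{len(elapsed)} transfers within 6 h "
      f"({100 * within / len(elapsed):.0f}%)")
```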
RESULTS
The integrated findings are presented for the mechanisms, context and outcomes. The data extracts are identified as: S1P1_1-S1P1_8 (Stage 1, Phase 1 participants); S1P2_1-S1P2_10 (Stage 1, Phase 2 participants); S2_1-S2_5 (Stage 2 participants); and Stage 3 with the data source, for example, S3_CQC. Table 3 summarises key findings, related to the realist evaluation questions and the programme theory of person-centred care, institutional approaches and infrastructure. The results are presented with data extracts from all three stages of the evaluation.

Participants perceived SWAN to represent a person/family-centred approach to care, with an emphasis on personhood continuing after death too:

That philosophy of actually, the patient continues to be the patient and should be treated with care and dignity. (S1P2_5)

Participants viewed important aspects of the model as normalisation of death, and 'one chance': 'You only have one opportunity to make a difference when somebody dies' (S1P1_8). The SWAN model includes the key principles of 'Break the rules that don't exist' and 'Permission to act'. These form the basis for the SWAN model as 'an enablement model' (S1P1_6), empowering staff to act in the best way for people at, and after, end-of-life, without constraints. The SWAN symbol was agreed through local consultation and is an acronym for: Signs, Words, Actions and Needs (see table 1). The SWAN model branding, which is used for all the resources and facilities, signified dying and bereavement and raised the profile of the model.

Leadership and high-level organisational support
At the Stage 2 Trust, there was a SWAN lead, but participants considered that high-level organisational support has 'ebbed and flowed' with senior staff changes (S2_5). The Stage 2 participants explained that end-of-life care was managed through a steering group with implicit rather than explicit Board support; without high-level support, 'Sometimes, we would struggle to implement things' (S2_2). The Stage 3 CQC data revealed high-level recognition for the model at some Trusts, for example:

The SWAN initiative was given emphasis and importance at the trust in being central to its development of end-of-life care services. (S3_CQC)
Teamwork and integrated working within and across organisations
A further essential contextual requirement for the SWAN model's implementation and sustainability was perceived to be teamwork and integrated working: 'It's a real team approach: there's no way that one person, in one organisation, can do it alone: it has to be everybody's business' (S2_2). Participants reflected that staff from across the organisation and beyond together delivered the SWAN model, working 'with other people we'd never worked with before' (S1P1_3). Relationships developed with voluntary organisations, faith groups, the police and coroners: 'It's about that integrated model around death' (S1P1_4). The Stage 3 national CQC data also referred to integrated working:

Collaboration with multiple local healthcare stakeholders to implement a comprehensive end-of-life plan. (S3_CQC)

Workforce resourcing, education and engagement
Stage 1 and 2 participants explained how specialist teams of skilled and knowledgeable bereavement/end-of-life nurses facilitated the wider workforce to deliver the SWAN model. The Stage 3 national CQC analysis also indicated dedicated staff resourcing for the SWAN model, such as bereavement nurses and educationalists.
Stage 1 and 2 participants emphasised the need for education of the workforce, for implementation and on an ongoing basis, due to staff turnover. They explained that there are regular, multidisciplinary SWAN bereavement study days and coaching in practice. The importance of staff engagement was highlighted: 'It's got to be owned by every single person that works in the organisation' (S2_2). The Stage 3 national CQC report analysis identified inclusion of the wider workforce, for example, the domestic, portering, chaplaincy and mortuary staff, and referred to workforce engagement too:

There was an overriding culture of passion and enthusiasm among staff in all areas to deliver care within the principles of the SWAN scheme [model]. (S3_CQC)

Both Stage 1 and 2 participants described how volunteers play an important role, particularly supporting the SWAN model resources. In addition, developing SWAN champions among the Trust staff built local expertise, empowering the wider workforce to take ownership.
At the Stage 2 Trust, some wards implemented the model better than others, and so 'it was about finding out how they were managing to achieve it where others weren't' (S2_2). A key factor for success was the engagement and commitment of the ward leader and whole ward team. Staff in some areas found end-of-life conversations difficult, often for personal reasons, and so 'it was about recognising areas that were struggling and individuals that were struggling and spending time with them' (S2_2).
SWAN resources and facilities
At the Stage 1 Trust, participants emphasised the importance of investing in resources for the SWAN model:

Unless we equip staff with the tools to be able to do what we ask, I don't believe it can happen in the way that we want it to. (S1P1_6)

At both the Stage 1 and Stage 2 Trusts, the participants explained that funding is mainly through charitable sources, legacies and community engagement. Participants discussed how resources for families promote comfort and reduce stress, and support memory-making and different faiths. The Stage 1 and 2 Trusts have branded SWAN property bags for the person's belongings. The Stage 3 CQC data also referred to dedicated SWAN resources on each ward.
Stage 1 and 2 participants discussed the bereavement facilities for families, including comfortable SWAN visiting rooms in the mortuary and memorial gardens.
At the Stage 2 Trust, side rooms were converted into relaxing 'SWAN rooms', with comforting resources for individuals and families. The participants explained how continuing improvements result from staff creativity, community engagement and patient/family feedback: 'it's still ongoing, there'll always be things that they can add to it' (S1P2_1). Therefore, while the resources and facilities are necessary for implementation, they are also outcomes of embedding the SWAN model.
Outcome pattern: what are the intended and unintended consequences of the SWAN model?
While much of the data illuminating the outcomes are based on perceptions from the Stage 1 and Stage 2 participants, there was also relevant data from the Stage 3 analysis of routinely collected data. Most subsections report on the intended consequences of the SWAN model. However, there was some staff resistance, which, while not intended, could be expected when implementing any new programme.
Consistent standards for end-of-life and bereavement care within and across organisations and settings
The Stage 1 participants considered that implementation of the SWAN model leads to a consistent approach:

It's equitable, doesn't matter who you are, what age you are, what faith you are, everybody receives that outstanding end-of-life care. (S1P2_7)

Person/family-centred care at end of life and after death
The Stage 3 HQIP National Audit of Care at the End-of-Life revealed little difference between the average scores for non-SWAN and SWAN-implemented hospitals, perhaps highlighting the difficulty of measuring end-of-life care quality. The qualitative data, from the CQC reports and the Stages 1 and 2 interviews, provided perceptions of how person/family-centred care was achieved. The Stage 3 CQC reports analysis indicated that the SWAN model promotes early recognition of dying, enabling staff to support the person and family. At the Stage 2 Trust, participants reported that family feedback confirms appreciation for the SWAN model; in comparison with the previous LCP, 'families are more involved in the discussions about their loved ones' (S2_1).
The Stage 1 and 2 participants perceived that the SWAN model promoted more person/family-centred care approaches before and after death, for example, creating a homely environment with music and photographs, and individualised mouthcare using favourite flavours. They believed that memory-making is an important component: 'Capturing those moments and those memories is so significant for people' (S1P1_3). Examples from Stage 1 and 2 participants included reuniting families and taking photographs, and ensuring people see their pets for the last time. Mementoes include handprints, footprints, locks of hair and prints of lip kisses. The participants believed that memories are also about spending time together, before and after death, as 'you can never get those times back again' (S1P2_9).
The Stage 1 and 2 participants described how individualised care after death includes favourite clothing, leaving meaningful items with the person, meeting faith needs and making the person 'comfy and cosy' (S1P1_6) on a bed, rather than a trolley, with bedding in their favourite colour. Patients are transferred to the mortuary in their beds, rather than the traditional metal box. Stage 3 local audit data analysis demonstrated that transfer to the mortuary is now consistently achieved within a maximum of 6 hours across the Stage 1 Trust, thus enabling families the option of tissue donation, denied to them otherwise.
Empowered and creative staff who take pride in delivering SWAN
Stage 1 and 2 participants perceived the SWAN model as 'an enablement model' (S1P1_6), empowering staff to act in the best way for people at, and after, end-of-life, and to empower families, so they have some control. Staff at the Stage 1 and 2 Trusts displayed satisfaction and pride:

They couldn't wait to show off what they were doing about end-of-life care, about their relatives' rooms. (S1P2_5)

The participants reported that staff are confident to take the initiative, fulfilling last wishes, such as weddings, early birthday celebrations or pets visiting. During the COVID-19 pandemic, there were still emergency weddings: 'the boost that gave the nursing team during a pandemic' (S2_2).
Organisational culture that prioritises end-of-life and bereavement care
The Stage 1 participants reflected that prior to the SWAN model being developed and implemented, 'End of life care wasn't really very high on the agenda' (S1P1_5). Stage 1 and 2 participants perceived that end-of-life and bereavement care are now a high priority in their Trusts and everybody's business: 'we've got a hospital that is full of people who are passionate about good end-of-life care' (S1P2_5). They reported a more open culture of talking about death and dying: 'there's not this fear around death any more' (S1P2_10). The Stage 3 CQC data mirrored these findings:

Through the SWAN model of care, the hospital wanted to promote a culture that end-of-life care was everybody's business, which involved talking about it more and for all staff to contribute to its implementation. (S3_CQC)

A further perceived impact is the valuing of all staff roles for supporting end-of-life and bereavement care, and 'the proactive, celebrating of good practices' (S1P1_5). Participants reported that a culture of feedback and improvement, along with integrated working, leads to quick resolution of concerns. There were perceptions that systems for dealing with the administrative and legal aspects following death are now more efficient, which was considered to reduce time and stress for families. At both Stage 1 and 2 Trusts, the bereavement teams debrief staff, reviewing how end-of-life care can be improved, supporting reflection and learning. In addition, there was a culture of support for staff with death and dying, including after the death of a colleague; one participant reflected that the support was 'everything to us at the time' (S1P2_3).
Staff resistance
Stage 1 and 2 participants reflected that there was some resistance to the SWAN model during implementation, and some still remains. Some staff, particularly palliative care teams, felt threatened by the SWAN model, which they saw as encroaching on their domain:

I think people were quite protective-myself probably being one of them as well: 'Well, actually, we do a lot of that' […] I think people took that as a bit of a criticism. They didn't know what they didn't know. (S1P1_5)

In addition, the SWAN model did not align with some staff's views about how care should be delivered; they believed:

It's too pink and fluffy. 'We only need to control symptoms.' But actually, the memories that will live on with the family are the pink and fluffy things. (S1P1_6)

At the Stage 2 Trust, the Trust SWAN lead expressed her shock at some of the staff resistance, but also reflected that for some staff, death and dying is a difficult topic.
DISCUSSION
The study aim was to evaluate the End-of-Life and Bereavement Care model (SWAN) from conception to current use, using a realist approach to discover: what works, for whom and in what circumstances? A key requirement of realist evaluation is to identify the different layers of social reality which make up and surround programmes. The programme theory developed from the scoping review indicated that person/family-centred care, institutional approaches and infrastructure are important for end-of-life and bereavement care, and these components were reflected in the integrated results. The SWAN model's mechanisms include its branding and agility, that it is values-based and normalises death, promotes person/family-centred care, including memorialisation and the 'one chance' to get things right, and empowers and enables all involved. The SWAN model has transferred to varied settings and circumstances. The contextual factors for implementing the SWAN model are leadership and high-level organisational support, teamwork and integrated working, workforce resourcing, education and engagement, and availability of resources and facilities to enable the SWAN model delivery. The outcomes are: consistency in standards for end-of-life and bereavement care, person/family-centred care at end-of-life and afterwards, empowered and creative staff, and an organisational culture that prioritises end-of-life and bereavement care. As an unintended consequence, staff resistance to the model was found, though any change can engender such responses. These principal results are next discussed in relation to other studies.
The SWAN model mechanisms appear to work due to being values and principles based, which frees the workforce to deliver person/family-centred care that is based on the individuals' needs rather than being constrained by policies or guidelines. The values of dignity, respect and compassion are central to the SWAN model; previous research has confirmed that patients and families appreciate compassionate support before and after death.26 One of the principles, as shown in table 1, is to 'break the rules that don't exist' in order to meet the person and family's individual needs. Similarly, in a realistic evaluation of a coordinated end-of-life care service, Efstathiou et al27 found that the service's success and acceptability was partly due to how it challenged traditional ways of working. However, the staff resistance to the SWAN model encountered was perceived to arise partly because some staff could not understand the values base of the model, and it threatened their established ways of working.
The contextual factors that enabled implementation and embedding of the SWAN model were found to align across the different data sources. The need for consistent, senior Trust support for the SWAN model was an important contextual factor, not only to support changes in practice but to ensure that end-of-life care remains a priority and that continued funding and resources are available to support delivery. Hospital-wide leadership has previously been identified as important for implementing new bereavement and end-of-life care services.28 However, the Stage 2 Trust had sometimes lacked this senior-level support, which had affected delivery of the SWAN model.
The SWAN model development, implementation and delivery was characterised by collaborative working, involving different disciplines and departments across the organisation. Transdisciplinary working is considered to be an important factor in the success of end-of-life care services in different settings.29 At the Stage 1 Trust, following the Manchester Arena bombing in 2017, the established interpersonal relationships across sectors facilitated agility and adaptations to how the SWAN model was used to provide person/family-centred care for the victims and their families. SWAN nurses were cited by the bereaved families as offering valued support, and by council workers who felt reassured that families were being cared for while they did their aftermath work.30 Indeed, the implementation of the SWAN model is one of the recommendations of the first official report, and it has continued in use in the coroner and police service for sudden and unexplained deaths.
The importance of workforce education for the delivery of end-of-life and bereavement services has been previously recognised.27 28 Across the data sources it was clear that education and engaging the whole workforce with the SWAN model's principles were necessary on an ongoing basis, to overcome the common issue of a workforce that is not confident in death, dying and bereavement.31 Having the appropriate resources and facilities to deliver the SWAN model based on the values and principles was also an essential contextual factor. Many of these resources and facilities seem unique to the SWAN model, and their continuing development stemmed from the empowerment of the staff who embraced the SWAN model's values.
Data from all three stages of the evaluation indicated that the SWAN model provided an institutional approach to end-of-life and bereavement care, resulting in consistent standards across organisations and other local settings too. However, to achieve this consistency, the contextual factors discussed previously were important. In particular, Stage 2 data illustrated how a lack of senior Trust support could affect achievement of consistent standards. Person/family-centred care was an important outcome, based on the programme theory, and supports a systematic review that found acute care nurses provide vital patient-centred and family-centred care at the end-of-life and during the bereavement period.32 Participants at both the Stage 1 and 2 Trusts reported the embedding of memory-making activities; these types of interventions were found to be valued by bereaved families in previous qualitative research.19 The SWAN model of care improved the overall attention to end-of-life and bereavement care, as demonstrated through the qualitative findings as well as routinely collected data, such as end-of-life care plans and timely mortuary transfer, which then enabled potential tissue donation, as desired by families, in accordance with their loved ones' preferences.
For the workforce, the SWAN model promotes empowerment and engenders satisfaction and pride about the care they are providing. Similarly, Walsh et al28 reported staff pride when a new bereavement service was implemented across a hospital. Data from all three stages of the evaluation indicated the impact of the SWAN model on organisational culture, with death and dying now seen as a high priority and everyone's business. This institution-wide approach to prioritising end-of-life and bereavement care has been found to be highly valued by staff in clinical settings where death occurs.33 A further outcome was support provided for staff affected by death and bereavement, personally or professionally. Previous research has highlighted the importance of providing support for staff, particularly where there are unexpected or traumatic deaths.34

Meaning of the study: possible explanations and implications for clinicians and policymakers
The scoping review indicated there is limited knowledge and understanding of end-of-life care interventions. However, the evaluation results indicate that the SWAN model architects had effectively implemented a best-practice model to deliver person/family-centred care, based on available evidence, including bereavement care and memorialising.35 The values and principles basis for the SWAN model, rather than sets of guidelines, with an emphasis on normalisation of death and delivery of end-of-life and bereavement care by the whole workforce, appears to be a successful model. The SWAN model has grown organically, being implemented in different NHS organisations, the community and the care home sector. This evaluation indicates that for the model to be successfully implemented, there needs to be consistent senior-level commitment and leadership, workforce engagement and education, teamwork and integrated working, and appropriate investment in roles and resources. The SWAN model is currently being used in varied organisations across England. Consistent standards of end-of-life and bereavement care could be achieved nationally if the SWAN model was integrated into healthcare policy.
Strengths and limitations
Previous evaluations of end-of-life and bereavement care are scarce and are predominantly survey-based19; in contrast, this evaluation, using a realist approach, included qualitative interview data as well as analyses of routinely collected qualitative and quantitative data. The use of a mixed-methods realist evaluation captured the nuances of what it is about the SWAN model that brings about change, and the context ('for whom' and 'in what circumstances') in which SWAN works. The interview data were gathered from only two NHS Trusts, though national data from audits and reports were also analysed. The evaluation included organisations that have successfully implemented the SWAN model; it did not include organisations that have attempted to implement the SWAN model without success. The evaluation also did not include data directly collected from patients or families.
Unanswered questions and future research
This evaluation focused in depth on the Trust where the SWAN model was first developed and implemented, and a second Trust that had since implemented and embedded the model.
The Stage 3 national data confirmed that SWAN has been implemented at other Trusts across England too, with comparable outcomes. An evaluation of the use of the SWAN model at a national level is recommended, to gain a greater understanding of the concepts and mechanisms to support national implementation. We also recommend the inclusion of direct patient and family data in future studies. Further research could evaluate the SWAN model's implementation in different settings and circumstances, such as care homes and the community, including the unique role of the coronial bereavement nurse. In addition, future research could investigate why the model has not worked in other organisations and the challenges associated with failed implementation attempts.
CONCLUSION
This realist evaluation has demonstrated the mechanisms (what it is about SWAN that brings about change), the context ('for whom' and 'in what circumstances' will SWAN work?) and the outcomes of the SWAN model's implementation. The evaluation has revealed that, with successful implementation, the outcomes include consistent delivery of person/family-centred care at end-of-life and afterwards, staff satisfaction and pride, and the development of an organisational culture that prioritises care for the dying and bereaved.
METHOD
A realist evaluation aims to discover: 'what works, for whom in what circumstances and in what respects and how?'20 Applied to the SWAN model, the focus was:
► Mechanism: what is it about the SWAN model that brings about change?
► Context: for what and for whom and in what circumstances will the SWAN model work?
► Outcome pattern: what are the intended and unintended consequences of the SWAN model?
Mechanism: what is it about the SWAN model that brings about change?
Data from Stages 1 and 2 interviewees, and the Stage 3 CQC report analysis, highlighted the SWAN model as values-based, including compassion, kindness, respect, dignity and discretion. At the Stage 2 Trust, the SWAN model drew positive responses from staff, comparing it favourably with the Liverpool Care Pathway, which preceded it:

Where there was a reluctance in the Liverpool Care Pathway, people are generally more accepting of this, this is the right thing. (S2_1)

The Stage 3 CQC reports data confirmed the SWAN model as an entity/brand; SWAN was recognisable to CQC inspectors and used by staff to describe a model of care. The SWAN model is agile and can be adapted and transferred to different circumstances and settings, based on the values and principles, and without the constraints of guidelines.

Context: for what and for whom and in what circumstances will the SWAN model work?
SWAN is transferable to different settings and circumstances
While the SWAN model was developed and first implemented at the Stage 1 NHS Trust in 2012, where it remains embedded, it has since transferred to many other Trusts, mainly through informal networking. At the Stage 2 Trust, participants revealed that, after reviewing several models of care, the SWAN model was selected and implemented in 2015. The Stage 3 CQC report analysis revealed citations of SWAN relating to six NHS Trusts, which incorporated numerous hospitals, demonstrating wide application across England. Participants from the Stage 1 and 2 Trusts reported transfer of the SWAN model into their local community settings, including care homes. Participants believed that the SWAN model is adaptable to differing settings and circumstances:

Whatever situation that you're in, whether you're stood in a mortuary, whether you're out in the community or whether you're in an acute setting. For me, it's transferable across all those areas. (S1P1_3)

Two specific examples are, at the Stage 1 Trust, supporting families after the 2017 Manchester Arena bombing, and, at both the Stage 1 and 2 Trusts, adaptations during the COVID-19 pandemic. Several Stage 1 participants recalled how, after the Manchester Arena bombing, the model lead and bereavement nurses worked with the coroner and police to provide individualised care using SWAN model principles, including mementoes and offering choices in their care. The established relationships with coroners ensured families could spend time with their loved ones and touch them, recognising that 'This is their last time with their loved one' (S1P1_4). At the Stage 1 Trust, during the COVID-19 pandemic, participants believed that the SWAN model 'enabled us in a huge way because we had a model that was recognised and well known' (S1P2_6). They explained how staff focused on what could be done, adapting ways of working to the new circumstances. A new healthcare worker role of 'Cygnet' enabled individualised one-to-one care: fulfilling last wishes, bridging communication between the person and family, supporting mementoes and ensuring no one died alone. At the Stage 2 Trust, participants considered that they still delivered person/family-centred care through focusing on how they could meet individual needs within the current regulations.

At the Stage 1 Trust, there is now a bereavement care nurse based in the coroner's office to facilitate the SWAN model, so that families in the community 'get the same service tailored to their needs as they would if it was a bereavement from the hospital' (S1P1_8). The Stage 1 and 2 participants described how the roll-out of the SWAN model across community and care homes in their localities promoted consistency across care environments. Stage 3 analysis of the Stage 1 Trust's audit data showed more people had end-of-life care plans since the SWAN model implementation. The Stage 3 CQC reports also referred to consistency:

The trust's end-of-life individualised SWAN care plans were being used consistently
throughout the hospital. (S3_CQC)

However, at the Stage 2 Trust, participants considered that the lack of resourcing and staff turnover led to complete consistency being aspirational.
Table 2 Data collection: stages, settings, purpose, participants/sources and methods

Stage 3 routinely collected data sources:
► The 2019 Healthcare Quality Improvement Partnership national care at the end-of-life audit data.
► Local (Stage 1 Trust) 2017-2020 audits: time of death to mortuary elapsed time; the presence or absence of a care plan for end-of-life where end-of-life was expected and not sudden.
NHS, National Health Service.
Table 3 Realist evaluation questions and key findings related to the programme theory
"year": 2022,
"sha1": "659883a586318661db442dae6edd3b4b717c34fe",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/12/e066832.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "8c4357879a89696d1546cb25f3933de8bea5da60",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Relationships between volume and pressure measurements and stroke volume in critically ill patients.
Objective: To evaluate the relationships between the changes in stroke volume index (SVI), measured in both the aorta and the pulmonary artery, and the changes in intrathoracic blood volume index (ITBVI), as well as the relationship between changes in aortic SVI and changes in the pulmonary artery wedge pressure (PAWP).
Design: Prospective study with measurements at predetermined intervals.
Setting: Medical intensive care unit of a university hospital.
Patients and methods: One hundred and fifty-four measurements were taken in 45 critically ill patients with varying underlying disorders. Aortic SVI and pulmonary arterial SVI were determined with thermodilution. PAWP was measured using a pulmonary artery catheter. ITBVI was determined with thermal-dye dilution, using a commercially available computer system.
Results: A good correlation was found between changes in ITBVI and changes in aortic SVI. However, this correlation weakened when changes in ITBVI were plotted against changes in pulmonary arterial SVI, which was in part probably due to mathematical coupling between ITBVI and aortic SVI. A good correlation between changes in ITBVI and changes in aortic SVI could also be established in most of the individual patients. No correlation was found between changes in PAWP and changes in aortic SVI.
Conclusion: ITBVI seems to be a better predictor of SVI than PAWP. ITBVI may be more suitable than PAWP for assessing cardiac filling in clinical practice.
Introduction
Assessing the volume status of critically ill patients is a routine task for intensivists. Clinical assessment by history taking, physical examination, fluid balance or radiographic findings provides belated or unreliable information [1][2][3][4]. Apart from clinical skills, invasive monitoring is widely applied as a tool for assessment of volume status. In its simplest form, central venous pressure (CVP) can be measured, by using a central venous catheter. In more complicated cases a pulmonary artery catheter is often used, with PAWP as the variable for determining cardiac filling. Because CVP and PAWP depend not only on cardiac filling, but also on ventricular compliance, these pressures are only poor reflections of a patient's volume status [5][6][7]. Moreover, CVP and PAWP are absolute intravascular pressures, meaning that changes in intrathoracic pressures will influence the recorded values of CVP and PAWP. This applies in particular to mechanically ventilated patients who are ventilated with positive end-expiratory pressure. Thus, therapeutic decisions based on CVP and/or PAWP may be based on inaccurate measures of a patient's volume status.
The thermal-dye dilution technique was originally introduced as a method to measure extravascular lung water (EVLW) [8]. In recent years the emphasis has moved to the ITBVI as the most important variable that can be determined using this technique. In a limited number of studies, ITBVI has been shown to correlate well with the cardiac index (CI), and it appears to be a better measure of cardiac filling than CVP or PAWP [7,9-11]. Lichtwarck-Aschoff et al [7] showed a correlation coefficient of 0.65 between changes in ITBVI and SVI in 21 patients with acute respiratory failure. Gödje et al [9] showed a correlation coefficient of 0.87 between changes in ITBVI and changes in CI in cardiac surgery patients. Recently, Sakka et al [10] showed a correlation coefficient of 0.67 between changes in ITBVI and changes in SVI during the early phase of haemodynamic instability in patients with sepsis or septic shock.
In a mixed group of critically ill patients we studied the correlations between SVI and PAWP, measured using a pulmonary artery catheter, and the correlations between SVI and ITBVI, measured with a commercially available computer system using the thermal-dye dilution technique.
Methods
The data presented in this study were prospectively obtained from 45 critically ill patients. All patients concomitantly participated in four other studies. In those studies, haemodynamic patterns of specific clinical entities, with the emphasis on EVLW, were investigated. ITBV, however, was measured specifically for the present study. The groups consisted of patients with acute respiratory distress syndrome (ARDS), patients with acute cardiogenic pulmonary oedema [12], patients with septic shock [13], and patients with hepatic cirrhosis requiring a transjugular intrahepatic portosystemic shunt (TIPS). In all of these studies, patients were monitored using a pulmonary artery catheter (7.5-F Swan-Ganz catheter, Model VS 1721; Ohmeda, Swindon, UK) and a 4-Fr fibreoptic catheter (Pulsiocath PV 2024; Pulsion, Munich, Germany), introduced into the descending aorta through a 6-Fr introducer sheath (Model 616150A; Ohmeda) and connected to a computer system (COLD Z-021 system; Pulsion) for determination of ITBVI.
Haemodynamic measurements, both with the pulmonary artery catheter and the thermal-dye dilution technique, were made at regular intervals during the first 24 h after admission to the intensive care unit. Fluid therapy was given as long as every separate fluid bolus (500 ml colloids over 20 min) resulted in an increase of CI of 10% or more. PAWP was not allowed to exceed 18 mmHg in patients with acute cardiogenic pulmonary oedema, however, and was not allowed to exceed 16 mmHg in the other categories. Whenever CI increased by less than 10%, fluid challenges were stopped, regardless of the PAWP at that point, and inotropes and/or vasopressors were given when appropriate.
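The fluid-management rule above amounts to a simple decision loop. The sketch below restates it in code purely as an illustration; the measurement and infusion functions are hypothetical placeholders, and the exact ordering of the PAWP check around each bolus is an assumption not stated in the text:

```python
# Ceiling for PAWP during fluid challenges (mmHg), per patient category
PAWP_LIMIT = {"acute_cardiogenic_oedema": 18}
DEFAULT_PAWP_LIMIT = 16

def fluid_challenge_loop(category, measure_ci, measure_pawp, give_bolus):
    """Repeat 500 ml colloid boluses (each over 20 min) while every bolus
    raises CI by >= 10% and PAWP stays at or below the category limit."""
    limit = PAWP_LIMIT.get(category, DEFAULT_PAWP_LIMIT)
    ci_before = measure_ci()
    while measure_pawp() <= limit:       # assumed pre-bolus pressure check
        give_bolus(volume_ml=500, duration_min=20)
        ci_after = measure_ci()
        if ci_after < 1.10 * ci_before:  # < 10% rise: stop fluid challenges
            return "stop fluids; inotropes/vasopressors when appropriate"
        ci_before = ci_after
    return "PAWP ceiling reached"
```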
All study protocols were approved by the Local Ethics Committee, and informed consent was given by each patient or his/her next of kin.
The pulmonary artery catheter was used for measurements of CVP and PAWP, with the midchest level as zero reference. The heart rate was recorded continuously with one of the standard leads of the electrocardiogram. PAWP was measured exclusively by the investigators and not by the nursing staff, taking problems associated with PAWP measurement and recommendations from the literature into account [14].
The COLD system was connected both to the pulmonary artery catheter and to the fibreoptic catheter in the aorta, which enabled us to determine CI in the pulmonary artery and in the aorta in one measurement. SVI was calculated by dividing the respective CIs by the accompanying heart rate. The COLD system was also used for determination of ITBVI. Measurements were made by injecting 10 cm³ of an ice-cold indocyanine green (ICG) solution (2 mg/ml). The mean value of two measurements was used for analysis.
For details concerning the thermal-dye dilution method, see Lewis et al [8] and Pfeiffer et al [15]. Briefly, the method uses two indicators (ie ice-cold water and ICG). Cold is distributed throughout both the intravascular and extravascular volume, whereas ICG remains in the intravascular volume. Both indicators are injected into the right atrium, and concentration changes with time are recorded in the descending aorta. Thus, dilution curves are obtained for both indicators. From the thermodilution curve, aortic CI is determined. From each indicator's dilution curve a mean transit time (MTT) can be derived. MTT is composed of the appearance time, which is the time until the first indicator particle has arrived at the point of detection, and the mean time difference between the occurrence of the first particle and all the following particles [15]. The product of CI and MTT is the volume between the site of injection and the site of detection. ITBVI can therefore be calculated using the following formula: ITBVI = aortic CI × MTT(ICG), where MTT(ICG) is the mean transit time derived from the intravascular (ICG) dilution curve.

The correlations between the variables, as well as correlations between the changes in these variables, were studied using linear regression analysis. Changes in the variables were calculated by subtracting the first from the second measurement, the second from the third, and so on. To reduce the influence of changes in contractility and afterload, we used only those values for the analysis for which no supportive adjustments were made with inotropes and/or vasopressors between the measurements. Both pooled and intraindividual relationships were studied. The method described by Bland and Altman [16] was used for assessing differences between pulmonary arterial CI and aortic CI.
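As an illustration of the dilution-curve arithmetic above, here is a minimal numerical sketch computing MTT as the first moment of an indicator-dilution curve and the resulting ITBVI; the curve shape and the CI value are illustrative assumptions, not study data, and a real thermodilution curve would first require recirculation correction:

```python
import numpy as np

def mean_transit_time(t, c):
    """MTT as the first moment of the dilution curve:
    MTT = integral(t * c(t) dt) / integral(c(t) dt)."""
    return np.trapz(t * c, t) / np.trapz(c, t)

t = np.linspace(0.0, 60.0, 601)            # time (s)
c = np.exp(-((t - 15.0) ** 2) / 40.0)      # illustrative ICG concentration curve

mtt_icg = mean_transit_time(t, c)          # about 15 s for this curve
ci_aorta = 3.5                             # assumed aortic CI (l/min per m2)
itbvi = ci_aorta / 60.0 * mtt_icg * 1000.0 # ml/m2
print(f"MTT = {mtt_icg:.1f} s, ITBVI = {itbvi:.0f} ml/m2")
```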
Results
A total of 283 haemodynamic measurements were made in 45 critically ill patients (10 patients with ARDS, 10 patients with acute cardiogenic pulmonary oedema, 15 patients with septic shock and 10 patients with hepatic cirrhosis requiring TIPS). After discarding the measurements in which supportive adjustments were made with inotropes and/or vasopressors, 154 changes between measurements were left for analysis. Details concerning the subgroups are shown in Table 1. Thirty-six patients were mechanically ventilated throughout the study protocol.
Pulmonary arterial CI and aortic CI correlated well (Fig. 1). In a Bland-Altman analysis, a mean difference of 0.49 l/min per m² (95% confidence interval 0.45-0.53) was found, with a lower limit of -0.41 l/min per m² and an upper limit of +1.39 l/min per m². Figure 2 shows the regression analysis of the pooled data. A strong correlation was found between the changes in ITBVI and changes in aortic SVI. However, this correlation weakened significantly when changes in ITBVI were plotted against changes in pulmonary arterial SVI (Fisher Z test, P < 0.001). Figure 4 shows the individual regression lines of ITBVI versus aortic SVI of the patients in the various disease categories. In three disease categories (sepsis, ARDS and TIPS) a positive correlation was noted in almost all patients, although interindividual differences exist in the steepness of the regression lines. Only in the patients with acute cardiogenic pulmonary oedema could such a relationship not be confirmed. It has to be noted, however, that in this patient group many supportive adjustments were made with inotropes and/or vasopressors during the course of the measurements, so that the relationships were based on a small number of measurements.

Figure 2 The correlation between changes in ITBVI (Diff ITBVI) and changes in SVI measured in the aorta (Diff SVIa; left panel), and the correlation between changes in ITBVI (Diff ITBVI) and changes in SVI measured in the pulmonary artery (Diff SVIpa; right panel).
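The limits of agreement quoted above follow the standard Bland-Altman construction (mean difference ± 1.96 SD of the paired differences). A minimal sketch of that computation is given below, using invented paired values rather than the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # sample SD of the differences
    return bias, bias - half_width, bias + half_width

ci_aorta = [3.8, 4.2, 2.9, 5.1, 3.4]       # l/min per m2, hypothetical
ci_pa    = [3.3, 3.6, 2.5, 4.7, 2.8]

bias, lower, upper = bland_altman(ci_aorta, ci_pa)
print(f"bias = {bias:.2f}, limits of agreement = [{lower:.2f}, {upper:.2f}]")
```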
Discussion
The present study shows a good correlation between changes in ITBVI and aortic SVI. This correlation could also be found in the individual patients in three of the four disease categories studied. However, the correlation weakened when, in the pooled data, ITBVI was plotted against pulmonary arterial SVI. No consistent correlation could be established between PAWP and aortic SVI.
CVP and PAWP are pressures that are used in clinical practice to assess cardiac filling or cardiac preload. Under experimental conditions, the so-called ventricular performance curves show a close curvilinear relationship between the end-diastolic pressure of the ventricle and the stroke volume or cardiac output, provided that contractility and afterload are held constant. In clinical practice this relationship may be distorted for several reasons. The first reason is that several assumptions have to be made for PAWP to reflect the end-diastolic volume of the ventricle. PAWP must be accurately measured, it must reflect left atrial pressure (LAP), LAP must reflect left ventricular end-diastolic pressure (LVEDP), and then LVEDP must relate directly to left ventricular end-diastolic volume to be a true measure of cardiac filling.
In clinical practice there are many doubts about the accuracy of the PAWP measurement. Accurate measurements are frequently prevented by technical aspects. There is also an astonishing lack of basic knowledge among clinicians and nurses on how the measurement should be performed [17-20]. Apart from the technical factors, there are also clinical entities that interfere with the reliability of PAWP in reflecting LAP accurately. Pulmonary venous obstruction (eg tumours, atrial myxomas, mediastinal fibrosis, pulmonary venous thrombosis) increases PAWP, without an accompanying increased LAP. Disparity between LAP and LVEDP is found in the case of mitral stenosis and, perhaps more often, in the presence of a decreased left ventricular compliance. A change in ventricular compliance, often met in critically ill patients, may also distort the assumed relationship between LVEDP and left ventricular end-diastolic volume. Furthermore, interventricular dependence also influences the pressure-volume curve of the left ventricle. Hence, disease states with an increased right ventricular afterload (eg acute pulmonary hypertension) will also impair left ventricular compliance. Finally, all intrathoracic pressure changes will affect the recorded values of CVP and PAWP, because these pressures are measured relative to ambient air pressure. Therefore, the measured pressures are not transmural pressures, which is especially true in case the tip of the pulmonary artery catheter is located outside a West zone III [21].
The second reason for the distorted relationship between the cardiac filling pressures and the stroke volume in clinical practice is that the requirement for the contractility and the afterload to be constant is hardly ever met in clinical practice. Leaving aside the question of whether this requirement is verifiable, practically all interventions interfere either with the myocardial contractility (eg inotropes) or with the ventricular afterload (eg vasoconstrictors, vasodilators). Although we tried to make an approximate correction for this phenomenon, by leaving out those measurements in which supportive changes were made with inotropes or vasoactive medications, it cannot be ruled out that this phenomenon played a role in the results we found.
Taking into account the reasons indicated above, it is not surprising that we did not find a consistent correlation between PAWP and aortic SVI in the individual patients. The present results confirm those of earlier studies [7,9-11]. In the patients we studied there were no major differences in the correlation of PAWP and aortic SVI between the different disease states, regardless of whether all patients were ventilated mechanically (ARDS), or only a minority of patients (TIPS) was on mechanical ventilation. In conclusion, PAWP is influenced by so many factors other than cardiac filling that it is not a reliable indicator of cardiac filling in clinical practice. Therefore, the absolute values of these pressures are not an adequate reflection of the cardiac filling conditions of an individual patient.
Changes in ITBVI showed better correlations with changes in aortic SVI than did changes in PAWP, which is also in accordance with earlier findings [7,9-11]. From the individual regression lines (Fig. 4), however, it is clear that differences between the individual slopes and, likewise, differences between the distinct disease categories may exist. The interindividual differences may be the consequence of the fact that aortic SVI not only depends on preload, but also on contractility and afterload. Contractility may differ from patient to patient, and from disease to disease. Also, afterload may influence aortic SVI to an extent that depends on the underlying disease. Especially in the case of a diminished contractility, afterload may be a decisive factor in the final aortic SVI. Hence it is understandable that the correlations between ITBVI and aortic SVI in patients with acute cardiogenic pulmonary oedema were not as firm as in the other subgroups. In conclusion, it may still be hard to predict whether an individual patient has reached optimal cardiac filling when a certain value of ITBVI is measured.

Figure 3 Individual regression of PAWP versus aortic SVI (SVIa) in the various disease categories.
By connecting the Swan-Ganz catheter to the COLD system, time differences between pulmonary arterial CI and aortic CI were precluded. Pulmonary arterial CI and aortic CI were closely correlated, with a mean higher value of aortic CI of 0.49 l/min per m². This is in accordance with an earlier report [11]. However, the difference between the correlation of ITBVI with pulmonary arterial SVI and the correlation of ITBVI with aortic SVI (Fig. 2) was striking. This could be due to mathematical coupling, because the formula used to determine ITBVI includes aortic CI, and thus aortic SVI indirectly, as a variable [22]. Lichtwarck-Aschoff et al [23], however, showed that under experimental conditions an increase in aortic CI by inotropes, with a constant ITBVI, did not influence the measured value of ITBVI, because the MTT decreased concomitantly.
The thermal-dye dilution technique was originally developed to determine EVLW. As a consequence, validation of the method is based on comparison of measured values of EVLW with gravimetrically determined EVLW. These values correlate well, with an overestimation by the thermal-dye technique in the lower range and an underestimation in the higher range of EVLW values [24-26]. In a recent study [27], circulating (total) blood volume measured with the COLD system correlated well with standard methods for measuring circulating blood volume. From these results, it has been assumed that measured ITBVI also correlates well with the actual intrathoracic volume. This has not been validated formally, however. On the other hand, the correlations we found are those one would expect on the basis of physiological knowledge. This implies that ITBVI, at least, is a reflection of the actual intrathoracic volume.

Figure 4 Individual regression lines of ITBVI versus aortic SVI (SVIa) in the various disease categories.
In conclusion, the present study shows that cardiac filling in critically ill patients may not be adequately predicted by PAWP. ITBVI seems to be a more reliable predictor of cardiac filling, because changes in ITBVI closely relate to changes in aortic SVI. Partially, however, this may be due to mathematical coupling. Whether the use of ITBVI for guidance of fluid therapy will improve patient outcome should be the subject of further studies.
"year": 2000,
"sha1": "7be8de0837c9f16f71de62541947836900cc9659",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/cc693",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d659f5e5d6894f691145b86da94c6122db695cd7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
MicroRNAs as Immune Regulators of Inflammation in Children with Epilepsy
Epilepsy is a chronic clinical syndrome of brain function which is caused by abnormal discharge of neurons. MicroRNAs (miRNAs) are small non-coding RNAs which act post-transcriptionally to negatively regulate protein levels. They affect neuroinflammatory signaling, glial and neuronal structure and function, neurogenesis, cell death, and other processes linked to epileptogenesis. The aim of this study was to explore the possible role of miR-125a and miR-181a as regulators of inflammation in epilepsy through investigating their involvement in the pathogenesis of epilepsy, and their correlation with the levels of inflammatory cytokines. Thirty pediatric patients with epilepsy and 20 healthy controls matched for age and sex were involved in the study. MiR-181a and miR-125a expression were evaluated in plasma of all subjects using qRT-PCR. In addition, plasma levels of inflammatory cytokines (IFN-γ and TNF-α) were determined using ELISA. Our findings indicated significantly lower expression levels of miR-125a (P=0.001) and miR-181a (P=0.001) in epileptic patients in comparison with controls. In addition, the production of IFN-γ and TNF-α was non-significantly higher in patients with epilepsy in comparison with the control group. Furthermore, there were no correlations between miR-125a and miR-181a with the inflammatory cytokines (IFN-γ and TNF-α) in epileptic patients. MiR-125a and miR-181a could be involved in the pathogenesis of epilepsy and could serve as diagnostic biomarkers for pediatric patients with epilepsy.
Epilepsy is a chronic neurological disorder characterized by frequent aberrant electrical activity in the brain, which is estimated to affect about 65 million individuals worldwide (1). This complex disorder is associated with comprehensive changes in brain function at the molecular, cellular, and circuit levels. It can occur due to genetic defects or it can be acquired through epileptogenic insults, such as "traumatic brain injury, brain infections, stroke, or status epilepticus" (2). In recent years, some clinical and experimental studies have found that encephalitis is an essential feature of pathological brain tissue in high-risk, pharmaco-resistant epilepsy of various etiologies (3).
MicroRNAs (miRNAs) are small non-coding
RNAs that control gene expression post-transcriptionally, and either promote the degradation of their target mRNAs or suppress their translation (4). They mainly regulate the innate immune response through modifying astrocyte-mediated inflammation, and are involved in regulating the function of T lymphocytes in the immune response. Unsurprisingly, dysregulation of miRNAs has been observed in various diseases ranging from cancer and immune disorders to brain diseases (5). The miR-181 and miR-125 families are highly conserved miRNAs in humans; the miR-181 family has four members: miR-181a, miR-181b, miR-181c, and miR-181d (6). Abnormal expression of the miR-181 family is associated with many nervous system disorders. It also plays a major role in immune cell development and function, including the differentiation and activities of B and T lymphocytes (7). Previous studies demonstrated that miR-181a can repress negative regulatory factors in the T-cell receptor signaling pathway (8), and, through targeting IFN-γ, it influences the differentiation of naïve CD4+ T cells, inhibits cell proliferation, and promotes programmed cell death, thus affecting T-cell function (9). On the other hand, the miR-125 family consists of miR-125a and miR-125b (10). MiR-125a has been shown to inhibit innate macrophage responses through suppressing macrophage differentiation (11). A large number of studies have reported high levels of specific inflammatory mediators and upregulation of their cognate receptors in the chronic epileptic brain, indicating that some proinflammatory pathways are activated in seizure foci (12).
Nudelman et al. first reported a relationship between seizures and altered miRNA expression (13). It has been suggested that modulation of neuronal morphology and of inflammation may be two of the most significant regulatory roles of miRNAs in epilepsy formation (14). In addition, miRNAs are known to be key regulators of protein production during and after seizures; thus, miRNAs might regulate neuronal excitability as well as remodeling responses (15). Therefore, miRNAs may be related to epilepsy pathogenesis and therapies (16). In nearly 30% of all epileptic individuals, seizures cannot be controlled with the currently available medications (17). Thus, studies in epilepsy that revealed altered expression and function of miRNAs, which are supposed to regulate many downstream targets and cellular processes simultaneously, have aroused great interest in miRNA defects as pathological mechanisms and potential therapeutic targets in epilepsy (5). Consequently, the aim of this study was to determine the association of circulating miR-125a and miR-181a expression with the pathogenesis of epilepsy and with the production of IFN-γ and TNF-α in pediatric patients with epilepsy.
RNA extraction and quantitative real-time PCR
MiRNAs were isolated and extracted from plasma of all subjects of the study using the miRNeasy kit, and expression was quantified by qRT-PCR. Sensitivity, specificity, and the 95% confidence interval for each miRNA were calculated.
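The study does not spell out its quantification pipeline, but plasma miRNA expression from qRT-PCR is conventionally summarized with the 2^-ΔΔCt method; the sketch below illustrates that step, with a hypothetical reference gene and hypothetical Ct values.

```python
# Illustrative relative-quantification step for qRT-PCR data using the standard
# 2^-ddCt method. The study does not state its exact pipeline, so the reference
# gene and Ct values below are hypothetical.

def fold_change(ct_target_patient, ct_ref_patient, ct_target_control, ct_ref_control):
    d_ct_patient = ct_target_patient - ct_ref_patient   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_patient - d_ct_control
    return 2.0 ** (-dd_ct)                               # relative expression

# A higher Ct in patients means lower expression (fold change < 1):
print(fold_change(30.1, 20.0, 28.0, 20.0))  # ~0.23, i.e. roughly 4.3-fold down-regulation
```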
Dysregulation of plasma miR-125a and miR-181a expression pattern in patients with epilepsy
Our findings indicated highly significant down-regulation of miR-125a (4.2-fold) in epileptic patients in comparison with controls ( Figure 1).
In addition, the results of this study demonstrated that miR-181a showed significantly lower (3.9-fold) expression levels in epileptic patients in comparison with healthy controls (Figure 1).
Correlation of miR-125a and miR-181a with inflammatory cytokines in patients with epilepsy
The findings of the study clarified that the production of IFN-γ and TNF-α was non-significantly higher in patients with epilepsy in comparison with the control group (Figure 2).
In addition, our data indicated that there were no correlations between miR-125a and miR-181a with the inflammatory cytokines (IFN-γ and TNF-α) in epileptic patients (Table 3).
Discussion
Epilepsy is considered a chronic clinical syndrome of brain function caused by abnormal discharge of neurons (20). By using these models, more recent work has determined that the loss of miRNA biogenesis components in the mature brain results in progressive tissue dysmorphogenesis, neurodegeneration, and seizures (21).
The majority of expressed miRNAs were upregulated in status epilepticus (SE) mice, while in tolerant mice only 18% of the expressed miRNAs were upregulated and 82% were downregulated (22).
The results of our study showed highly significant lower expression levels of miR-125a and miR-181a in epileptic patients in comparison with controls. Other miRNAs, such as miR-146a (32) and miR-155 (37), have been linked to increased IFN-γ levels in epilepsy.
TNF-α is an inflammatory marker, and its level is also increased after seizures (38,39). As our results indicated that there were no correlations between miR-125a and miR-181a with TNF-α in epileptic patients, other miRNAs such as miR-155 might have a regulatory role on TNF-α in nervous tissue (40).
It has been proven that miR-181a-5p suppresses IFN-γ in human CD4+ T cells. Infection of activated human CD4+ T lymphocytes with a lentivirus encoding pre-miR-181a significantly decreased the protein level of that cytokine in both CD4+ T cells and culture media (41,9).
In the present study, miR-125a showed a sensitivity of 70% and a specificity of 83% (P < 0.05), while miR-181a showed a sensitivity of 66.7% and a specificity of 67% (P < 0.05).
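For readers reproducing the diagnostic metrics, the sketch below shows how sensitivity and specificity follow from a confusion matrix over the 30 patients and 20 controls; the individual counts are hypothetical and merely chosen to approximate the reported percentages.

```python
# How the reported diagnostic metrics are computed from a classification of the
# 30 patients and 20 controls at a chosen expression cut-off. The counts below
# are hypothetical and only chosen to approximate the reported percentages.
tp, fn = 21, 9    # patients below / above the miR-125a cut-off
tn, fp = 17, 3    # controls above / below the cut-off

sensitivity = tp / (tp + fn)   # 0.70
specificity = tn / (tn + fp)   # 0.85 (close to the reported 83%)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```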
Any discrepancy between our results and others may be due to assessing their levels at different stages of epilepsy, or to different sample sizes. Thus, miR-125a and miR-181a may be considered as promisingly sensitive, easily detectable, and specific biomarkers that may improve the diagnosis and treatment outcome of epilepsy.
In conclusion, miRNAs and brain
"year": 2020,
"sha1": "c3445359794551925b499125264532cbe117060e",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c3445359794551925b499125264532cbe117060e",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In silico assessment of pharmacotherapy for carbon monoxide induced arrhythmias in healthy and failing human hearts
Background: Carbon monoxide (CO) is gaining increased attention in air pollution-induced arrhythmias. The severe cardiotoxic consequences of CO urgently require effective pharmacotherapy to treat it. However, existing evidence demonstrates that CO can induce arrhythmias by directly affecting multiple ion channels, which is a pathway distinct from heart ischemia and has received less concern in clinical treatment. Objective: To evaluate the efficacy of some common clinical antiarrhythmic drugs for CO-induced arrhythmias, and to propose a potential pharmacotherapy for CO-induced arrhythmias through the virtual pathological cell and tissue models. Methods: Two pathological models describing CO effects on healthy and failing hearts were constructed as control baseline models. After this, we first assessed the efficacy of some common antiarrhythmic drugs like ranolazine, amiodarone, nifedipine, etc., by incorporating their ion channel-level effects into the cell model. Cellular biomarkers like action potential duration and tissue-level biomarkers such as the QT interval from pseudo-ECGs were obtained to assess the drug efficacy. In addition, we also evaluated multiple specific I Kr activators in a similar way to multi-channel blocking drugs, as the I Kr activator showed great potency in dealing with CO-induced pathological changes. Results: Simulation results showed that the tested seven antiarrhythmic drugs failed to rescue the heart from CO-induced arrhythmias in terms of the action potential and the ECG manifestation. Some of them even worsened the condition of arrhythmogenesis. In contrast, I Kr activators like HW-0168 effectively alleviated the proarrhythmic effects of CO. Conclusion: Current antiarrhythmic drugs including the ranolazine suggested in previous studies did not achieve therapeutic effects for the cardiotoxicity of CO, and we showed that the specific I Kr activator is a promising pharmacotherapy for the treatment of CO-induced arrhythmias.
Introduction
Carbon monoxide (CO) is one of the major gaseous pollutants in traffic pollution. Epidemiological studies have substantiated the association of urban air pollution with cardiovascular events, among which CO is considered a critical contributor (Hoek et al., 2002; Hoffmann et al., 2007; Allen et al., 2009; Bell et al., 2009). The traditional theory of CO poisoning attributes CO-induced arrhythmias to tissue hypoxia, a condition that arises from the high-affinity binding of CO to hemoglobin, which may predispose to arrhythmias (Hantson, 2019). However, accumulating evidence has demonstrated that CO can also impair cardiac electrophysiology by exerting direct effects on multiple ion channels. For sodium channels, Dallas et al. demonstrated that CO could enhance the late Na+ current (I NaL) by increasing the production of NO and the subsequent nitrosylation of the Na V 1.5 channel protein (Dallas et al., 2012). In addition, CO could inhibit I Na, a process dependent on NO formation and channel redox states (Elies et al., 2014). For calcium channels, Scragg et al. found that CO inhibited L-type Ca2+ channels (I CaL) via redox modulation of key cysteine residues by mitochondrial reactive oxygen species (Scragg et al., 2008). Finally, for potassium channels, CO inhibited the inward rectifier K+ current (I K1) by modulating the interaction between Kir2.0 channels and phosphatidylinositol 4,5-bisphosphate (Liang et al., 2014), and inhibited the rapid delayed rectifier K+ current (I Kr) by promoting the production of peroxynitrite (ONOO−) (Al-Owais et al., 2017). These remodeling effects together contribute to a prolonged QT interval and predispose to severe ventricular arrhythmias like Torsades de Pointes (TdP) (Jiang et al., 2022). Such arrhythmogenic influences may get even worse in susceptible populations like heart failure (HF) patients. This is because the repolarization reserve is reduced in failing hearts, and the further depression of I Kr by CO can easily lead to early-afterdepolarization (EAD) activities in cardiomyocytes and ectopic beats at the organ level, which act as triggers for reentrant arrhythmias (Al-Owais et al., 2021).
The serious consequences of CO cardiotoxicity have raised concerns about finding an effective pharmacotherapy for it. In this regard, potential drugs have been proposed to deal with the proarrhythmic effects of CO. For instance, the antianginal drug ranolazine was suggested by Dallas et al. for its significant therapeutic effects on CO-induced arrhythmias (Dallas et al., 2012). In vivo experiments showed that ranolazine corrected QT variability and arrhythmias induced by CO, and further cellular investigations reported that ranolazine abolished CO-induced early after-depolarizations (EADs) in rat myocytes via the inhibition of I NaL. This study highlighted a potential pharmacological strategy for the treatment of CO-induced arrhythmias; however, the efficacy of ranolazine was evaluated in rats, and the significant discrepancy between rat and human action potentials may limit their conclusions. Although ranolazine can inhibit I NaL and correct CO-induced arrhythmias in rat ventricular myocytes, the drug is also known to block I Kr (IC50 12 μM) (Rajamani et al., 2008) in a range overlapping that for I NaL (IC50 5-21 μM) (Moreno et al., 2013). Therefore, considering the complicated multi-channel blocking effect of ranolazine, whether it still exerts antiarrhythmic effects in the human ventricle needs to be re-assessed. In addition to ranolazine, our previous simulation study on CO exposure showed that the inhibition of I Kr by CO is the main factor responsible for the substantial prolongation of the QT interval in patients (Jiang et al., 2022). Therefore, specific I Kr activators such as HW-0168 (Dong et al., 2019) might benefit the treatment of CO-induced arrhythmias.
In this study, we conducted an in silico assessment of pharmacotherapy for the treatment of CO-induced ventricular arrhythmias in healthy and failing hearts. First, human myocardial cell and tissue models incorporating the effects of CO were constructed for healthy and heart failure conditions, respectively, to act as baseline pharmacological models for the screening of drugs. Next, we evaluated several of the clinically available antiarrhythmic drugs described above by incorporating their experimentally measured dose-dependent effects on various ion channels. The class IV antiarrhythmic drugs (i.e., calcium channel blockers including verapamil, nifedipine, and bepridil) were mainly focused on due to their ability to attenuate depolarizing forces. We also tested three other multi-channel drugs for a wide coverage of the antiarrhythmic drug classification, namely quinidine (class I), amiodarone (class III), and vanoxerine (class III). Note that, as with ranolazine, all these six drugs are multi-channel blockers and can block some critical channels concurrently. Action potentials and pseudo-ECGs after the application of drugs were simulated and used as the criteria for drug efficacy. In addition, due to the critical role of I Kr in mediating CO-induced arrhythmogenesis, we also evaluated multiple I Kr activators for potential pharmacotherapy. Comprehensive simulations were conducted on cell populations, 1D transmural strands, and 2D ventricular slice models to verify the robustness of the reported findings.
Modeling action potentials of human ventricular myocytes
The O'Hara-Rudy dynamics (ORd) model (O'Hara et al., 2011) was utilized to simulate the electrophysiology of human ventricular myocytes in this study. The ORd model is a comprehensive human cell model that was created using human experimental data. To overcome its unphysiologically slow conduction velocity (Elshrif and Cherry, 2014), the original I Na in the ORd model was substituted with that in the Tusscher et al. biophysically detailed model (TNNP06 model) (Ten Tusscher and Panfilov, 2006).
A conventional Hodgkin-Huxley model of a cardiac cell was implemented at the cellular level, with the model equation being

$\frac{dV_m}{dt} = -\frac{I_{ion} + I_{stim}}{C_m}$

where V_m is the membrane potential, I_ion is the sum of all transmembrane ionic currents, and I_stim is the externally applied stimulus current. C_m is the membrane capacitance.
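As a minimal illustration of how this equation is advanced in time, the sketch below applies a forward-Euler step; the full ORd model computes I_ion from many gating variables, which are abstracted into a single placeholder argument here.

```python
# Minimal sketch of integrating the single-cell membrane equation
# dVm/dt = -(I_ion + I_stim)/Cm with a forward-Euler step. The full ORd model
# computes I_ion from dozens of gating variables; here i_ion is a placeholder.

def step_cell(vm: float, i_ion: float, i_stim: float,
              cm: float = 1.0, dt: float = 0.005) -> float:
    """Advance the membrane potential by one time step (dt in ms, Cm in uF/cm^2)."""
    dvm_dt = -(i_ion + i_stim) / cm
    return vm + dt * dvm_dt
```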
The cell model of heart failure (HF) used in this study was based on Elshrif et al.'s research (Elshrif et al., 2015), where a collection of HF-induced ion channel remodeling effects were incorporated into the ORd model. Similarly, the effects of CORM-2 (i.e., a CO-releasing molecule) were modeled based on previous research by Al-Owais et al. (Al-Owais et al., 2021) and were incorporated into the healthy and HF cell models. The reason we chose CORM-2 rather than CO is that CORM-2 is one of the most common CO-releasing molecules in biological research, and is safer and more controllable than CO. More details can be found in Sections SII and SIII in the Supplementary Material.
Modeling the effects of ranolazine and HW-0168 on ion channels
Available experimental data regarding the effects of ranolazine and HW-0168 from previous studies have been summarized in Table 1 (Antzelevitch et al., 2004; Rajamani et al., 2008; Beyder et al., 2012; Moreno et al., 2013; Dong et al., 2019). Specifically, ranolazine has been shown to exert dose-dependent blocking effects on I Na (Beyder et al., 2012), I NaL (Antzelevitch et al., 2004), I NaCa (Antzelevitch et al., 2004), I CaL (Antzelevitch et al., 2004), and I Kr (Rajamani et al., 2008). Dose-response curves for ranolazine-affected ion channels were fitted using Hill functions of the form

$f_{ion} = \frac{1}{1 + ([RAN]/IC_{50})^{h}}$

where [RAN] is the ranolazine concentration, IC_50 is the half-maximal inhibitory concentration, and h is the Hill coefficient. (For HW-0168, Table 1 reports an EC_50 of 0.41 μM, a Hill coefficient of 0.73, and a maximum activation Act_max of 2.8, measured in HEK 293 cells.) The fitting results are illustrated in Figure 1A. For the I Kr activator, HW-0168, only I Kr was reported to be affected by the drug (Dong et al., 2019); therefore, the data were fitted using Eq. 7:

$f_{I_{Kr}} = 1 + \frac{Act_{max}}{1 + (EC_{50}/[HW])^{h}}$

where [HW] is the dose of HW-0168 used in the experiment. The fitted dose-dependent curve is illustrated in Figure 1B. In this study, we used 10 μM and 0.5 μM for ranolazine and HW-0168, respectively. The above-fitted equations were finally incorporated into the 'Healthy + CO' and 'HF + CO' cell models.
The ionic current under the action of the drug is calculated by Eq. 8:

$I_{ion}^{Drug} = f_{ion}^{Drug} \cdot I_{ion}$

where $f_{ion}^{Drug}$ represents the effect of a drug on a certain ionic current.
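The sketch below implements the dose-response scaling described above for a blocking drug: the blocked fraction follows the Hill form, and the current is scaled by f = 1 − θ as in Eq. 8. The Hill coefficient of 1.0 in the usage example is an assumption for illustration; only the IC50 of 12 μM comes from the text.

```python
# Dose-response helpers matching the Hill forms above. theta is the blocked
# fraction of a current; the drugged current is (1 - theta) times the control.

def block_fraction(dose_uM: float, ic50_uM: float, n_hill: float) -> float:
    """Fraction of channel conductance blocked at a given drug dose."""
    return 1.0 / (1.0 + (ic50_uM / dose_uM) ** n_hill)

def scale_current(i_control: float, dose_uM: float, ic50_uM: float, n_hill: float) -> float:
    """Eq. 8: I_drug = f_drug * I_control, with f_drug = 1 - theta for a blocker."""
    return (1.0 - block_fraction(dose_uM, ic50_uM, n_hill)) * i_control

# e.g., 10 uM ranolazine on IKr (IC50 = 12 uM, Hill coefficient assumed 1.0):
print(block_fraction(10.0, 12.0, 1.0))  # ~0.45, i.e. roughly 45% block
```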
Simulating the efficacy of multichannel blockers and specific I Kr channel activators
In addition to ranolazine and HW-0168, we also selected six multi-channel blockers (i.e., amiodarone, verapamil, nifedipine, quinidine, vanoxerine, and bepridil) and four specific I Kr activators (i.e., KB130015, ICA-105574, NS1643, NS3623) for efficacy simulation and screening of the drugs. A simple pore block theory (Brennan et al., 2009) was used in this study to model the interactions between drugs and ion channels. Based on this theory, the effect of drugs blocking ion channels was fitted by the following formula:

$\theta = \frac{1}{1 + (IC_{50}/[D])^{nH}}$

where θ is the blocking efficiency, [D] is the concentration of a drug, IC_50 is the half-maximal inhibitory concentration, and nH is the Hill coefficient. The effect of drugs activating ion channels was fitted by Eq. 11:

$Y = \frac{Act_{max}}{1 + (EC_{50}/[D])^{nH}}$

where Y is the activation efficiency, Act_max is the maximum activation efficiency, and EC_50 is the compound concentration resulting in 50% of Act_max.

The six multi-channel blockers act on related ion channels in a dose-dependent manner, and the related parameters are listed in Table 2. To evaluate the drug efficacy more objectively, we explored all drugs at three doses based on their C max, as shown in Table 3. The four specific I Kr activators activated I Kr currents in a dose-dependent manner as well, and the relevant parameters are shown in Table 4.
Notes to Table 2: MANTA, the Maastricht Antiarrhythmic Drug Evaluator, integrates published computational cardiomyocyte models from different species, regions, and disease conditions. The drugs in the table are all inhibitory for the channels listed, so their effects are not individually marked in the table.
Modeling the conduction of action potentials in one-dimensional (1D) transmural ventricular strands
The 1D transmural ventricular strand model, which is a linear syncytium formed by coupling multiple cells, can be calculated by adding a diffusion term to the cell model equation:

$\frac{\partial V_m}{\partial t} = -\frac{I_{ion} + I_{stim}}{C_m} + D\frac{\partial^2 V_m}{\partial x^2}$

where D is the scalar diffusion coefficient that decides the conduction velocity of APs.
The 1D transmural strand was 15 mm long, which is close to the normal range of human transmural ventricular wall width (~4.0-14.0 mm) (Drouin et al., 1995; Yan et al., 1998). The strand was discretized into 100 interconnected nodes with a spatial precision of 0.15 mm, consistent with the reported cell length [i.e., 80-150 μm (Hinrichs et al., 2011)]. The proportions for transmural cell types were set to 25:35:40 for ENDO, MID, and EPI cells, identical to those used in previous studies (Zhang and Hancox, 2004; Luo et al., 2017). Such proportions reliably reproduced a positive T-wave in the computed pseudo-ECG under control (healthy) conditions. The diffusion coefficient D was set to 0.127 mm²/ms, giving a conduction velocity within the physiological range.
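A minimal sketch of one explicit time step of the resulting 1D cable model, using the strand's stated discretization (dx = 0.15 mm, D = 0.127 mm²/ms) and no-flux boundary conditions; i_ion would come from the per-node cell models.

```python
import numpy as np

# Sketch of one explicit finite-difference step of the 1D monodomain equation
# dVm/dt = -(I_ion + I_stim)/Cm + D * d^2Vm/dx^2 on the 100-node strand.

def step_strand(vm: np.ndarray, i_ion: np.ndarray, i_stim: np.ndarray,
                D: float = 0.127, dx: float = 0.15, dt: float = 0.005,
                cm: float = 1.0) -> np.ndarray:
    v = np.pad(vm, 1, mode="edge")                   # no-flux (Neumann) boundaries
    lap = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2   # second spatial derivative
    return vm + dt * (-(i_ion + i_stim) / cm + D * lap)
```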
Modeling the conduction of action potentials in the 1D strand with CO-affected regions
To further quantify the critical size of EAD cells for overcoming the source-sink effect and initiating triggers in ventricular tissue, we simulated a 15 mm homogeneous ventricular strand consisting of only MID cells for the failing heart, with the center of the strand (Figure 2, red region) containing a variable number of contiguous cells affected by CO. The number of cells in the susceptible region was gradually increased until the synchronously occurring EADs overcame the source-sink effect and triggered a premature beat. The critical cell number was recorded as a metric for measuring the susceptibility to arrhythmias.
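The search protocol itself reduces to a simple loop, sketched below; the simulator is passed in as a function and is assumed to report whether an ectopic beat propagates out of the CO-affected region.

```python
# Sketch of the critical-cell-number protocol: grow the CO-affected central
# segment of a MID-cell strand until a propagated ectopic beat appears.
# run_strand is a caller-supplied stand-in for the full tissue simulation;
# it takes a per-cell CO mask and returns True if an ectopic beat propagates.

def critical_cell_number(run_strand, n_cells: int = 100) -> int:
    center = n_cells // 2
    for n_affected in range(1, n_cells + 1):
        co_mask = [abs(i - center) < n_affected / 2 for i in range(n_cells)]
        if run_strand(co_mask):
            return n_affected
    return -1   # no ectopic beat even when the whole strand is affected
```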
Generating pseudo-ECGs using the 1D model
The pseudo-ECG was calculated from the constructed 1D strand model by the following equation (Gima and Rudy, 2002):

$\phi_e(x') = \frac{a^2}{4} \int \left(-\frac{\partial V_m}{\partial x}\right) \frac{\partial}{\partial x}\!\left(\frac{1}{r}\right) dx$

where φ_e is a unipolar potential generated by the strand, a is the radius of the strand, dx is the spatial resolution, and r is the Euclidean distance from a point x to another point x′.
As shown in Figure 3, the period from the earliest appearance of the QRS complex to the end of the T-wave was defined as the QT interval, measured in milliseconds. The end of the T-wave was defined as the return of the descending limb to the TP baseline.
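A sketch of how the pseudo-ECG integral and the QT measurement might be discretized is given below; the electrode position (2 mm beyond the end of the 15 mm strand) and the 5% detection threshold are illustrative assumptions, and conductivity constants are folded into the prefactor.

```python
import numpy as np

# Sketch of the pseudo-ECG integral above for an electrode at x_e, and a simple
# QT measurement. vm_xt is a (time, space) array from the 1D strand simulation.

def pseudo_ecg(vm_xt: np.ndarray, dx: float = 0.15, a: float = 1.0,
               x_e: float = 17.0) -> np.ndarray:
    x = np.arange(vm_xt.shape[1]) * dx
    r = np.abs(x_e - x)                         # electrode never coincides with a node
    dvm_dx = np.gradient(vm_xt, dx, axis=1)
    d_invr_dx = np.gradient(1.0 / r, dx)
    # phi_e(t) ~ a^2/4 * integral of (-dVm/dx) * d(1/r)/dx over the strand
    return (a**2 / 4.0) * np.sum(-dvm_dx * d_invr_dx, axis=1) * dx

def qt_interval(phi_e: np.ndarray, t: np.ndarray, thr: float = 0.05) -> float:
    """QT: from the earliest QRS deflection to the return of the T-wave to baseline."""
    dev = np.abs(phi_e - phi_e[0])              # deviation from the resting baseline
    idx = np.where(dev > thr * dev.max())[0]
    return t[idx[-1]] - t[idx[0]]
```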
Modeling cell populations
To demonstrate the robustness of the reported findings, we constructed cell population models with reference to previous studies (Britton et al., 2013;Sutanto and Heijman, 2020). Specifically, the maximum conductance of the nine major ionic currents (I Na , I NaL , I CaL , I Kr , I Ks , I K1 , I to , I NaCa , and I NaK ) in the original deterministic model was scaled by a group of factors that follow a normal distribution with mean 1.0 and standard deviation 0.2. In this way, 1,000 population model variants were obtained.
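The population construction reduces to sampling nine scaling factors per variant, as sketched below; clipping negative draws to zero is a safeguard added here, since N(1.0, 0.2) makes them vanishingly rare.

```python
import numpy as np

rng = np.random.default_rng(0)
CURRENTS = ["INa", "INaL", "ICaL", "IKr", "IKs", "IK1", "Ito", "INaCa", "INaK"]

# Build 1,000 model variants by scaling the maximum conductance of the nine
# major currents with factors drawn from N(mean=1.0, sd=0.2), as described above.
population = [
    {cur: max(0.0, rng.normal(1.0, 0.2)) for cur in CURRENTS}  # clip rare negatives
    for _ in range(1000)
]
print(population[0])
```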
Dynamic restitution protocol
The CV dynamic restitution curves were obtained using a dynamic pacing protocol. Specifically, the 1D strand model was paced at a given basic cycle length (BCL) until reaching its steady state, upon which the CV value was recorded for that BCL. The initial BCL was set to 3,000 ms and was decreased gradually until the model failed to produce excitation waves. Based on the 'CV-BCL' pairs generated by this protocol, CV restitution curves could be plotted against BCL.
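The pacing protocol can be sketched as a loop over BCLs; the simulator is passed in as a function, and the 50 ms decrement is an assumed step size, since the paper does not state one.

```python
# Sketch of the dynamic restitution protocol: pace the strand at each BCL until
# steady state, record CV, then shorten the BCL. pace_to_steady_state is a
# caller-supplied stand-in for the full 1D simulation; it returns the measured
# CV for a given BCL, or None when excitation fails to propagate.

def cv_restitution(pace_to_steady_state, bcl_start: float = 3000.0,
                   bcl_step: float = 50.0):
    pairs = []
    bcl = bcl_start
    while bcl > 0:
        cv = pace_to_steady_state(bcl)
        if cv is None:                 # model fails to produce excitation waves
            break
        pairs.append((bcl, cv))
        bcl -= bcl_step
    return pairs                       # 'CV-BCL' pairs for the restitution curve
```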
2.9 Modeling the conduction of excitation waves on a two-dimensional (2D) realistic ventricular slice

Similar to the 1D strand model, the monodomain equation introduced above was adopted to describe the propagation of excitation waves in the ventricular slice. Isotropic propagation was assumed, and the diffusion coefficient D was set to 0.154 mm²/ms to produce a CV of 0.74 m/s (Taggart et al., 2000). The spatial step was set to 0.15 mm to be consistent with that in the 1D models. To mimic the physiological characteristics of the Purkinje fibers, a series of supra-threshold stimuli were applied to several pacing sites on the endocardium of the slice.
Results
3.1 Assessing the drug efficacy of multi-channel blockers on CO-affected hearts

3.1.1 Effects of ranolazine on AP and ECG

Previous studies have suggested the drug ranolazine to be a potential pharmacotherapy for the treatment of CO-induced arrhythmias (Dallas et al., 2012). Therefore, we first tested the efficacy of ranolazine on the baseline model of 'healthy + CO'. Simulation results are illustrated in Figure 4. Interestingly, ranolazine aggravated the arrhythmogenesis of CO. At the cellular level, it can be observed that ranolazine (10 μM) further extended the APDs of all cell types, and the APD 90 values of ENDO, MID, and EPI cells were increased by 15.7%, 14.6%, and 20.3% relative to CO conditions, respectively (Figure 4A). At the tissue level, pseudo-ECGs generated using 1D transmural ventricular strand models showed that ranolazine further prolonged the QT interval and decreased the T-wave amplitude (Figure 4Bii). The effect of ranolazine was also reflected in the conduction properties, where the tissue with ranolazine exhibited a wider wavelength (Figure 4Bi) than under the control condition.
Effects of six multi-channel blockers on ECG
To find out whether there are any available medications for the treatment of CO-induced arrhythmias, we collected as much experimental data as possible regarding the blocking effects of drugs on various channels (see Table 2 in the Methods section), and incorporated them into the baseline model to explore their potential for treating CO-induced arrhythmias. In this study, three experimental doses were designed based on the C max of these drugs (as shown in Table 3). The simulated pseudo-ECGs are shown in Figure 5.
It can be observed that all six drugs failed to restore the prolonged QT interval even at their 'high' doses, which are remarkably higher than the C max level (i.e., 'high dose' = 10×C max). Specifically, low doses of amiodarone, nifedipine, verapamil, vanoxerine, and bepridil had no effect on the QT interval, while a low dose of quinidine exerted mild QT prolongation effects. When moderate doses were applied, quinidine and vanoxerine considerably prolonged the QT interval, while the other drugs still had no appreciable effects. Finally, at high doses, all drugs except nifedipine prolonged the QT interval to varying degrees. Among them, vanoxerine and bepridil considerably prolonged the QT interval, and quinidine led to ECG repolarization failure.
Independent component analysis of ion channels
To determine the independent role of each drug-affected ion channel, we performed an ionic mechanism analysis with ranolazine as a representative case. First, we quantitatively analyzed the individual role of each ion channel involved in the action of ranolazine. APD 90 was used as the metric, and the results are summarized in Table 5 (where '↑' and '↓' indicate that the change in that ion channel lengthens or shortens APD 90). It can be observed that the effects of ranolazine on I Na, I NaCa, and I CaL have no effect on APD 90. On the other hand, the inhibitory effect of ranolazine on I NaL shortened the APD 90 of all three cell types, demonstrating an antiarrhythmic action; however, the simultaneous inhibition of I Kr by ranolazine led to a more pronounced prolongation of APD, which offset the effects of I NaL and aggravated the CO-induced arrhythmogenesis at the cellular level. Next, we analyzed the individual role of each ion channel in the ECG changes, as shown in Figure 6A. Consistent with the results at the cellular level, the effects of ranolazine on I Na, I NaCa, and I CaL did not cause any obvious ECG changes. More specifically, the IC 50 values of ranolazine for I Na, I NaCa, and I CaL were 53.6 μM, 91.0 μM, and 296.0 μM, and ranolazine at 10 μM inhibited only 1.8%, 3.7%, and 3.3% of I Na, I NaCa, and I CaL, respectively, which had almost no effect on the APD and ECG. As for I NaL, the QT interval shortening caused by the inhibition of I NaL could not offset the QT interval prolongation due to the attenuation of I Kr. So overall, ranolazine eventually led to QT prolongation.
Effects of drugs on the transmural dispersion of repolarization
In this part, we assessed the role of heterogeneity among different ventricular cells on arrhythmias. Simulations at the cellular level show that, under the action of ranolazine, the APD difference between MID and ENDO cells (ΔAPD MID-ENDO ) decreased from 63 ms to 61 ms, and ΔAPD MID-EPI reduced from 111 ms to 109 ms. The decreased ΔAPD among different cell types suggested that the drug decreased the vulnerability in terms of transmural heterogeneity. The following experiments of vulnerable window measurements using transmural 1D strand further confirmed this. As shown in Figures 6Bi,Bii, the average width of the VW under the 'CO + RAN' condition is apparently narrower compared to that in the CO condition (from 7.04 ms to 4.28 ms). The decreased temporal risk evidenced by the vulnerable window changes is consistent with the cellular level simulation results.
Effects of drugs on conduction velocity
Simulations demonstrated that the CV under 'CO' and 'CO + drug' conditions were lower for all BCLs compared to the healthy conditions ( Figure 7A). Specifically, after the addition of amiodarone, verapamil, nifedipine, and bepridil, the CV dynamic restitution curves were almost unchanged compared to CO conditions, suggesting that amiodarone, verapamil, nifedipine, and bepridil had no effect in terms of the tissue conduction properties ( Figure 7B). Vanoxerine caused a further decrease in CV on the basis of CO, and ranolazine led to a right shift of the CV curve and an increase in the curve slope. Quinidine caused a mild decrease in CV and impaired the adaptability of tissue to fast heart rates (small BCLs).
In general, none of these drugs could restore the decreased CV by CO, and some of them even aggravated this situation. Furthermore, the decreased CV also contributed to a smaller wavelength (calculated as CV×ERP) and might therefore help to maintain the reentrant waves within a limited tissue size.
Assessing the drug efficacy of multichannel blockers on CO-affected hearts accompanied by heart failure
The influences of the aforementioned drugs were also evaluated under the heart failure condition. Simulated actions of ranolazine on CO-affected cells and tissues of the failing heart are presented in Figure 8. Overall, ranolazine exacerbated the CO- and heart failure-induced arrhythmias. In detail, the CO-induced 2:1 alternating EADs in MID cells became 1:1 consecutive EADs (Figure 8Aii), resulting in complete repolarization failure. Ranolazine also led to the occurrence of EADs in EPI cells (Figure 8Aiii). The above EAD activities in single cells did not develop into ectopic beats in 1D ventricular strands due to the 'source-sink' effect (Xie et al., 2010); however, ranolazine resulted in 1:1 conduction failure of excitation waves at the pacing frequency of 1.25 Hz (Figure 8Bi). For the pseudo-ECG, ranolazine did not eliminate the CO-induced ECG morphological changes in heart failure tissue and further led to failed depolarization due to the considerably prolonged repolarization phase of the last cycle (Figure 8Bii).
Figure 9 shows the effects of the other six multi-channel blockers on ECG morphology in heart failure conditions. Due to the remodeled transmural gradient of repolarization in the heart failure condition, the T-wave was almost flattened. In terms of the QT interval, amiodarone (0.0005 μM), nifedipine (0.005 μM), and verapamil (0.03 μM) had almost no effect on the QT interval, and bepridil (0.01 μM) slightly prolonged the QT interval. In addition, quinidine (1 μM) and vanoxerine (0.005 μM) caused depolarization failure. Overall, all six drugs were not effective against CO-induced arrhythmias in heart failure conditions.

FIGURE 9. Effects of six multi-channel blockers on CO-affected ECG morphology by heart failure (HF).
Investigating the critical cell number for triggering ectopic beats
The baseline model of HF + CO showed that CO could induce pronounced EAD activities in MID cells, but these EADs did not evolve into ectopic beats in 1D tissue due to the 'source-sink' effect (i.e., the depolarization force of EAD is not able to trigger an excitation due to the limited number of EAD cells) (Xie et al., 2010). Applying ranolazine did not trigger ectopic beats in the tissue either; however, it did diminish the repolarization ability in terms of the cellular action potential (Figure 8A). To give a more intuitive presentation of the increased proarrhythmic risk of ranolazine, we quantified the risk by measuring the critical number for generating the ectopic beat. Specifically, we constructed a 1D model of HF MID cells, with its central segment set to be CO-affected, and the minimum number of affected cells that could overcome the source-sink effect and lead to ectopic beats was recorded as the critical cell number. As shown in Figure 10, simulations suggested that the critical cell number under CO conditions was 68, corresponding to a tissue length of 10.2 mm. In contrast, the critical cell number was only 58 after the addition of ranolazine, which suggested an increased susceptibility to ectopic beats. Action potentials of representative cells within the CO-affected region (marked '*' and '**' in Figure 10) are plotted in the right panels of Figure 10. In our previous study (Jiang et al., 2022), we showed that the suppression of I Kr is the main factor responsible for the CO-induced prolongation of APD and QT interval. Considering the critical role of I Kr in the pathological pathway and the poor efficacy of multi-channel blockers, we evaluated several specific I Kr activators in this section. For simplicity, the simulation results of a representative drug, HW-0168 (full name: N-(2-(tert-butyl)phenyl)-6-(4-chlorophenyl)-4-(trifluoromethyl)nicotinamide) (Dong et al., 2019), are presented in detail (Figures 11, 12), whereas only the effective dose is recorded for the other activators (Table 4).

FIGURE 11. Actions of HW-0168 (HW) on CO-affected myocardial cells and tissues. (A) The comparison of action potentials of three cell types under 'healthy', 'healthy + CO', and 'healthy + CO + HW' conditions. (B) Spatial-temporal plots under the 'healthy + CO + HW' condition (Bi), and the corresponding pseudo-ECGs (Bii). Note that HW-0168 restored the QT interval almost to the control level.

On the 'healthy + CO' condition, it can be observed that HW-0168 at a dose of 0.5 μM [therapeutic range suggested in clinical studies: 0.5-1 μM (Dong et al., 2019)] effectively shortened the APD prolongation caused by CO and reversed the prolonged APD 90 to almost the same as under the healthy condition. Generated pseudo-ECGs using 1D transmural ventricular strand models showed consistent results: HW-0168 restored the prolonged QT interval to a level that was almost identical to the control condition (Figure 11Bii). In addition, HW-0168 also improved the conduction properties of excitation waves and shortened the conduction wavelength of the tissue (Figure 11Bi).
The efficacy of HW-0168 under heart failure conditions is presented in Figure 12. Simulation results showed that HW-0168 effectively reversed the proarrhythmic effects (i.e., prolonged APDs and EADs) of CO in all three cell types ( Figure 12A), and shortened the excitation wavelength in the heart failure tissue (Figure 12Bi). For the ECG, although the drug did not restore the altered T-wave morphology in heart failure, it eliminated the QT interval prolongation effects by CO.
According to the above results, the selective I Kr activator achieved the desired treatment for CO-induced arrhythmias. Therefore, more existing I Kr activators [i.e., KB130015 (Gessner et al., 2010), ICA-105574 (Asayama et al., 2013), NS1643 (Casis et al., 2006), NS3623 (Hansen et al., 2006)] were tested, and the doses of the drugs at which the QT interval was restored were recorded in Table 6. According to our simulation results, ICA-105574 was the most sensitive one, which restored the QT interval and suppressed EADs (under heart failure conditions) at a dose of only 0.25 μM. Note that the maximum I Kr activation (152%) of NS1643 was still not able to restore the QT interval to its control level; however, 30 μM NS1643 greatly shortened the QT interval to a normal range and was enough to suppress EADs in heart failure cells.
Simulating drug efficacy based on cell population models
Considering the potential influence of intercellular or intersubject variability on the reported findings, we built cell population models and performed additional simulations based on them. The simulation results are shown in Figure 13. It can be observed that EADs occurred occasionally under the HF condition, with a ratio of only 2.6%. Next, after considering the effects of CO, the APDs of the cell populations were generally prolonged, and the ratio of cells with EADs increased to 18.5%. The administration of ranolazine aggravated the situation, and the ratio of EAD cells increased dramatically to 58.2% (as shown in panel Aiii). In contrast, the addition of HW effectively alleviated the above arrhythmogenesis at the cellular level, which was evidenced by the complete suppression of EAD activities and the generally shortened APDs.
Simulating pseudo-ECGs based on a 2D realistic ventricular slice
To avoid potential differences caused by the simplified model geometry, we conducted simulation experiments for two representative drugs, i.e., ranolazine and HW-0168, using a 2D realistic ventricular slice model. The simulation results are shown in Figure 14. It can be observed that the tissue slice with ranolazine took more time to repolarize than that with HW-0168 (Figure 14A). In terms of the ECG, the 2D-based ECGs are consistent with the 1D-based ones (Figure 14B). For example, ranolazine further prolonged the QT interval on top of CO and therefore exacerbated the proarrhythmic effect. On the other hand, HW-0168 still exerted its antiarrhythmic effects by restoring the QT interval.
Main findings
The severe cardiotoxic consequences of CO urgently require an effective therapeutic strategy. In this study, we evaluated the efficacy of various multi-channel blockers and specific I Kr activators against CO-induced ventricular arrhythmias in healthy and failing hearts. The major findings are as follows: 1) The tested existing antiarrhythmic drugs failed to rescue the heart from CO-induced arrhythmias, and most of them even aggravated the arrhythmogenic condition, as evidenced by more frequent EAD activities and decreased critical cell numbers for triggering ectopic beats. 2) In contrast, specific I Kr activators demonstrated good efficacy according to the improved biomarkers at both the cellular and tissue levels. All of the tested I Kr activators restored the prolonged QT intervals in both healthy and heart failure conditions, and the EADs in MID cells were successfully suppressed as well. 3) In-depth case analysis with ranolazine and HW-0168 revealed the critical role of I Kr in the CO-induced functional changes in cardiac electrophysiology; neither I CaL nor I NaL blockers were able to offset the decreased repolarization forces caused by the CO-induced I Kr inhibition. 4) Of note, the drug ranolazine was previously suggested as a potential strategy for dealing with CO-induced arrhythmogenesis due to its good efficacy demonstrated in rats, and the failure of ranolazine in human tissue in this study hinted at the crucial role of inter-species variance when determining the pharmacotherapeutic strategy.
Species-dependent effects of ranolazine for the treatment of CO-induced arrhythmias
Ranolazine was first suggested in Dallas et al.'s study (Dallas et al., 2012) for the treatment of CO-induced arrhythmias. Based on the experimental results obtained from rats, they proposed that CO-induced EADs arise from the activation of NO synthase, which in turn leads to the NO-mediated nitrosylation of Na V 1.5 and an enhanced I NaL. Correspondingly, the I NaL inhibitor ranolazine abolished the EADs and was considered to be effective in dealing with CO-induced arrhythmias. Similarly, Morita et al. also observed antiarrhythmic effects of ranolazine through its suppression of reentrant and multifocal ventricular fibrillation in rat ventricles (Morita et al., 2011). However, APs in rats are distinctly different from those in humans, and such a discrepancy may lead to species-dependent effects of the same drug. This hypothesis was explored in Al-Owais et al.'s study (Al-Owais et al., 2017), where the effects of ranolazine were examined in guinea pigs, a species with action potentials more closely resembling those of humans. Interestingly, ranolazine failed to abolish CO-induced EADs and even exacerbated such proarrhythmic factors. Our simulations suggested that ranolazine exerts similar proarrhythmic effects in human hearts. Specifically, ranolazine further prolonged AP durations and QT intervals in healthy human simulations (Figure 4), while in heart failure conditions it led to more pronounced EADs in MID and EPI cells (Figure 6). The above model-dependent effects of ranolazine arise from differences in I Kr, a major outward current responsible for repolarization in human APs but almost negligible in rat myocytes (Pandit et al., 2001). Although the I NaL inhibition effects of ranolazine tend to suppress EADs, the drug can also reduce the repolarization force by inhibiting I Kr. Further assessment using a 1D homogeneous ventricular strand consisting of only MID cells found that ranolazine decreased the critical cell number for triggering ectopic beats (from 68 to 58), which also suggests an increased arrhythmogenic risk of the drug. These findings provide new insights into the side effects of ranolazine in the treatment of CO-induced arrhythmias. They also highlight that drug effects obtained in rats need to be carefully interpreted in clinical trials due to species-dependent differences.
I Kr activator: a promising pharmacotherapy for the treatment of CO-induced arrhythmias
In addition to ranolazine, we evaluated more existing antiarrhythmic drugs to find potential drug strategies for CO-induced arrhythmias. Calcium current blockers were focused on in the hope of attenuating the depolarizing force in the plateau phase and therefore shortening the action potential and the QT interval. However, none of the six drugs was able to rescue the heart from arrhythmogenesis, and most of them even worsened the condition, as evidenced by further prolonged QT intervals and more frequently observed EAD activities. By analyzing the separate role of each channel current in the integral effect of multi-channel drugs, we found that blocking I CaL and I NaL was not able to offset the reduction of I Kr by CO; furthermore, most of these multi-channel blockers also inhibited I Kr with a relatively low affinity. Indeed, the hERG channel that conducts I Kr is a highly sensitive target, and it accounts for the majority of drug withdrawal events in the last two decades (Brown, 2004; Stockbridge et al., 2013; Villoutreix and Taboureau, 2015). On the other hand, there are few drugs in the current antiarrhythmic categories that exert I Kr-activating effects (Lei et al., 2018), making it difficult to find a proper drug strategy. We also tried pinacidil (an I KATP activator) in the model, but it did not produce any significant efficacy either (data not shown). This can be attributed to the fact that the K-ATP channel barely opens under normoxic conditions due to its ATP-sensitive characteristic (Dart and Standen, 1995); therefore, I KATP would not make an obvious difference even when a high magnification ratio was used in the model.
In-depth analysis has demonstrated that I Kr plays a major role in CO-induced arrhythmogenesis (Jiang et al., 2022). Considering that existing multi-channel antiarrhythmic drugs did not achieve idealized efficacy, we turned to evaluate the potential pharmacological effect of specific I Kr activators. In line with expectations, the simulation results showed that I Kr activators could effectively reverse the proarrhythmic effects of CO. All the tested drugs, albeit at different doses, restored AP and ECG morphologies almost to their control levels in healthy human simulations, and they also suppressed EADs and ectopic beats in heart failure human simulations. These findings suggest that the I Kr activator is a promising pharmacotherapy for the treatment of CO-induced arrhythmias.
Potential limitations of this study
This study lacks validation of the heart failure models. Although we adopted a well-established cell model of heart failure and replicated several known electrophysiological changes in failing hearts, for example, the prolonged APD (Akar and Rosenbaum, 2003; Lou et al., 2012), the decreased conduction velocity (Akar et al., 2004), the widened QRS complex (Shenkman et al., 2002; Sandhu and Bahler, 2004), and the prolonged QT interval (Davey et al., 2000; Medina-Ravell et al., 2003), we did not find enough tissue-level experimental data to validate other observations such as the flattened T-wave.
The above limitations shall not change the main conclusions of this study. Specifically, most of the observations and conclusions in the present study were based on the damaged cellular repolarization and the consequent QT prolongation in failing hearts, which are well established in biological experiments (Davey et al., 2000; Medina-Ravell et al., 2003; Lou et al., 2012; Ng et al., 2014). In addition, for the EAD phenomenon, we adopted relatively conservative parameters (i.e., no EAD phenomenon occurred under pure heart failure conditions) to avoid exaggeration of the experimental results. The experimental data on CO effects, drugs, and currents used in this study were obtained from different species, and the CO effects were obtained at room temperature. Interspecies differences and temperature dependence should be taken into account when interpreting and translating the results. The effects on APD in this study were measured in individual isolated ventricular myocytes, and the potential cell-coupling effects on the APD in high-dimensional models were not considered. Besides, the pathological model of CO was constructed based on experimental data obtained from different CORM-2 doses (10-30 μM) (Al-Owais et al., 2021), which should be considered in future studies. As for the drugs, the I Kr activators proposed in this study for the treatment of CO-induced arrhythmias currently face some disadvantages and unknowns. Specifically, compared with FDA-approved drugs such as ranolazine and amiodarone, I Kr activators represented by HW-0168 are currently only used in biological and simulation experiments; their effective doses have not been clinically verified, and their side effects have not been disclosed. Moreover, whether these I Kr activators interact with ion channels other than I Kr remains unknown. If this is the case, then they must be treated as multiple-channel drugs, and the potential offset or synergy effects among the involved ion currents should be considered.
Finally, according to our previous review (Zhang et al., 2021), CO is also known to affect multiple cellular pathways other than the ion channels considered in this study. The present study mainly considered arrhythmias caused by changes in ionic currents directly induced by CO, without considering the mitochondrial toxicity of CO and some other complicated electrophysiological remodeling induced by cellular ischemia. Specifically, CO poisoning will increase ROS and RNS (Piantadosi, 2008), which further impair mitochondrial energetics and can alter intracellular calcium handling as well (Hegyi et al., 2021). This alteration will subsequently impact the expression and trafficking of channels (Sutanto et al., 2020). These cellular pathways warrant further investigation in the future.
Conclusion
In this study, we conducted an in silico assessment of the efficacy of some common antiarrhythmic drugs and specific I Kr activators on CO-induced arrhythmias under healthy and heart failure conditions. We showed that existing antiarrhythmic drugs like ranolazine failed to exert therapeutic effects, and even worsened the arrhythmogenic situation in failing hearts. In contrast, specific I Kr activators such as HW-0168 can effectively alleviate the proarrhythmic effects of CO, providing a promising pharmacotherapy for the treatment of CO-induced cardiotoxicity.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding authors.
"year": 2022,
"sha1": "9a5b81096888885f1e12d5b958ef457bf2473f5a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2022.1018299/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "9a5b81096888885f1e12d5b958ef457bf2473f5a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Lightweight Anonymous Client–Server Authentication Scheme for the Internet of Things Scenario: LAuth
The Internet of Things (IoT) connects different kinds of devices into a network and enables two-way communication between devices. A large amount of data is collected by these devices and transmitted in this network, so it is necessary to ensure secure communication between these devices and to make it impossible for an adversary to undermine this communication. To ensure secure communication, many authentication protocols have been proposed. In this study, a fully anonymous authentication scheme for the Internet of Things scenario is proposed; it enables the remote client to anonymously connect to the server and be served by it. The proposed scheme has been verified by AVISPA and BAN Logic, and the results show that it is safe. Besides, the simulation shows that the proposed scheme is more efficient in computation cost and communication cost.
Introduction
The Internet of Things is a network that connects all kinds of sensors, actuators, and other embedded devices. These devices can exchange data remotely via the network. A significant amount of data is collected by these devices and transmitted in this network. Among these data, there are many personal data, for example, blood pressure, pulse, and electrocardiograms, as well as home environment data such as humidity and temperature. People are reluctant to let any party use the data without authorization. There is a need for an authentication scheme to make sure that the data is only accessible to authorized members. Authentication schemes have been studied in the past to solve this problem.
However, in some cases, mutual authentication is not sufficient for protecting the privacy of the clients. In the healthcare environment, an adversary can eavesdrop on the information flow and find out which patient's data is being transmitted; the client's medical condition is revealed in this way. In this study, a lightweight authentication and key establishment scheme is proposed, which enables the remote client to be authenticated anonymously by the server. In the proposed scheme, we only use lightweight security operations: XOR operations, hash functions, and a minimal amount of asymmetric encryption to fulfill perfect forward secrecy. As discussed in previous work, these operations are relatively lightweight; we will continue to discuss this point in Section 7.1. As energy consumption is of paramount importance in contexts where energy is provided by small batteries, there is a high demand for a lightweight authentication scheme [1,2]. For these two reasons, we came up with this authentication scheme. Our contributions are mainly three-fold:

1. We propose a lightweight anonymous authentication scheme for the Internet of Things scenario; the scheme achieves various security features: perfect forward secrecy, user anonymity, resistance to offline dictionary attacks, etc. In addition, to verify its security features, the proposed scheme is verified by AVISPA and BAN Logic.

2. We specially design the password changing phase, making it more efficient compared to that in related works.

3. We simulate the proposed scheme and other related schemes using C++. The results show that the communication cost and the computation cost are reduced compared with related proposals.
In Section 2, we discuss the related works; in Section 3, we introduce the proposed scheme; Sections 4 and 5 present security analyses using AVISPA and BAN logic; Section 6 is the formal security analysis section; in Section 7, we compare the proposed scheme with related works; in Section 8, we analyze the security features; and Section 9 concludes the paper.
Related Work
Tu et al. proposed an authentication protocol based on a smart card; the protocol is a two-factor authentication scheme based on an elliptic curve [3]. However, this scheme was found to be vulnerable to impersonation attacks; an attacker can impersonate a legal server, according to Farash [4]. Ibrahim et al. proposed secure anonymous mutual authentication for star two-tier wireless body area networks [5]. Chaudhry et al. proposed a remote user authentication scheme using elliptic curve cryptography that can withstand various attacks in the Internet of Things scenario, for example, smart card loss attacks and replay attacks [6]. Kumari analyzed the scheme of Farash [7] and found that it is vulnerable to various attacks, for example, impersonation attacks, password guessing attacks, and temporary session-specific information reveal attacks.
Jing et al. proposed an authentication scheme between user and server which protects the identity privacy of the user well [8]; however, their scheme requires extra storage capacity at the server side. In the scheme of Xiong [9], only registered users can authenticate each other and build a shared key; this shared key is known only by the two registered users, and even the network manager cannot learn it. From the public information transmitted between the two users, an adversary is unable to learn this shared key. The scheme of Jing et al. is equipped with elliptic curve cryptographic primitives and achieves anonymity regardless of the network infrastructure; it enables the server to provide various services for a client more than once with a negligible computational cost [10]. Idrissi proposed a security scheme for mobile agents based on two techniques: anonymous authentication and intrusion detection [11]. In the work of Xiong et al. [12], anonymity is enabled; however, the gateway has to store a large number of identity and key pairs.
In some schemes, the gateway assigns a random number, and a unique key based on this number, to each client. This number is used as an indicator of the key, and the user encrypts his identity with this key. Many other schemes protect the identity of the users in this way, for example, the schemes in the works of [13][14][15][16][17][18]. Biometrics are used in the schemes of Wu et al. [19], Odelu et al. [20], Wang et al. [21], and Islam et al. [22], where human biometrics are extracted as random strings by using a fuzzy extractor.
The partial public key method is another popular approach. He et al. proposed an efficient identity-based privacy-preserving authentication scheme for vehicular ad hoc networks [23], in which batch verification is used. The concept of a partial public key is also used in the scheme of Islam et al. [24]: a user registers at the server several times in order to obtain more than one authentication key, and can then use different keys for authentication to achieve anonymity. The scheme of Porambage et al. [25] also uses the partial public key concept. Tsai et al. proposed a scheme for distributed mobile cloud computing services [26]; the security strength of their scheme is based on bilinear pairing and dynamic nonce generation. There are other schemes that are based on elliptic curve security [27][28][29].
Structure of the Scheme
There are two types of entities in the scheme, remote clients and the server, as shown in Figure 1.
1. A client is the one who wants to access the services provided by the server. A client first registers at the server; after the registration, he can conduct a mutual authentication with the server. After authentication, the two can build a shared key, and the client can access the server's services using this key.
2. A server is the one that provides different kinds of services to the clients. A server is also responsible for the registration and password modification of the clients. Before the server provides a service to a client, the server has to verify whether the client is a registered one or not.
The proposed scheme is a mutual authentication scheme between the client and the server. It consists of three phases: the registration phase of the client, the mutual authentication and key establishment phase, and the phase for a client to change his password.
System Initialization
In the beginning, the server S generates and publicizes the parameters of an elliptic curve, {p, a, b, P, n, h}. After that, S generates its private key X_GWN and keeps it secret. The symbols used in this study are summarized in Table 1.
Registration Phase
All the clients have to register at the server. A client C_i with identity ID_i generates a registration request message and sends this request to the server S. When the server receives the message, it generates the keys for client C_i and sends these keys back to the client. Table 2 describes the process.
On the client side:
1. Client C_i chooses a random number r_i.
2. Client C_i calculates a hash message MP_i = h(r_i||ID_i||PW_i).
3. Client C_i sends {ID_i, MP_i} to the server.
On the server side:
1. Server S calculates a hash message d_i = h(ID_i||X_GWN).
2. Server S calculates f_i = d_i ⊕ MP_i.
3. Server S chooses a random number k_i.
4. Server S calculates a hash message e_i = h(k_i||X_GWN).
5. Server S calculates h_i = e_i ⊕ MP_i.
6. Server S sends {f_i, h_i, k_i} and other system parameters to the client C_i.
Table 2. Registration phase.
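To make the registration data flow above concrete, the following minimal sketch traces it with ordinary primitives. It assumes SHA-256 for h(·) and byte-wise XOR; the identity and password values are hypothetical, and the "||" concatenation is modeled with a separator byte string.

```python
import hashlib, os

def h(*parts: bytes) -> bytes:
    """Hash of the '||'-concatenation of the inputs (SHA-256 assumed)."""
    return hashlib.sha256(b"||".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- client side (hypothetical credentials) ---
ID_i, PW_i = b"client-01", b"secret-password"
r_i = os.urandom(4)                  # random number chosen by the client
MP_i = h(r_i, ID_i, PW_i)            # MP_i = h(r_i || ID_i || PW_i)
# client sends {ID_i, MP_i} to the server

# --- server side ---
X_GWN = os.urandom(32)               # server's long-term private key
d_i = h(ID_i, X_GWN)                 # d_i = h(ID_i || X_GWN)
f_i = xor(d_i, MP_i)                 # f_i = d_i XOR MP_i
k_i = os.urandom(4)                  # random number chosen by the server
e_i = h(k_i, X_GWN)                  # e_i = h(k_i || X_GWN)
h_i = xor(e_i, MP_i)                 # h_i = e_i XOR MP_i
# server sends {f_i, h_i, k_i} back to the client
```

Note how the server never stores MP_i itself; the card later recovers d_i and e_i by XOR-ing the stored f_i and h_i with a freshly recomputed MP_i.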
Authentication Phase
If a client C_i with identity ID_i wants to request a service from the server S, the two first have to authenticate each other and build a shared key. The client C_i inserts the smart card into a card reader and inputs his identity ID_i and password PW_i. The smart card (SC) prepares the following message and sends it to the server S.
1. The client C_i inserts his smart card into a card reader and inputs his identity ID_i and password PW_i.
2. SC computes MP_i = h(r_i||ID_i||PW_i), d_i = f_i ⊕ MP_i and e_i = h_i ⊕ MP_i.
3. SC gets the current timestamp T_1 and the random number k_i.
4. SC generates a fresh random number k_1 and computes A_1 = k_1·P.
5. SC gets the hash M_1 = h(A_1||ID_i||k_i||d_i||T_1) and the corresponding encrypted message M_2. Finally, SC sends {k_i, A_1, M_2, T_1} to the server S.
When the server S receives the incoming message, it first checks the correctness of the message; after the verification, the server generates the shared key between itself and the client. Then the server prepares the message to send back to the client.
1. Server S checks the freshness of T_1; if T_1 is not fresh, server S abandons the incoming message and the scheme ends here.
2. Server S calculates the key h(k_i||X_GWN) based on k_i and uses it to decrypt M_2.
3. Server S calculates d_i = h(ID_i||X_GWN) based on the identity ID_i.
4. Server S checks whether M_1 = h(A_1||ID_i||k_i||d_i||T_1); if they are equal, the server accepts the incoming message, otherwise the scheme terminates here.
5. Server S generates a fresh random number k_2, computes B_2 = k_2·P and the shared key SK = h(k_1·k_2·P||T_1).
6. Server S calculates a new random number k_inew = h_1(SK||T_1).
7. Server S calculates a hash message e_inew = h(k_inew||X_GWN).
8. Server S prepares the reply and sends {B_2, M_4} to the client C_i.
When client C_i gets the message {B_2, M_4}, C_i will do the following steps to authenticate the incoming message; if the client verifies the message, he will build a shared key with the server.
1. Client C_i computes the shared key SK' = h(k_1·B_2||T_1).
2. Client C_i verifies the received M_4; if the check succeeds, C_i accepts the shared key SK', and now client C_i and the server S can communicate using the shared key SK = SK'; otherwise the scheme terminates here.
3. Client C_i updates h_i = e_inew ⊕ MP_i and k_i = k_inew.
Now the client C_i and the server S have authenticated each other and built a shared key. Table 3 below depicts the whole process.
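Since the elliptic-curve point multiplications are what make the key agreement work, a compact way to see the session-key derivation SK = h(k_1·k_2·P||T_1) is to run the same exchange in any Diffie-Hellman-style group. The sketch below uses a toy mod-p group as a stand-in for the paper's secp256k1 point multiplication; the group parameters and helper names are illustrative only and offer no real security.

```python
import hashlib, secrets, time

# Toy group: 2**127 - 1 is a Mersenne prime, generator g = 3.
# g**k mod p stands in for the scalar multiplication k * P.
P_MOD, G = 2**127 - 1, 3

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def i2b(n: int) -> bytes:
    return n.to_bytes(16, "big")

T1 = int(time.time()).to_bytes(4, "big")    # 4-byte timestamp, as in Section 7.2

# Client: fresh ephemeral k1, A1 = g^k1 (stand-in for A_1 = k_1 * P)
k1 = secrets.randbelow(P_MOD - 2) + 1
A1 = pow(G, k1, P_MOD)

# Server: fresh ephemeral k2, B2 = g^k2 (stand-in for B_2 = k_2 * P)
k2 = secrets.randbelow(P_MOD - 2) + 1
B2 = pow(G, k2, P_MOD)

# Both ends reach the same secret g^(k1*k2) (stand-in for k_1*k_2*P)
# and derive SK = h(k_1*k_2*P || T_1)
SK_server = h(i2b(pow(A1, k2, P_MOD)), T1)
SK_client = h(i2b(pow(B2, k1, P_MOD)), T1)
assert SK_client == SK_server
```

Because both ephemerals k_1 and k_2 are discarded after the session, a later compromise of long-term secrets does not reveal SK, which is the intuition behind the perfect forward privacy claim proved later.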
Password Change Phase
When a client C_i wants to change his password, he can send a request to the server S; this request is sent over a public channel. Table 4 describes this process.
1. The client C_i inserts his smart card into a card reader and inputs his identity ID_i and password PW_i.
2. SC computes MP_i = h(r_i||ID_i||PW_i), d_i = f_i ⊕ MP_i and e_i = h_i ⊕ MP_i.
3. SC gets the current timestamp T_1 and the random number k_i.
4. SC gets the hash M_1 = h(ID_i||k_i||d_i||T_1) and encrypts {ID_i, M_1} with the key e_i = h(k_i||X_GWN) to obtain M_2. Finally, SC sends {k_i, M_2, T_1} to the server S.
When the server S receives the message, it verifies whether the message comes from a legitimate client; after that, the server S sends a reply to the client C_i.
1. Server S checks the freshness of T_1; if T_1 is not fresh, server S abandons the incoming message.
2. Server S calculates the key h(k_i||X_GWN) based on k_i.
3. Server S uses the key h(k_i||X_GWN) to decrypt M_2 to get ID_i and M_1.
4. Server S calculates d_i = h(ID_i||X_GWN) based on the identity ID_i.
5. Server S checks whether M_1 = h(ID_i||k_i||d_i||T_1); if they are equal, the server verifies the incoming message, otherwise the scheme terminates here.
6. Server S computes M_3 = h(ID_i||d_i||k_i||T_1) and sends it back to the client C_i as the reply.
When the client C_i receives the reply message from the server S, the smart card checks the correctness of this message; if it is from the server S, then the smart card will allow the client C_i to input his new password.
1. SC checks whether M_3 = h(ID_i||d_i||k_i||T_1); if they are equal, then the client is allowed to change his password.
2. SC computes d_i = f_i ⊕ MP_i using the stored f_i and the old MP_i.
3. SC computes e_i = h_i ⊕ MP_i using the stored h_i and the old MP_i.
4. Client C_i inputs the new password PW_i*.
5. SC updates MP_i to MP_i* = h(r_i||ID_i||PW_i*).
6. SC uses this new MP_i* to update the stored versions of f_i and h_i to f_i* = d_i ⊕ MP_i* and h_i* = e_i ⊕ MP_i*.
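The card-side update logic lends itself to a short sketch: once the server's reply is verified, the card unmasks d_i and e_i with the old MP_i and re-masks them with the new MP_i*. The following sketch assumes SHA-256 and byte-wise XOR; the stored card values are made-up placeholders.

```python
import hashlib, os

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Values fixed at registration (hypothetical)
ID_i, PW_old, PW_new = b"client-01", b"old-password", b"new-password"
r_i = os.urandom(4)
MP_old = h(r_i, ID_i, PW_old)
d_i, e_i = os.urandom(32), os.urandom(32)       # server-derived secrets
f_i, h_i = xor(d_i, MP_old), xor(e_i, MP_old)   # values stored on the card

# Steps 2-6 above (run after the server's M_3 reply has been verified):
d_rec = xor(f_i, MP_old)                        # recover d_i with old MP_i
e_rec = xor(h_i, MP_old)                        # recover e_i with old MP_i
MP_new = h(r_i, ID_i, PW_new)                   # MP_i* = h(r_i || ID_i || PW_i*)
f_new, h_new = xor(d_rec, MP_new), xor(e_rec, MP_new)

# The re-masked values still hide the same secrets under the new password
assert xor(f_new, MP_new) == d_i and xor(h_new, MP_new) == e_i
```

Because only XORs and one hash are needed on the card, the password change stays cheap, which is what the later simulation in Figure 4 reflects.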
Security Analysis by AVISPA
Automated Validation of Internet Security Protocols and Applications (AVISPA) is "a push-button tool for the automated validation of Internet security-sensitive protocols and applications" [30]. To test the security features of the proposed scheme, we specify it in a role-based language called the High-Level Protocol Specification Language (HLPSL), which is used for describing protocols and specifying their intended security features. The HLPSL code is listed in Appendix A.
We run the security check using the CL-based model checker [31] and the On-the-Fly Model-Checker (OFMC) [32,33]. The simulation results shown in Table 5 demonstrate that the proposed scheme is safe.
Security Analysis Using BAN Logic
We conducted a security analysis of the proposed scheme using the Burrows-Abadi-Needham logic (BAN logic) [34]. Using BAN logic, we can determine whether the exchanged information is trustworthy and secure against eavesdropping. For more information on the symbols and primary postulates of BAN logic, please refer to our previous work [35].
The Premise and Proof Goals
Suppose there are two entities in the system: the client C_i and the server S. Before we start the proof, we first translate the messages into the idealized form of BAN logic; the results are shown in Table 6. Table 6. The idealized form of the messages.
The goals in BAN logic are described below. These goals ensure that C_i and S agree on a shared key SK.
Assumptions
We make some assumptions to help us prove the protocol; the assumptions are listed in Table 7. First, we show the proof of assumptions A1 and A3.
2. According to (1) and the "promotion #" rule:
3. According to (2) and the "promotion #" rule:
4. According to (3) and the "elimination of multipart messages" rule:
Next, we show the proof of assumptions A2 and A4. By checking the timestamp T_1, the server S can judge whether T_1 is fresh or not; if T_1 is not fresh, the server S will abandon the message and the scheme ends here. Thus, we only consider the situation where server S believes that timestamp T_1 is fresh, i.e., S| ≡ #(T_1).
5. According to the "promotion #" rule:
6. According to (5) and the "elimination of multipart messages" rule:
After registration, both the server S and the client C_i believe that they have a shared key d_i; translating this into BAN logic, we get assumption A6. Assumption A7 says that client C_i believes server S has complete control over the data B_2, and assumption A8 says that server S believes client C_i has complete control over the data A_1.
The Proof of the Proposed Scheme
In this section, we start the proof. According to the message {k_i, A_1, {A_1, ID_i, k_i, T_1}_{d_i}, T_1}, which the client C_i sends to the server S, we can get the following:
Formal Security Analysis
Suppose G_1 is a cyclic additive group of prime order q and P is the generator of G_1. The Elliptic Curve Computational Diffie-Hellman (ECCDH) problem is believed to be computationally hard. The security of the shared key of the proposed scheme is based on the computational hardness of the ECCDH problem.
Definition 1 (ECCDH problem). For any a, b ∈ Z_q* with c = ab, given an instance ⟨aP, bP⟩, it is computationally intractable to compute cP = abP.
Theorem 1. The proposed scheme achieves shared key security if and only if the ECCDH problem is unable to be solved in polynomial time.
We define shared key security as follows: an adversary is unable to obtain the shared key between the client C_i and the server S based on the messages transferred publicly between them.
Proof.
(⇒) Suppose there is an efficient algorithm O_I which could break the ECCDH problem in probabilistic polynomial time. The adversary is able to get the messages publicly sent between the client C_i and the server S: {k_i, A_1, M_2, T_1} and {B_2, M_4}. Suppose a·P = A_1 = k_1·P and b·P = B_2 = k_2·P; the adversary A_I is then able to get c·P = k_1·k_2·P by using the efficient algorithm O_I, so the adversary is able to break the security of the shared key and obtain the shared key h(k_1·k_2·P||T_1).
(⇐) Suppose there is an efficient algorithm O_II which could get the shared key between client C_i and server S. As the hash operation is secure, the adversary has to get the shared key by calculating k_1·k_2·P. This means that, given A_1 = k_1·P and B_2 = k_2·P, an adversary A_II is able to get k_1·k_2·P. For the ECCDH problem, suppose a·P = A_1 = k_1·P and b·P = B_2 = k_2·P; the adversary is then able to get c·P = a·b·P = k_1·k_2·P. This apparently contradicts the hardness of the ECCDH problem.
Theorem 2. The proposed scheme achieves perfect forward privacy.
Proof.
The proof of perfect forward privacy is similar to that of Theorem 1. Even if the private key of the client is leaked to the adversary, what the adversary gets is the same public information {k_i, A_1, M_2, T_1} and {B_2, M_4}. Thus, it is unable to get the past session keys either.
Comparison
In this section, we compare our scheme with related works in terms of computation cost and communication cost across the registration, authentication and password change phases. The schemes are implemented in C++; the running code has been uploaded to a public repository on github.com [36]. The MIRACL C/C++ library is used in this study [37]; the library can be accessed at github.com [38]. The experiments were conducted in Visual Studio C++ 2017 on a 64-bit Windows 7 operating system with a 3.5 GHz processor and 8 GB of memory. The hash function is SHA-256; the symmetric encryption/decryption function is AES in MR_PCFB1 form, and the 256-bit key for the symmetric encryption/decryption function is generated by a SHA-256 hash operation. The Koblitz curve secp256k1, which is recommended by NIST, is used in this study [39]; the parameters of this curve are listed in Appendix B. The code is compiled in x86 form, and the simulation does not take into account the transmission of the data.
Computational Performance Analysis
First, we compare the computation costs of these schemes in the form of operations per phase. T_H, T_MUL, T_ADD and T_E/D denote the computation cost of a SHA-256 operation, an element multiplication in G_1, an element addition in G_1, and an AES symmetric encryption/decryption operation, respectively. The results are listed in Table 8. As shown in the table, the computation cost of the proposed scheme is minimal in all conditions, as T_MUL > T_H and T_E/D > T_H. Thus, the proposed scheme has an advantage in computation cost and energy consumption compared to related works. To verify this analysis of the computation cost, we also simulated the schemes in the aforementioned environment.
Reference             Registration Phase   Authentication Phase       Password Change Phase
Tu et al. [3]         2T_H + 1T_MUL        10T_H + 7T_MUL + 1T_ADD    6T_H + 1T_MUL + 4T_E/D
Chaudhry et al. [6]   5T_H + 1T_MUL        14T_H + 6T_MUL + 1T_ADD    -
Wu et al. [19]        4T_H                 12T_H + 4T_MUL + 4T_E/D    9T_H + 1T_MUL + 2T_E/D
Our scheme            3T_H                 14T_H + 4T_MUL             9T_H
First, we run the registration phase of the different schemes 5, 10, 15, 20 and 25 times separately. The computation times are shown in Figure 2; the horizontal axis represents the number of runs of the experiment, and the vertical axis represents the time in milliseconds required to run the experiment. The computation costs of Wu et al. [19] and of the proposed scheme are relatively small, while the schemes of Chaudhry et al. [6] and of Tu et al. [3] cost more computation time. This is mainly because the proposed scheme and the scheme of Wu et al. [19] only need lightweight operations, SHA-256 hash operations and XOR operations, while the schemes of Chaudhry et al. [6] and of Tu et al. [3] require symmetric encryption/decryption operations, which cost more computation time.
Second, we run the authentication and key establishment phase of the different schemes 5, 10, 15, 20 and 25 times separately. The computation costs are shown in Figure 3, with the same axes as before. Again, the computation costs of Wu et al. [19] and of the proposed scheme are relatively small, while the schemes of Chaudhry et al. [6] and of Tu et al. [3] cost more computation time; the computation cost of the proposed scheme is the minimal.
Third, we run the password change phase 5, 10, 15, 20 and 25 times separately. The computation costs are shown in Figure 4, again with the same axes. The computation cost of the proposed scheme is the minimal, while the computation costs of Wu et al. [19] and of Tu et al. [3] are much higher. This is because the proposed scheme only needs SHA-256 hash operations and XOR operations, while the schemes of Wu et al. [19] and of Tu et al. [3] need symmetric encryption/decryption and elliptic curve operations, which cost more computation time.
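As a rough sanity check of Table 8, one can plug the per-phase operation counts into a simple cost model. The unit costs below are invented placeholders, not the paper's measured MIRACL timings, so only the relative ordering of the totals is meaningful.

```python
# Hypothetical unit costs in milliseconds (placeholders, not measured values)
T_H, T_MUL, T_ADD, T_ED = 0.01, 1.2, 0.02, 0.15

# Operation counts per phase (n_H, n_MUL, n_ADD, n_E/D), taken from Table 8
schemes = {
    "Tu et al. [3]":       {"reg": (2, 1, 0, 0), "auth": (10, 7, 1, 0), "pwd": (6, 1, 0, 4)},
    "Chaudhry et al. [6]": {"reg": (5, 1, 0, 0), "auth": (14, 6, 1, 0), "pwd": None},
    "Wu et al. [19]":      {"reg": (4, 0, 0, 0), "auth": (12, 4, 0, 4), "pwd": (9, 1, 0, 2)},
    "Our scheme":          {"reg": (3, 0, 0, 0), "auth": (14, 4, 0, 0), "pwd": (9, 0, 0, 0)},
}

def cost(counts):
    """Total cost of one phase under the linear cost model."""
    if counts is None:                      # phase not supported by the scheme
        return float("nan")
    n_h, n_mul, n_add, n_ed = counts
    return n_h * T_H + n_mul * T_MUL + n_add * T_ADD + n_ed * T_ED

for name, phases in schemes.items():
    print(name, {p: round(cost(c), 3) for p, c in phases.items()})
```

With any unit costs satisfying T_MUL > T_H and T_E/D > T_H, the model reproduces the ordering observed in Figures 2-4.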
Communication Performance Analysis
In this part, we compare all the schemes in terms of communication cost. We use the same criteria as in the study of Jing et al. [8]: an identity costs 2 bytes. The general hash operation in this study is SHA-256, so the result of a hash operation is set to 32 bytes. The random number is set to 4 bytes and the timestamp to 4 bytes. An element of the group G_1 of the Koblitz curve secp256k1 is 64 bytes, and the order |q| of G_1 is 32 bytes long. At the registration phase, the client sends {ID_i, MP_i} to the server; MP_i is the result of a hash, so it is 32 bytes long, and the length of this message is 2 + 32 = 34 bytes. The server replies with {f_i, h_i, k_i}, where f_i and h_i are 32 bytes each and k_i is 4 bytes, so this message is 32 + 32 + 4 = 68 bytes long. Table 9. Communication costs of different schemes.
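The byte accounting above can be reproduced mechanically. In the sketch below the sizes follow the stated criteria; treating M_2 and M_4 as hash-sized is our simplifying assumption, since their exact ciphertext lengths depend on the AES padding.

```python
# Byte sizes from the stated criteria: identity 2, hash 32, random number 4,
# timestamp 4, G1 element 64
ID, HASH, RAND, TS, EC_POINT = 2, 32, 4, 4, 64

reg_request = ID + HASH              # {ID_i, MP_i} -> 34 bytes
reg_reply   = HASH + HASH + RAND     # {f_i, h_i, k_i} -> 68 bytes

# Assumption: M_2 and M_4 are counted as hash-sized (32 bytes)
auth_msg1 = RAND + EC_POINT + HASH + TS   # {k_i, A_1, M_2, T_1} -> 104 bytes
auth_msg2 = EC_POINT + HASH               # {B_2, M_4} -> 96 bytes

print(reg_request, reg_reply, auth_msg1, auth_msg2)
```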
Security Feature Analyses
In this section, we analyze the security features of the different schemes. At the end of this section, we summarize the results in Table 10.
Client Anonymity
Regarding client anonymity: in the proposed scheme, the identity of the user is encrypted with a key shared between the client and the server, so an adversary is unable to find out the real identity of the client. In the scheme of Tu et al. [3], the identity of the user is transmitted in the clear, so adversaries can obtain the identity easily. In the schemes of Chaudhry et al. [6] and Wu et al. [19], the identity is encrypted, too.
Perfect Forward Privacy
Perfect forward privacy means that even when an adversary gets the private key of the client or the server, it is unable to recover past session keys based on this private key and the publicly transmitted messages. As we proved in Section 6, the proposed scheme achieves perfect forward privacy.
Meanwhile, the scheme of Chaudhry et al. [6] cannot ensure perfect forward privacy: if the adversary gets the private key msk and the session-related messages DID_ua, EID_ua, Q_ua, T_sb and H_sb, the adversary is able to compute the past session key from them.
Replay Attack
In the proposed scheme, there is a timestamp T_1 in the message {k_i, A_1, M_2, T_1}, and the timestamp T_1 is also concealed in the hash message M_1 = h(A_1||ID_i||k_i||d_i||T_1). If an adversary sends a former message to the server, the server will abandon this message after checking the timestamp. However, if the adversary replaces the timestamp T_1 with a new one, the server can still detect it by checking the hash message M_1 = h(A_1||ID_i||k_i||d_i||T_1). Thus, an adversary is unable to launch a replay attack. For the scheme of Chaudhry et al. [6], if an adversary sends a former message to the server, the server is unable to judge whether the message is a previous one or not; therefore, their scheme is subject to replay attacks.
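This two-layer defence, a freshness window plus the hash binding of T_1, can be expressed in a few lines. The sketch below assumes a 4-byte big-endian timestamp and an arbitrary 5-second freshness window; both are illustrative choices rather than values fixed by the scheme.

```python
import hashlib, time

MAX_SKEW = 5.0  # seconds of allowed clock skew (assumed policy value)

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def accept(k_i: bytes, A_1: bytes, M_1: bytes, T_1: bytes,
           ID_i: bytes, d_i: bytes, now: float | None = None) -> bool:
    """Server-side check: T_1 must be fresh AND bound inside M_1, so an
    adversary can neither replay an old message (stale T_1) nor splice
    a new T_1 into an old message (M_1 mismatch)."""
    now = time.time() if now is None else now
    t = int.from_bytes(T_1, "big")
    if abs(now - t) > MAX_SKEW:                  # stale or future timestamp
        return False
    return M_1 == h(A_1, ID_i, k_i, d_i, T_1)   # M_1 = h(A_1||ID_i||k_i||d_i||T_1)
```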
Offline Dictionary Attack
In the proposed scheme, suppose the adversary gets the values stored in the smart card, {f_i, h_i, k_i, r_i}. The adversary could then attempt an offline dictionary attack in the following steps:
1. The adversary inserts the smart card into a card reader and inputs a random identity and password pair ID_i and PW_i.
2. SC computes MP_i = h(r_i||ID_i||PW_i), d_i = f_i ⊕ MP_i and e_i = h_i ⊕ MP_i.
3. SC gets the current timestamp T_1 and the random number k_i.
4. SC generates a fresh random number k_1 and computes A_1 = k_1·P.
5. SC gets the hash M_1 and the encrypted message M_2; finally, SC sends {k_i, A_1, M_2, T_1} to the server S.
6. If the server sends back a reply message, the identity and password pair is correct; otherwise, go to step 1.
Now, let q_send denote the number of times an adversary can send a message to the server S in a time period. The server sets a limit on q_send; if q_send exceeds this preset limit, the server will no longer process incoming messages from this adversary, so the adversary cannot continue the dictionary attack in this time period. Let |D_id| and |D_pass| denote the dictionary sizes of the identity and the password. The probability p_adv that the adversary guesses the identity and password pair correctly is p_adv = q_send / (|D_id| · |D_pass|). If |D_id| and |D_pass| are large enough, p_adv will be a small value. The aforementioned analysis is based on the authentication phase; the attack on the password change phase is analogous.
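The bound on p_adv is simple arithmetic; for instance, with illustrative (assumed) dictionary sizes and rate limit:

```python
from fractions import Fraction

# Assumed values: 10^6 possible identities, 10^8 possible passwords,
# and a server-side limit of 100 online attempts per time period
q_send, D_id, D_pass = 100, 10**6, 10**8

p_adv = Fraction(q_send, D_id * D_pass)   # p_adv = q_send / (|D_id| * |D_pass|)
print(float(p_adv))                        # 1e-12, i.e. negligible
```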
Meanwhile, in the scheme of Chaudhry et al. [6], the adversary could conduct an offline dictionary attack in the following steps:
1. The adversary inserts the smart card into a card reader and inputs a random identity and password pair ID_i and PW_i.
2. The adversary waits for the computation of the smart card.
3. If the smart card sends out a message, the identity and password pair is correct; otherwise, go to step 1.
As there is no limit on the number of attempts, the adversary can try as many times as he wants and will eventually obtain the correct identity and password pair. This also means that our scheme can withstand the smart card lost attack: when the smart card is lost, the adversary cannot launch an offline dictionary attack to obtain the private values of the client.
Impersonation Attack
In the scheme of Tu et al. [3], an adversary can impersonate the server. Given the message a user sends to the server, {username, V, W}, an adversary can forge the following message, and the user is unable to tell whether this message comes from an adversary or from the server: generate random numbers c, r ∈ Z_n and compute C = c·P and K = c·V. However, in the proposed scheme, if an adversary wants to impersonate the server, it has to obtain d_i = h(ID_i||X_GWN); the probability that an adversary correctly guesses d_i is p_{d_i} = 1/(|D_id| · |D_X_GWN|), where |D_X_GWN| denotes the dictionary size of the server's private key.
Secret Information Leakage Problem
In the scheme of Tu et al. [3], if an adversary accidentally gets the session ephemeral information b, the adversary is able to obtain the secret information h(username||s)·P in the following manner: h(username||s)·P = b^(-1)·V. With this secret information, the adversary can impersonate a legitimate client. However, in the proposed scheme, even if the session ephemeral information is leaked, the adversary is unable to obtain the client's secret information.
Finally, we summarize the results in Table 10; the proposed scheme provides more security features than the schemes in the related works.
Conclusions
In this study, an authentication and key establishment scheme between remote clients and a server is proposed. The proposed scheme has been verified with AVISPA and BAN logic, and the verification results show that the proposed scheme can withstand various attacks. The proposed scheme has been simulated in C++; the comparison shows clearly that the proposed scheme is more efficient than the related works regarding computation cost and communication cost. Besides, the proposed scheme has more security features compared to the related works. Our work is part of the LifeWear project, in which we focus on the safety of data transmission and the identity privacy problem.
"year": 2018,
"sha1": "abb8a3ade61128b2acf542a3dc66082e0027a9f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/11/3695/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abb8a3ade61128b2acf542a3dc66082e0027a9f0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Relationship between volume and outcome for gastroschisis: a systematic review protocol
Background
Gastroschisis is a congenital anomaly that needs surgical management for repositioning intestines into the abdominal cavity and for abdominal closure. Higher hospital or surgeon volume has previously been found to be associated with better clinical outcomes for various procedures, especially high-risk, low-volume ones. Therefore, we aim to examine the relationship between hospital or surgeon volume and outcomes for gastroschisis.
Methods
We will perform a systematic literature search from inception onwards in Medline, Embase, CENTRAL, CINAHL, and Biosis Previews without applying any limitations. In addition, we will search trial registries and relevant conference proceedings. We will include (cluster-)randomized controlled trials (RCTs) and prospective or retrospective cohort studies analyzing the relationship between hospital or surgeon volume and clinical outcomes. The primary outcomes will be survival and mortality. Secondary outcomes will be different measures of morbidity (e.g., severe gastrointestinal complications, gastrointestinal dysfunctions, and sepsis), quality of life, and length of stay. We will systematically assess risk of bias of included studies using RoB 2 for individually or cluster-randomized trials and ROBINS-I for cohort studies, and extract data on the study design, patient characteristics, case-mix adjustments, statistical methods, hospital and surgeon volume, and outcomes into standardized tables. Title and abstract screening, full text screening, critical appraisal, and data extraction of results will be conducted by two reviewers independently. Other data will be extracted by one reviewer and checked for accuracy by a second one. Any disagreements will be resolved by discussion. We will not pool results statistically as we expect included studies to be clinically and methodologically very diverse. We will conduct a systematic synthesis without meta-analysis and use GRADE for assessing the certainty of the evidence.
Discussion
Given the lack of a comprehensive summary of findings on the relationship between hospital or surgeon volume and outcomes for gastroschisis, this systematic review will fill this gap. Results can be used to inform decision makers or clinicians and to adapt medical care.
Systematic review registration
Open Science Framework (DOI: 10.17605/OSF.IO/EX34M; 10.17605/OSF.IO/HGPZ2)
Keywords: Gastroschisis, Congenital anomalies, Hospitals, high-volume, Hospitals, low-volume, Hospital volume, Surgeon volume, Volume-outcome
Background
Gastroschisis is a congenital anomaly with an incidence that increased during the last decades up to 2.7 to 4.9 cases per 10,000 live births based on current population studies [1][2][3][4]. It is characterized by a full-thickness defect in the abdominal wall, usually located to the right of the umbilical cord, which leads to extrusion of intestines or other organs into the amniotic fluid [5,6]. The etiology is not fully understood so far, but maternal (e.g., young maternal age) and environmental teratogenic (e.g., smoking) factors seem to be of importance [7]. Gastroschisis can be categorized into simple and complex cases. Complex cases are associated with intestinal atresia, stenosis, perforation, or volvulus while simple cases are not associated with any of these intestinal pathologies. Therefore, complex cases have an increased mortality and morbidity compared to simple cases [8,9]. Based on the results of a meta-analysis, intra-hospital mortality rate is about 17% for complex and 2% for simple gastroschisis. Results also show that risk of sepsis and short bowel syndrome is increased for complex cases and that length of parenteral nutrition as well as length of hospital stay is prolonged compared to simple cases [8].
Diagnosis of gastroschisis is usually made during prenatal ultrasound from the end of the first trimester [5]. Some fetal surgeons argue for amnioexchange in order to reduce digestive compounds in the amniotic fluid that are involved in inflammatory reactions. However, results of a recent randomized controlled trial indicate that amnioexchange should not be used for care in fetuses with simple gastroschisis [10]. Nevertheless, saline amnioinfusion might be beneficial for oligohydramnios in fetuses with gastroschisis [10][11][12].
There is no definitive evidence on the optimal mode (cesarean section vs. vaginal [13,14]) and timing of delivery for neonates with gastroschisis [15,16]. After delivery, initial treatment aims at maintaining the physiologic homeostasis with intravenous fluids, respiratory support if required, thermoregulation, and bowel protection [6]. Surgical management follows to reposition intestines into the abdominal cavity and to close the abdominal wall [17,18]. It can be conducted with sutured fascial and/or skin closure (primary closure) or with placement of a silo followed by a delayed closure (staged closure) or by using the umbilical cord to cover the defect (sutureless closure) [17][18][19]. The staged closure is recommended if the neonate is unstable immediately after birth, if the reduction is likely to cause an abdominal compartment syndrome, or if herniated loops are very edematous, tightly matted together, and covered by a thick peel [6,18]. Postoperative management includes adequate sedation and analgesia, parenteral nutrition, and mechanical ventilation if needed. The establishment of enteral nutrition depends on the initiation of gastrointestinal functions of the neonate. It is prolonged for complex cases compared to simple cases [6,17].
Management of gastroschisis is not broadly standardized across institutions, leading to variability in care between different centers [20]. However, different initiatives have started to develop and introduce standardized protocols and pathways for the management and care of gastroschisis [21][22][23]. Systematic reviews on various other surgical procedures indicate a positive relationship between hospital as well as surgeon volume and clinical outcomes [24][25][26][27]. This relationship seems to be stronger for high-risk, low-volume procedures [28][29][30][31]. Given the characteristics of gastroschisis (long length of stay; need for hospital-based services; multidisciplinary care teams consisting of obstetricians, neonatologists, and pediatric surgeons) and the insights on the positive relationship between hospital volume and outcomes for other indications [29], it is plausible that such a relationship might also exist for gastroschisis. Taylor and Shew examined the effects of hospital and other health care system factors on outcomes [32]. In their non-systematic review, they conclude that "the majority of the evidence points to an improvement in gastroschisis outcomes when infants are born at, or treated at, higher volume centers and higher level NICUs" [32]. Regarding hospital volume, they refer to results of four primary studies [33][34][35][36]. Also, sound knowledge about the relationship between surgeon volume and outcomes for gastroschisis is important and can lead to methodological refinement of clinical studies on different surgical procedures. Without decent consideration of learning effects, trials might indicate better outcomes for established procedures only due to their longer existence and not due to the procedures themselves [37]. Moreover, in general, only few multicenter trials report on provider effects due to variation in expertise, which might cause misleading conclusions if low-volume and high-volume providers are included in the same trial [38].
To the best of our knowledge, there is no systematic review that analyzes the relationship between hospital or surgeon volume and outcomes for gastroschisis. Hence, it seems reasonable to conduct a systematic review on this issue. It is suggested that outcomes of neonates suffering from gastroschisis who are operated on in a high-volume hospital or by a high-volume surgeon are favorable compared to outcomes of neonates who are operated on in lower-volume hospitals or by lower-volume surgeons. The aim of our systematic review is to examine the available literature on the relationship between hospital as well as surgeon volume and outcomes for gastroschisis.
Methods/design
This protocol is reported in accordance with the reporting guidance provided in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement (see checklist in additional file 1) [39]. The present protocol has been registered within the Open Science Framework platform (registration numbers https://doi.org/10.17605/OSF.IO/EX34M; https://doi.org/10.17605/OSF.IO/HGPZ2). Planned methods will be in line with those reported in our systematic review on the relationship between volume and outcomes for congenital diaphragmatic hernia [40,41].
Literature search strategy
We will perform a systematic literature search to identify all published studies on the relationship between hospital or surgeon volume and clinical outcomes for gastroschisis. Medline (via PubMed), Embase (via Ovid), CENTRAL (via Cochrane Library), CINAHL (via EBSCO), and Biosis Previews (via Ovid) will be searched from inception until the day of search. We will use a combination of controlled vocabulary terms (e.g., Medical Subject Headings (MeSH)) and free text words (a draft of the search strategy for Medline can be found in additional file 2). No language restrictions or other limits will be applied. Additionally, we will search trial registries (clinicaltrials.gov and the International Clinical Trials Registry Platform (ICTRP)). Reference lists of relevant articles will be inspected to identify additional articles that could have been missed by the search strategy. Additionally, we will screen individual conference proceedings (see additional file 3 for a list of conferences). Furthermore, we will use Google Scholar to identify grey literature. We will contact authors for detailed information in case of perceived relevance of abstracts. The search results will be uploaded and managed using Endnote.
Eligibility criteria
The following inclusion criteria will be applied to each publication:
- Patients: We will include studies involving newborns with gastroschisis. We will not use a specific definition for gastroschisis but report the definition used in the corresponding studies.
- Exposure/control: Volume (i.e., hospital volume or surgeon volume) is the number of cases treated or surgeries conducted by a hospital or by a surgeon in a particular period of time. We will include studies if volume is assessed as a categorical variable or a continuous variable and if at least two different hospitals or surgeons are analyzed.
- Outcomes: We will include studies if at least one of the outcomes listed in the section "Outcomes and prioritization" is analyzed.
- Study design: We will include (cluster-)randomized controlled trials (RCTs) and prospective or retrospective cohort studies.
- Language: We will include studies written in English or German.
Study selection
All titles and abstracts of articles identified through systematic literature search will be screened independently by two members of the research team. The full texts of potentially eligible articles will be obtained, and the eligibility of the full texts against the review inclusion criteria will be assessed by two reviewers independently. Any disagreements will be resolved by discussion. The study selection will be documented in Endnote.
Data collection
For each included publication, the following characteristics will be extracted: year of publication, country, study design and methodology, data source, study period, definition of gastroschisis, number of patients, number of hospitals and/or surgeons, patient characteristics, case-mix adjustments, statistical methods, volume categories for hospitals, volume categories for surgeons, analyzed outcomes, results regarding these outcomes, and the funding source as well as authors' reported conflicts of interest. With regards to wording, we use the term registry also for administrative databases. All data will be extracted into structured summary tables using Microsoft Word. We already piloted and used these tables for our systematic review on congenital diaphragmatic hernia [40,41]. Results will be extracted independently by two reviewers. Other data will be extracted by one reviewer and checked for accuracy by a second reviewer who will read the paper in detail and ensure that no relevant information was missed. Any disagreements will be discussed until consensus is reached. Study results will be recorded separately for each unit (surgeon or hospital) and outcome. If study authors present adjusted and unadjusted results, we will focus our synthesis on adjusted results. Nevertheless, we will extract unadjusted results as well. In case volume is classified in categories, we will report hazard ratios for time-to-event analyses, odds ratios or risk ratios for dichotomous outcomes, and mean differences for continuous outcomes. We will provide the measures with 95% confidence levels if reported by authors or calculable given the available data. Moreover, we will recalculate effect measures (e.g., odds ratio, hazard ratio, risk ratio) so that higher volume is compared to lower volume and not vice versa. We will calculate effect measures if results can be calculated from information in the text but effect measures are not presented. In case volume is treated as a continuous variable, we will present results of the analyses conducted within primary studies. We will contact study authors for clarification in any case of uncertainty regarding data collection and critical appraisal.
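As an illustration of the planned recalculation of effect measures so that higher volume is compared to lower volume: inverting a reported ratio also swaps the roles of its confidence limits. The sketch below uses a hypothetical study result, not data from any included study.

```python
def invert_ratio(estimate: float, ci_low: float, ci_high: float):
    """Re-express a ratio (odds/risk/hazard) reported as low- vs high-volume
    so that higher volume is compared to lower volume: the point estimate
    is inverted and the confidence limits swap roles."""
    return 1 / estimate, 1 / ci_high, 1 / ci_low

# Hypothetical study reporting OR = 1.60 (95% CI 1.10-2.33) for mortality
# in low- vs high-volume hospitals
or_hv, lo, hi = invert_ratio(1.60, 1.10, 2.33)
print(f"high vs low volume: OR = {or_hv:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> OR = 0.62 (95% CI 0.43-0.91)
```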
Outcomes and prioritization
The primary outcomes that will be analyzed in our systematic review are survival and mortality (surgery-related; up to discharge; long-term, e.g., 2-year or 5-year) given that gastroschisis is a life-threatening disease. Secondary outcomes are sepsis, growth (i.e., weight, length, and head circumference), number of operations, severe gastrointestinal complications (i.e., intestinal perforation; any intestinal resection, regardless of amount of bowel removed or the indication for the resection; mechanical intestinal obstruction resulting in a repeat laparotomy; abdominal compartment syndrome; enterocolitis), time on parenteral nutrition, liver disease (e.g., persistent conjugated hyperbilirubinemia (> 50 μmol/L) for ≥ 2 weeks with no known other underlying liver disease), and quality of life for the child as these outcomes have been included into a gastroschisis core outcome set in addition to death [42]. Moreover, we include length of stay, time on ventilation, amount of bowel resection in case of complex gastroschisis, gastrointestinal dysfunctions, and measures of neurodevelopment and cognition as they are also reported as being important outcomes [43][44][45][46].
Risk of bias in individual studies
So far, there is no consensus on which tool to use for quality appraisal of studies on the relationship between volume and outcomes when conducting a systematic review [30]. These studies are almost exclusively based on observational data. We will use the tool for assessing risk of bias in non-randomized studies of interventions (ROBINS-I), which was recently developed by members of Cochrane methods groups [47], for assessing the risk of bias of cohort studies. We already used ROBINS-I successfully when conducting our systematic review on the relationship between volume and outcomes for surgery on CDH [40,41]. We will assess the risk of bias of adjusted results, if available. We will assess the effect of starting and adhering to the intervention and use a cluster-randomized trial as the target trial. We will use the Cochrane risk-of-bias tool 2.0 (RoB 2) if any individually randomized RCTs are identified [48]. We will use RoB 2, including the special issues in assessing risk of bias in cluster-randomized trials mentioned in the Cochrane Handbook, if any cluster-RCTs are identified [49]. Methodological quality of the eligible studies will be assessed independently by two reviewers. Any disagreements will be resolved by discussion.
Data synthesis
We expect the included studies to be clinically and methodologically diverse, e.g., including neonates with diverse illness severities (simple or complex gastroschisis) and applying different types of operative repair [17] as well as using different cut-off values for high and low volume [30,31]. Therefore, we will provide a systematic synthesis without meta-analysis to summarize and explain the findings of the included studies. We will report our planned synthesis according to the Synthesis Without Meta-analysis (SWiM) guideline [50]. Due to the different prognosis for diverse illness severities [8,9] and the different use of the types of operative repair [17][18][19], we will group studies according to illness severity (simple or complex gastroschisis) and types of operative repair (primary closure, staged closure, sutureless closure), if possible. We will present findings of included studies in tables, and we will structure studies in the tables by risk of bias of the systematic review's primary outcomes and study size in case of a similar risk of bias. Also, we will use these criteria to prioritize results for summary and synthesis. We will focus our synthesis on adjusted results and consider point estimates and 95% confidence intervals for conclusions. We will investigate whether results for diverse illness severities and types of different repair differ if possible. The certainty of the evidence will be assessed by using GRADE. We will synthesize findings based on outcomes separately for each unit (surgeon or hospital) and consider risk of bias, imprecision, indirectness, inconsistency, and publication bias as well as the magnitude of treatment (or exposure) effect, the presence of a dose-response gradient, and plausible residual confounding as recommended by GRADE [51,52]. We are aware that the risk of publication bias is particularly high for studies based on automatically collected observational data (e.g., in electronic medical records or registries) [53]. However, to our knowledge, there is no tool to assess presence and extent of publication bias when results are not pooled across studies. Therefore, we will discuss potential impact of publication bias narratively.
The proposed systematic review will be reported in accordance with the reporting guidance provided in the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement [54]. Any amendments made to this protocol when conducting the study will be outlined and reported in the final manuscript.
Discussion
The aim of this review is to evaluate the relationship between hospital or surgeon volume and outcomes for gastroschisis. So far, there is no up-to-date analysis that synthesizes results from different studies on the volume-outcome relationship systematically and comprehensively. Therefore, it is important to evaluate this relationship so that insights can be used to inform decision makers or clinicians and to adapt medical care.
However, based on previous research, we do not expect to find any relevant (cluster-)RCT, so our systematic review will rely on findings from cohort studies. This might limit the certainty of our conclusions. As we restrict eligible studies to English and German documents, this might introduce language bias. Also, based on experience from previous work, we expect it sometimes to be difficult to assess whether volume in the primary studies refers to the number of cases treated or to the number of surgeries performed by a hospital or by a surgeon. We plan to disseminate the results of our systematic review through publication in a peer-reviewed journal.
"year": 2020,
"sha1": "6f565e2013d99c5b64671435803f45b0855ced47",
"oa_license": "CCBY",
"oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-020-01462-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e02f89d7bd834d007aff2f1de1615158e15370a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Radiocarbon-based source apportionment of elemental carbon aerosols at two South Asian receptor observatories over a full annual cycle
Black carbon (BC) aerosols impact climate and air quality. Since BC from fossil versus biomass combustion have different optical properties and different abilities to penetrate the lungs, it is important to better understand their relative contributions in strongly affected regions such as South Asia. This study reports the first year-round 14C-based source apportionment of elemental carbon (EC), the mass-based correspondent to BC, using as regional receptor sites the international Maldives Climate Observatory in Hanimaadhoo (MCOH) and the mountaintop observatory of the Indian Institute of Tropical Meteorology in Sinhagad, India (SINH). For the highly-polluted winter season (December–March), the fractional contribution to EC from biomass burning (fbio) was 53 ± 5% (n = 6) at MCOH and 56 ± 3% at SINH (n = 5). The fbio for the non-winter remainder was 53 ± 11% (n = 6) at MCOH and 48 ± 8% (n = 7) at SINH. This observation-based constraint on near-equal contributions from biomass burning and fossil fuel combustion at both sites compare with predictions from eight technology-based emission inventory (EI) models for India of (fbio)EI spanning 55–88%, suggesting that most current EI for Indian BC systematically under predict the relative contribution of fossil fuel combustion. A continued iterative testing of bottom-up EI with top-down observational source constraints has the potential to lead to reduced uncertainties regarding EC sources and emissions to the benefit of both models of climate and air quality as well as guide efficient policies to mitigate emissions.
Introduction
Black carbon (BC) aerosols have multiple negative impacts, including on human respiratory health (e.g., Janssen et al 2012) and on climate (e.g., IPCC 2013). The large uncertainties in BC emissions, both with respect to absolute fluxes and the relative contribution from fossil versus biomass combustion sources, complicate our ability both to accurately understand and model the multiple BC climate effects, and to efficiently mitigate emissions to reduce the health impact. Hence, a major challenge with respect to both climate and air quality aspects of BC and other aerosols is to assess and reduce the large uncertainties in existing BC emission inventories (EI) (e.g., Zhao et al 2011, Bond et al 2013). Comparisons between predictions based on bottom-up technology-based EI and estimates based on atmospheric observations suggest that EI-driven models underestimate the BC climate effects by a factor of 2-3 (Andreae and Ramanathan 2013, Bond et al 2013, Cohen and Wang 2014). Further, the relative contribution of fossil versus biomass sources to BC suggested by bottom-up EI appears to systematically under-predict the fossil contribution relative to top-down source apportionment based on the source-diagnostic 14C composition of BC aerosols in the actual atmosphere of both South and East Asia (e.g., Gustafsson et al 2009, Chen et al 2013). However, such observation-based source constraints have to this point been limited to shorter campaigns (weeks to a month) and there is no single year-round 14C-based assessment of BC sources for Asia or anywhere in the world.
Natural abundance radiocarbon (Δ14C) analysis is a powerful method for quantitatively differentiating between fossil and biomass sources of carbonaceous aerosols in the actual atmosphere. Such information is important for diagnosing and reducing the current large uncertainties in EI of OC and BC aerosols. The present study provides, for the first time, measurements over a full-year cycle in Asia of 14C of elemental carbon (EC), the common thermal-optical mass-based correspondent to optical BC. Aerosol samples were collected at two well-established regional receptor sites in South Asia: Sinhagad (SINH), on a mountaintop in Western India, and the international Maldives Climate Observatory on the island of Hanimaadhoo (MCOH).
Methods and materials
2.1. Sampling locations and approach
Sampling was conducted at two regional receptor sites in South Asia (figure 1). The MCOH (latitude 6.78°N, longitude 73.18°E, 15 masl) is located at the Northern tip of Hanimaadhoo, a small island in the Republic of the Maldives in the Indian Ocean. The other site, Sinhagad (SINH, 18°21′N and 73°45′E, 1450 masl), is a rural high-altitude site on a mountaintop in the Western Ghat mountain ranges in Western India. Both MCOH (e.g., Corrigan et al 2006, Ramanathan et al 2007, Granat et al 2010, Sheesley et al 2012, Bosch et al 2014) and SINH (e.g., Momin et al 2005, Gustafsson et al 2009, Kirillova et al 2013, Budhavant et al 2014) are frequently used for studies of S. Asian aerosols. Samples were collected near-continuously at both sites during fifteen months in 2008-2009 (83% coverage at SINH and >99% coverage at MCOH), comprising two dry winter seasons, a monsoon season and the transition periods. The large mass of aerosol EC required to meet accelerator mass spectrometry detection limits for microscale 14C measurements was obtained using high-volume total suspended particle (TSP) samplers operated at 14-19 m3 hr−1. The samples were collected on 140 mm quartz fibre filters (Tissuquartz filters from Pall Gelman) in custom-built filter holders as described earlier (e.g., Gustafsson et al 2009, Sheesley et al 2012, Kirillova et al 2013). The TSP approach (which collects all aerosols smaller than approximately 30 μm in aerodynamic radius) was used in this study as it was desired to assess the sources of the full population of aerosol EC, as opposed to using a finer cut-off such as PM2.5 selecting mainly for respiratory particles. The sampling interval at MCOH was ∼one week during the non-monsoon periods and ∼two weeks during the monsoon.
Carbon aerosol mass concentration and isotope analyses
OC and EC concentrations and radiocarbon composition were measured using previously established methods. Quantification of OC and EC used a thermal-optical transmission analyzer (Sunset Laboratory, Tigard, OR, USA) following the National Institute for Occupational Safety and Health 5040 method (Birch and Cary 1996). The average concentrations of the field blanks were taken into account in the calculation of atmospheric concentrations (SINH, OC = 0.063 μg m⁻³; MCOH, OC = 0.075 μg m⁻³ and EC = 0.001 μg m⁻³). Triplicate analyses of laboratory standards and field reference material showed that the analytical uncertainties were less than 5%.
The isotope composition of other carbon aerosol fractions has earlier been measured and reported for this campaign, including for total OC and soot carbon (Sheesley et al 2012) as well as for water-soluble organic carbon (WSOC; Kirillova et al 2013). The current study reports on the important EC fraction, which was isolated for offline ¹⁴C analysis by separation and cryogenic isolation of the CO₂ evolved from the EC peak, as described in detail by Chen et al (2013). The subsequent ¹⁴C analysis was then conducted collaboratively with the US National Ocean Sciences Accelerator Mass Spectrometry Facility in Woods Hole, MA, USA, as described earlier (e.g., Zencak et al 2007a, 2007b, Gustafsson et al 2009, Chen et al 2013). The radiocarbon data are reported as fraction modern (f_m), which is converted to the Δ¹⁴C scale (Zencak et al 2007b). By constraining South Asia specific source end-member values for biomass combustion, Δ¹⁴C_biomass = +199‰ (Gustafsson et al 2009), and for fossil fuel combustion, Δ¹⁴C_fossil = −1000‰, the fraction biomass (f_bio) of the EC may be established directly from the Δ¹⁴C signature of the sample using the following mass-balance relation:

Δ¹⁴C_sample = f_bio · Δ¹⁴C_biomass + (1 − f_bio) · Δ¹⁴C_fossil.   (1)

A recent community inter-comparison of ¹⁴C measurements on aerosol samples demonstrated comparable results for aerosol total carbon (Szidat et al 2013). There is a need for future such comparisons of different techniques and operational definitions for isolating the EC fraction for ¹⁴C measurements. While there certainly are some uncertainties, the current top-down ¹⁴C-based EC source apportionment study is based on the same commonly employed thermal-optical transmission method that is also used to determine the EC emission factors for the different combustion systems in the EC EI (e.g., Bond et al 2004). Hence, since the same operational definition of EC is used in both the bottom-up EI and the present top-down source apportionment, a direct comparison between the two is possible. To test the sensitivity of the obtained ¹⁴C-based source apportionment results to a putative exchange, during isolation, between method-induced pyrolysis products of WSOC and ambient EC, a sensitivity analysis based on instrument-generated pyrogenic C (PyrC) and our earlier reported ¹⁴C-WSOC data was performed (supplementary information text S1 and table S2). The results suggest at most a moderate influence of such a hypothetical instrument-methodological process, well within the existing variability of the ¹⁴C-EC data.
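To make the mixing model concrete, equation (1) can be inverted for f_bio in a few lines. The sketch below (Python; the function name and the example Δ¹⁴C value are illustrative, not from the paper) uses the two South Asian end members quoted above.

```python
# Minimal sketch of the isotopic mass balance in equation (1).
D14C_BIOMASS = 199.0    # per mil, S. Asian biomass-combustion end member
D14C_FOSSIL = -1000.0   # per mil, radiocarbon-dead fossil-fuel end member

def fraction_biomass(d14c_sample):
    """Fraction of EC from biomass combustion, given the sample Delta14C (per mil)."""
    return (d14c_sample - D14C_FOSSIL) / (D14C_BIOMASS - D14C_FOSSIL)

print(round(fraction_biomass(-390.0), 2))  # a Delta14C of -390 per mil gives f_bio ~ 0.51
```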
Seasonal variations of EC and OC
The South Asian climate is governed by the monsoon system, with rainy summers and dry winter periods, plus two transitional periods, the pre- and post-monsoon phases. The monsoon period is characterized by southerly winds and, over land, by an elevated atmospheric boundary layer. In contrast, the dry period has, on average, northerly winds and a shallower boundary layer. The onset of the different seasons depends on the passing of the Intertropical Convergence Zone and is thus different for the two presently investigated sites, Sinhagad (SINH) and the MCOH. A correlation between OC and EC concentrations may indicate a similar source and geographic origin of carbonaceous particles. Significant correlations were observed between OC and EC both at MCOH (Pearson's correlation coefficient, r² = 0.79, p < 0.0001 for 39 samples) and at SINH (r² = 0.34, p < 0.0001 for 55 samples). These co-varying patterns suggested that the ambient concentration levels of carbonaceous species were controlled largely by processes such as primary source emissions and atmospheric dispersion rather than by secondary OC formation. This is consistent with findings by Kirillova et al (2013) for OC, and especially for WSOC, based on dual δ¹³C and Δ¹⁴C data for this same campaign. When split seasonally, OC and EC presented somewhat different correlations. At MCOH, they were highly correlated in all seasons, with r² values ranging from 0.57 (p < 0.005) to 0.59 (p < 0.0001), while at SINH they were strongly correlated in the dry winter season of 2009, with r² = 0.83 (p < 0.0001), but not distinctly correlated in the monsoon and dry season of 2008, with r² values ranging from only 0.18 to 0.20. The lower correlation during the SINH summer suggests contributions from different sources for OC and EC (e.g., biogenic secondary organic aerosol (SOA) contributing to OC), effects of atmospheric processing on (primarily) OC, or differential contributions from long-range versus regional emissions during the wet season.
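For reference, the quoted correlation statistics can be reproduced from paired OC and EC concentration series with a few lines of SciPy (a minimal sketch; the variable names are hypothetical, and r from pearsonr is squared to match the r² convention used here):

```python
import numpy as np
from scipy.stats import pearsonr

def oc_ec_correlation(oc, ec):
    """Return (r^2, p) for the OC-EC relationship."""
    r, p = pearsonr(np.asarray(oc, float), np.asarray(ec, float))
    return r**2, p

# e.g. r2, p = oc_ec_correlation(oc_ug_m3, ec_ug_m3)  # 39 MCOH samples gave r2 = 0.79
```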
OC/EC ratios
The mass ratio of OC to EC (OC/EC) reflects multiple processes in the atmosphere: (1) the OC/EC ratio is typically higher for biomass combustion than for fossil sources (e.g., Ram and Sarin 2010); (2) the OC/EC ratio is elevated by (mainly) biogenic SOA contributions (e.g., Saarikoski et al 2008); (3) the OC/EC ratio is affected by atmospheric processing (e.g., aging) of organic chemicals (e.g., Kroll et al 2011); and (4) the atmospheric lifetime of OC is shorter owing to higher chemical reactivity and a greater tendency for washout during rain events. Here, the OC/EC ratios ranged from 0.9 to 26, with an average of 8.0 ± 5.4, at SINH, and from 1.1 to 42, with an average of 7.4 ± 10, at MCOH (figure 2). The highest values were observed during the monsoon season at both sites (26.3 at SINH and 38.3 at MCOH, with seasonal means of 13.7 and 23.2, respectively), whereas the lowest values were found during the dry winter period (0.88 at SINH and 1.13 at MCOH, with seasonal means of 6.29 and 3.51). These distinct seasonal trends suggest comparably larger contributions from biogenic SOA or pollen to OC during the wet monsoon period, which is also expected from the warmer and wetter weather conditions during this phase, which favour biological activity (e.g., Genberg et al 2011).
Year-round source apportionment of EC aerosols
The ¹⁴C/¹²C characteristics of carbonaceous aerosol samples are a direct indication of the relative contributions of biomass (f_bio) and fossil fuel (f_fossil) combustion sources (equation (1)). The studied receptor sites were each influenced by seasonally varying air masses with different geographical origins. Nevertheless, the f_bio varied over the 16 months of observations over a very similar range for both SINH (36-64%) and MCOH (31-59%) (figures 2(B) and (D) and SI table 1). At SINH, the total dry season (winter) average f_bio for EC was 56 ± 3% (n = 5; table 1). The remainder of the year (summer monsoon and transition periods) exhibited a mean f_bio for SINH EC of 48 ± 8% (n = 7; table 1). The winter season aerosol samples at SINH were influenced by air masses from East/Northeast and Central India (detailed back-trajectory cluster analyses are shown in Sheesley et al 2012 and Kirillova et al 2013). At these times, the EC may be influenced by high emissions of aerosols from biomass burning, as suggested by the MODIS active fire count data (SI figure 1), which show higher incidences of fires over India and Bangladesh during the 2009 dry months. This may be due to the burning of agricultural crop residues, primarily from wheat and rice. Large-scale emissions from paddy-residue burning do, however, fall slightly outside the winter period, during October-November, and the same holds for wheat-residue burning in April-May. However, the timing of these activities is not absolute, and they are ubiquitous features of the Indo-Gangetic Plain (IGP) (Badarinath et al 2009, Rajput et al 2014). The estimated emission budgets from agricultural-waste burning on an IGP scale contribute a predicted ∼22% of primary OC (252 ± 34 Gg y⁻¹) and 21% of EC (59 ± 2 Gg y⁻¹) (Rajput et al 2014). On the other hand, aerosols in the monsoon were associated with air masses primarily from the southwest, i.e., from the Northern Indian Ocean and the Arabian Sea. The overall annual average value of f_bio at SINH was 51 ± 8% (table 1).
A similar source apportionment of EC between biomass and fossil fuel combustion was observed at MCOH. The f_bio at MCOH was indistinguishable between the winter (53 ± 5%; n = 6; table 1) and non-winter seasons (53 ± 11%; n = 6; table 1). The high f_bio at MCOH may be related to the fact that, during the southwest monsoon season, winds pass over the southeast coast of the African continent before crossing the Indian Ocean and reaching the sampling location at MCOH. On a global scale, the largest open-burning emissions occur in Africa (Bond et al 2013). The major types of biomass burning in Africa include forest and savanna fires (Mkoma et al 2013). Field burning of agricultural residues and forest/wild fires occur in the dry season (July-October) in Southern Africa and Madagascar (Schultz et al 2008, Mkoma et al 2013). Active fire spots (SI figure 1) were observed in MODIS satellite images during the dry season (the wet season for MCOH) in June-October in Southern Africa and Madagascar, from where the air masses travelled more frequently to the sampling site at MCOH than to SINH. Hence, it is likely that biomass burning in SE Africa and Madagascar contributed to the ambient carbonaceous aerosols at MCOH.
During the dry season, air masses originated mainly in and around the Indian subcontinent (figure 1), contributing to the high f_bio(EC) values at both stations. This is consistent with the earlier picture of about equal contributions from fossil and biomass combustion during the high aerosol impact from the highly polluted IGP spreading over great South Asian scales during dry periods (e.g., Lawrence and Lelieveld 2010, Sheesley et al 2012, Kirillova et al 2014). These year-round ¹⁴C-based averages in EC source apportionment suggest similar source attributions, but with a somewhat smaller divergence in the biomass contribution to EC at the two regional sites than was apparent in the shorter 2006 campaign (Gustafsson et al 2009). These new winter results are inseparable from those reported for MCOH for a three-week high-intensity campaign in 2012 (Bosch et al 2014) (table 1).
Understanding the emission sources of carbonaceous aerosols in South Asia
Bottom-up EI of BC, which feed into climate and other atmospheric chemical-transport and health/air quality models, are challenged by large uncertainties related to both activity (tons of fuel burnt) and especially the emission factor (kg BC/ton fuel burnt) for different sources (Zhao et al 2011, Bond et al 2013). These uncertainties, relating to the total amount of BC, also propagate into the estimates of the fractional contributions from different sources, e.g., the fraction biomass (f_bio) estimated for this region (2005). However, the higher fossil contributions found by Wang et al (2014) are in better agreement with the year-round, observationally based, ¹⁴C-constrained source apportionment of the present study, with f_bio = 0.53 ± 0.08 at MCOH and f_bio = 0.51 ± 0.08 at SINH, which in turn are also consistent with previous shorter-term ¹⁴C source-forensics estimates (Gustafsson et al 2009). It should be emphasized, however, that a direct comparison between bottom-up EI data and top-down constraints is also associated with some uncertainties owing to, for instance, the variability of air mass transport. Taken together, the present and first-ever year-round study of ¹⁴C-EC based source apportionment shows that fossil and biomass combustion processes are about equally responsible for the emission of EC to the extensive Atmospheric Brown Cloud phenomenon over South Asia. This highlights the importance of long-term campaigns, which give a more comprehensive picture of the sources of climate-affecting aerosol carbon in this dynamically changing region.
"year": 2015,
"sha1": "d5f02af47628a0d4a1d86fee6d35661ba03fe832",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1748-9326/10/6/064004/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "89987f2e5d0593d4afe0216a985e67171da3e679",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
Adult Astrogenesis and the Etiology of Cortical Neurodegeneration
As more evidence points to a clear role for astrocytes in synaptic processing, synaptogenesis and cognition, continuing research on astrocytic function could lead to strategies for neurodegenerative disease prevention. Reactive astrogliosis results in astrocyte proliferation early in injury and disease states and is considered neuroprotective, indicating a role for astrocytes in disease etiology. This review describes the different types of human cortical astrocytes and the current evidence regarding adult cortical astrogenesis in injury and degenerative disease. A role for disrupted astrogenesis as a cause of cortical degeneration, with a focus on the tauopathies and synucleinopathies, will also be considered.
Introduction
Mature astrocytes in the cerebral cortex are capable of local proliferation under certain conditions, with indications that their progeny may resume normal physiological function over time. 1,2 It is known that injury and degeneration can cause reactive astrogliosis and subsequent astrocyte proliferation, although the exact molecular stimulus of proliferation is currently unknown. 3 The process whereby reactive astrocytes proliferate was originally considered detrimental, as after injury or stroke, glial scar formation inhibited axonal regrowth in the area. 4 However, reactive astrogliosis and astrocyte proliferation are now understood to be neuroprotective, to provide factors that promote cell survival, and, in severe lesions in injury and degeneration, to seal off an area of robust necrosis. 1,5,6 One of the hallmarks of neurodegenerative disease is synapse loss, 7 which occurs early in disease and has long been associated with cognitive decline in cortical dementias, such as Alzheimer's disease (AD), Parkinson's disease dementia, and dementia with Lewy bodies (DLBs). [7][8][9] Protoplasmic astrocytes in the cortical gray matter are closely associated with synapses and synapse monitoring. [10][11][12] Recent evidence points to astrocytes in the cortex as contributors to synaptogenesis in response to neuronal communication 13,14 and as responsible for general control of synapse number. 15 Astrocytes are responsible for a variety of homeostatic functions whose failure can lead to neurodegeneration. Astrocytes supply neurons with glutathione precursors and protect neurons from cell death resulting from reactive oxygen species production. 16 Clearance of toxins from the parenchyma occurs through astrocytes via the glymphatic system. [17][18][19] Astrocytes also remove glutamate through glutamate transporters to avoid excitotoxicity and neuronal cell death. 20 It is no longer disputed that astrocytes express the same transmitters and receptors as neurons, are active contributors to central nervous system communication, 21 and maintain ionic and osmotic homeostasis in the brain. 22 Astrocytes monitor and regulate cerebral blood flow in the cortex. 23,24 This is particularly significant as local astrogenesis occurs in the cortex perivascularly, 2 and vascular defects are commonly associated with cortical dementia. 25 Astrocytes also remove and clear amyloid-β from the extracellular space before the accumulation of the protein into amyloid plaques, the pathological hallmark of AD. 17 Indeed, because of the ubiquitous number of functions involved, there is increasing evidence that many neurodegenerative diseases are astrocytic in nature, [26][27][28][29] and it is becoming clear that astrocytes are an important avenue for the treatment and prevention of neurodegenerative disease. 30,31 Because of this ability for proliferation, and if cortical dementias have an astrocytic cause, astrogenesis could lead to an understanding of prevention through regeneration, or a disruption of astrogenesis could be involved in disease etiology. Here we review human cortical astrocytes and conditions that lead to cortical astrogenesis, with a special focus on neurodegenerative disease.

Varicose projection astrocytes extend long processes through layers III, IV, and V, making numerous cellular contacts and blood vessel projections. 37 They are completely unique to the hominid and have not been seen in other primates or in rodents. 32,37 The function of these cells, much like that of interlaminar astrocytes, is currently uncertain.
However, when human glial progenitor cells were grafted into the forebrains of mice, they differentiated into interlaminar and varicose projection astrocytes, and the grafted mice showed increased learning and memory capabilities compared to mice with only murine astrocytes. This was evidenced behaviorally, as well as at the cellular level, through neuronal long-term potentiation studies. 47 Although the tauopathies, synucleinopathies, and other cortical dementias appear to be more prevalent in humans than in other mammals, 48 interlaminar and varicose projection astrocytes have not been studied in degenerative disease and are interesting avenues for future studies. Because of the uniquely primate nature of interlaminar and varicose projection astrocytes, the experimental studies in rodent models described here specifically consider cortical protoplasmic astrocytes, which are the only subtype of astrocyte currently described in the rodent cortical gray matter. In human and primate experiments, protoplasmic astrocytes are also the predominant cell discussed, unless otherwise indicated.
Cortical Reactive Astrogliosis and Astrogenesis
Protoplasmic astrocytes. Reactive astrogliosis is typically defined as the change in morphology of astrocytes and subsequent upregulation of proteins involved in neuroprotection near injury, ischemia, or neurodegeneration. 49 However, owing to the nature of the cortex, astrocytes are consistently responding to complex changes and subjected to stimuli that can produce astrogliosis, so even subtle perturbations in the nervous system are likely to produce a local reaction. 50 This can happen along a graded continuum of responses, and it is unknown what series of signals produces local astrogenesis. 51 Reactive astrogliosis has been studied mostly after injury and lesion, and during these insults, astrocyte proliferation is observed, a process traditionally termed astrocytosis. 2 Reactive astrogliosis in injury and disease states was originally considered detrimental because in severe cases a GFAP+ glial scar forms and sends signals that inhibit neurite outgrowth; 52 however, it is now established to play a neuroprotective role. 6 Proliferating astrocytes appear to be crucial to recovery after injury, as transgenic experiments that eliminated astrocyte proliferation demonstrated a larger lesion, reduced scar formation, and persistent blood-brain barrier dysfunction. 53 In vitro, reactive astrocytes have formed neurospheres and shown stem cell potential. 1,54 In vivo, the cortex appears to foster a gliogenic environment, as cells with immature neuronal precursor markers ectopically grafted into the cortex reverted to oligodendrocyte or astrocyte phenotypes. 55 Increased expression of GFAP is typically used as a marker of cortical reactive astrogliosis, but this is not indicative of proliferation, but rather of an upregulation of the protein in response to insult. 49 In milder insults and distal to the injury site, astrocytes can exhibit reactivity that has been termed "isomorphic" and are predominantly devoted to promoting neurite growth, synaptogenesis, and neuronal modeling. 49 "Anisomorphic" astrocyte morphology, with overlapping domains and proliferation, appears to occur in more severe injury, 49 although proliferation occurs along a continuum, with graded increases from mild to severe, and the exact molecular stimulus is currently unknown. 50 Genetic lineage tracing demonstrated that reactive protoplasmic astrocytes in the cortex began to proliferate within 3-5 days of injury, with half of them reentering the cell cycle up to a week after injury. Known progenitor cell markers, such as nestin, DSD1 proteoglycan, and CD15, are upregulated, and protoplasmic astrocytes isolated from injured tissue also produce neurospheres in vitro. 1,56,57 After controlled cortical impact, it was shown that astrocytes proliferate at all three stages after injury, with 70% of proliferating cells staining for GFAP. 58 In a hypoxic-ischemic stroke model of the cortex, GFAP colocalized with bromodeoxyuridine (BrdU), demonstrating astrocyte proliferation in this experimental model as well. 59 Astrocytes have been shown to increase Notch-1 expression to induce proliferation, and blocking Notch-1 reduced proliferation. 60 GFAP-CreER-Notch-1-cKO mice also exhibited a defect in adult astrocyte proliferation after ischemic injury. 61

Proliferation vs. astrogenesis. Although certain cortical reactive astrocytes can proliferate, it remains to be seen whether the progeny can differentiate into a nonreactive physiological state.
Recently, it was shown through genetic fate-mapping studies and live imaging that 45% of proliferating protoplasmic astrocytes after a stab wound in the mouse cortex had somata in direct contact with blood vessels. Uniquely juxtaposed on blood vessels, these cells proliferate and maintain the bushy morphology of functional astrocytes in response to injury. 2 The cells divided only once, creating two daughter cells after injury, and can function for many weeks after division, indicating astrogenesis after proliferation, even while retaining the bushy reactive morphology. Interestingly, these cells were typically further from the injury site. Cells adjacent to the stab lesion did not appear to proliferate. Instead, they became reactive and exhibited a polar morphology, extending long processes to the injury site. 2 Proliferating astrocytes also did not seem to contribute to the glial scar, indicating in this study that GFAP+ cells in the scar must be derived from another source (Fig. 1). 2
NG2 cells (synantocytes).
In vitro studies demonstrate that two other cortical cell types, neural/glial antigen 2 (NG2) proteoglycan-expressing cells and blood-vessel-lining pericytes, can differentiate into astrocytes. 62,63 The ability of pericytes to differentiate into astrocytes in vivo, however, has not been studied in the cortex. Additionally, NG2 cells in the cortex were long considered to be a subset of astrocytes because of their morphology and functionality. 64 However, it is now known that they are not a subset of astrocytes, as the mature NG2 cell is functionally different and does not express many typical astrocyte markers. 65 NG2 cells are also the main oligodendrocyte precursor in the cortex, which is distinct from mature cortical astrocyte stem cell function in vivo. 65 They have a long cell cycle of one month in the adult human cortex and consistently regenerate to contribute to the oligodendroglial pool. 66,67 NG2 cells also proliferate after injury and have been shown in severe cases to differentiate into astrocytes in the cortex and contribute to the glial scar and neuroprotection. [68][69][70][71] NG2 cells are the second cell type to proliferate after injury to the brain, 66 after microglia, which are recruited to the injury site by astrocytes. 72 Because recent studies on astrocyte proliferation demonstrated that proliferating astrocytes did not incorporate into the scar in stab wound studies, it is possible that NG2 progenitors differentiate into scar-forming GFAP+ cells and contribute more than previously thought to the scar. Early genetic fate-mapping studies also confirmed that NG2 cells differentiate into astrocytes in the cortex after injury. 73 Immunohistochemical studies showed that, 5-8 days after injury, 20% of GFAP+ cells colocalize with NG2, indicating that NG2 progenitors differentiate into reactive astrocytes under certain conditions. 69 Another study showed that, in the cortex, within a week after injury, a small population of NG2 cells will express vimentin and nestin, two immature astrocyte markers. 71 However, there is some conflicting evidence for their astrocyte lineage, as other studies demonstrated that NG2 cells do not appear to be proliferating into astrocytes. 74 In the spinal cord of amyotrophic lateral sclerosis (ALS) mice, it was determined through fate-mapping studies that NG2+ cells were committed to an oligodendrocyte fate postnatally. 75 However, another study using genetic fate mapping indicated that NG2 cells could become astrocytes in postnatal development. 76 Further studies demonstrated that a subset of NG2 cells can indeed differentiate into protoplasmic astrocytes postnatally; however, by postnatal day 60, no new astrocytes were born from NG2 cells. 77

[Figure 1 caption:] (A) It is unclear to what extent local cortical astrogenesis occurs in the healthy brain. (B) Reactive astrogliosis occurs as an astrocytic response to changes in the extracellular environment; in some cases this can lead to proliferation (C). (D) It is generally believed that protoplasmic astrocytes contribute to the glial scar when reactive in injury, although recent evidence indicates that the GFAP+ cells in the scar may derive from a different source. 2 (E) Recent evidence indicates that reactive astrocytes can retain their physiological function after proliferation, although the molecular process and time course are still unclear. 2
Although most evidence points to astrocyte and NG2 cell proliferation under injury conditions, 78 there is currently no indication that NG2 cells can become mature, functional protoplasmic astrocytes (Fig. 2).
Adult stem cells in germinal layers. While experimental evidence indicates that mature cortical protoplasmic astrocytes can proliferate under certain conditions, one question is whether cells can migrate from germinal niches, such as the ventricular-subventricular zone (V-SVZ) 79,80 or the subgranular zone (SGZ) of the hippocampus, to contribute to the cortical astrocyte population. GFAP+ adult neural stem cells with astrocyte properties in the subventricular zone of mammals have been shown to differentiate into neuronal and astrocytic precursors and migrate to other areas of the brain, most notably the olfactory bulb. [81][82][83][84][85] However, focal ischemia of the striatum adjacent to the V-SVZ produced mainly glial lineages, with 60% of the cells derived from neural stem cells being astrocytes. 86 Similarly, in rodents, GFAP+ astrocytes in the SGZ of the hippocampus can give rise to new functioning neurons and mature astrocytes. 87 In the human brain, up to about 18 months after birth, proliferating cells in the V-SVZ migrate to the prefrontal cortex with an astrocytic fate instead of a neuronal one; this migration then subsides, with astrocytes proliferating locally in the cortex afterward. 88 In piglets, by postnatal day 7, few, if any, proliferating V-SVZ cells colocalized with immature neurons. 89 After cortical injury, nestin+ cells that did not express GFAP were shown to migrate and become new astrocytes ipsilateral to the injury site. 90 Also, under normal conditions, V-SVZ fate-mapped nestin+ cells become astrocytes in the corpus callosum but did not appear to contribute to astrogenesis in the cortex. 91 Interestingly, it was recently shown with tamoxifen-induced nestin-Cre(tm4) lineage tracing that the majority of cells that contribute to a cortical injury site are produced through astrogenesis, with the cells deriving from a V-SVZ lineage. 92 Some cells from the V-SVZ contributed to the glial scar and were also high thrombospondin-producing cells, thrombospondin being a protein released by astrocytes that is known to induce synaptogenesis (Fig. 3). 14 KO mice for thrombospondin 4 showed alterations in the glial scar and increased microvascular hemorrhage. 92 In the hippocampus, it is known that neural precursor cells in the SGZ contribute to the mature astrocyte population in CA1 under normal physiological conditions and that this process is disrupted in injury and disease. 93 However, although of interest because of the known hippocampal degeneration in AD, the evidence is scant for astrocyte proliferation from the SGZ to the cortex. 94 In aging mice, division happened less frequently, and GFAP-expressing cells began to exhibit characteristics of reactive astrogliosis. 95 Immediately after injury to the cortex, astrocyte proliferation occurred locally in the hippocampus without migration to the cortex. 96
Cortical Astrogenesis in Other Conditions
Aged and normal healthy cortex. It appears that the capability for reactivity and proliferation is inherent to astrocytic cell function. During development, astrocytes derive from radial glial cells, but postnatally, in the mouse cortex, they proliferate locally and incorporate into functional units with defined astrocyte regions. 97 Because of the astrocyte's ability to respond to even slight perturbations in the parenchyma, 50 studies on cortical astrogenesis in "normal" healthy adult brains or the aged cortex might reveal the mechanisms of astrocytic proliferation.

[Figure 3 caption fragment:] ... 92 (D) It is also known that they can contribute to adult astrogenesis. However, it is unclear whether the reactive astrocytes they produce can proliferate locally (E), although this proliferation likely occurs if derived locally from a mature astrocyte (F). It is also uncertain whether reactive astrocytes produced by the V-SVZ can become mature protoplasmic astrocytes (G), which may also be able to proliferate (H).
Early studies of aged human brains are somewhat conflicting because gray matter human astrocytes were designated as "fibrous" astrocytes in many cases. "Fibrous astrocytes" currently refers to astrocytes residing in white matter tracts, but before this designation, protoplasmic or interlaminar astrocytes were determined to be "fibrous" in some studies if they were labeled immunohistochemically with GFAP. 98 It is believed that astrocytes undergo a change in morphology with aging, and originally it was thought that astrocyte reactivity increased in aging, along with consequent astrogenesis, whereby an increase was seen in, on average, ∼20% of astrocytes in the cortex of aged brains. 99,100 Furthermore, in the aged rat cortex, a 20%-22% increase in astrocytes and pericytes was shown. 101 However, it must also be remembered that an increase in GFAP expression of cortical protoplasmic astrocytes is not indicative of proliferation 102 but is traditionally a marker of reactive astrogliosis. 6 One study analyzed the brains of several aged controls with no neurodegenerative disease diagnosis and noticed an increase in GFAP expression in the cortex, in both the molecular layer and the cellular layers. 103 As the interlaminar astrocyte is unique to primates in the molecular layer, it is likely that interlaminar astrocytes undergo reactive astrogliosis in response to aging. Additionally, in cellular layers II-VI, increased GFAP expression was associated with a perivascular location and typically appeared in duplicate, indicating possible proliferation in aging. 103 In rats, increased GFAP expression was also noticed in the aging cortex. 104 In the entorhinal cortex of aged mice, an area that degenerates early in AD, a decrease in GFAP expression and astrocyte atrophy was observed. 105 Also, in another human study of female brains aged 65-75, 76-85, and 94-105 years, it was observed that there was no change in neuron or astrocyte numbers. 106 Additionally, other sources indicate no increase in the number of astrocytes in the cortex. 107 In rats, an electron microscopic study concluded that there was an increase in the number of astrocytes in aging animals compared to controls. 108 Nonneuronal cells stained with cresyl violet were also increased in the parietal cortex of aged rats compared to controls. 109 Studies of cancer patients (with cancers of nonnervous system origin) injected with BrdU demonstrated that the new cells formed in the cortex were nonneuronal, with a small subset colocalizing with GFAP (<0.5 cells/mm³), indicating the prevalence of astrogenesis in the noninjured, nondegenerating adult cortex. 110 Neurons, however, did not colocalize with BrdU, indicating that new neurons were not formed in the cortex during the lifespan of the organism and that proliferation was strictly glial. 110 In rhesus monkeys, neuronal cell loss was not observed in the cortex of aged monkeys, and astrocytes exhibited an increase in cellular inclusions. The older monkeys had significant memory impairment compared to the younger monkeys. 111 However, in subsequent studies, it was noticed that only microglia increased in number with aging. 112 Although many studies have indicated that increased GFAP staining correlates with age, indicating an increase in astrocyte reactivity, 113 studies of astrogenesis in the healthy human cortex are few.
Learning, exercise, and environmental enrichment. Environmental enrichment has been known to increase cell division in the adult brain since a study by Altman and Das in the 1960s demonstrated a significant increase in gliogenesis in the brains of rats. 114 The cell type was not determined, and they noticed increased cell division in all areas of the white matter of the coronal radiations. Cell division occurred in the gray matter as well, but this was not statistically studied. 114 The hippocampus is currently the region of the brain with the most evidence for increased astrogenesis under environmental enrichment conditions, where astrocytes from the SGZ proliferated into mature astrocytes in the CA1 region. 93 These cells were shown to be mature and distinct from the GFAP+ progenitor cells in the SGZ from which they arose. 115 Cells in the SGZ that are GFAP+ can also differentiate into neurons, 116 and this is increased under environmental enrichment and learning conditions. 117 Increased GFAP expression and increased astrocyte size and complexity were seen in the dentate gyrus after physical activity and environmental enrichment. 118 There is also evidence for cortical astrogenesis under environmental enrichment: in the motor cortex of mice, a noticeable increase in astrogenesis was observed, without an increase in oligodendrocytes and with no new neurons formed. 119 Additionally, operant conditioning tasks showed that astrogenesis occurred in the prefrontal cortex and that learning maintained cell survival, whereas if learning did not occur, new cells were not maintained. 120 Voluntary exercise also resulted in a threefold increase in astrogenesis compared to normal controls in the medial prefrontal cortex of mouse brains. 121 Although the lineage of the proliferating astrocytes in the prefrontal cortex of the mouse brain has not yet been studied, they appear to derive from local progenitors or from cells originating in the V-SVZ.
Cortical spreading depression. Studies have shown that cortical spreading depression (CSD) can cause the proliferation of astrocytes in cortical regions. CSD is preceded by cortical spreading depolarization, which is associated with migraine, stroke, and epilepsy and results in the spread of a wave of coordinated neuronal firing. 122 CSD results in an increase in the number of dividing cells that coexpress GFAP. [123][124][125][126] Cortical brain slice preparations demonstrated that the cells originated from NG2 cells differentiating into astrocytes. 127 In the entorhinal cortex, a robust increase in cell proliferation in response to CSD remained astrogenic, and no subsequent cortical neurogenesis was observed. 128 CSD shifted the relative frequencies of glial cells from NG2 cells toward astrocytes and microglia. 128 Nestin+ astrocytes were also increased after CSD in the cortex. 97
Cortical Astrocytes and Neurodegeneration
Tauopathies and amyloid-β. Reactive astrogliosis, as described by hypertrophy and subsequent proliferation, is found in chronic neurodegenerative lesions. 6,49,50 In humans, similar to what was seen in aging brains, where an increase in GFAP expression of protoplasmic astrocytes was noticed in cortical layers II-VI after age 70, a much larger increase in GFAP+ cells, greater than fourfold in the cellular layers, was seen in patients diagnosed with AD compared to age-matched controls. 98 Amyloid precursor protein (APP), when cleaved by beta-secretase 1 (BACE-1) and gamma-secretase, produces amyloid-β(1-40) and amyloid-β(1-42), peptides that accumulate in amyloid plaques in neurodegenerative disease. 129 Amyloid-β(1-42) is closely associated with disease 130 and has been shown to preferentially stimulate astrogenesis from human embryonic neural stem cells in vitro, 131 while BACE-1-null mice show diminished astrogenesis in the hippocampus. 132 In vitro, treatment of postnatal primary mouse astrocytes with the amyloid-β(1-42) peptide increased proliferation 133 and was shown to disrupt calcium signaling between astrocytes, which is also diminished in disease progression. 134,135 APP has been shown to stimulate astrogenesis in development as well. 136 Amyloid-β(1-40) and amyloid-β(1-42) are cleared from the extracellular space by astrocytes through the glymphatic system. 17 It was observed that astrocytes surrounding plaques increase the expression of GFAP and vimentin; however, in GFAP and vimentin KO mice, the plaque load was not diminished, but lysosomal and inflammation genes increased in expression. 137 Importantly, researchers observed that proliferative circumferential reactive astrogliosis around amyloid-β plaques correlated with cognitive scores in disease. 138 Synapse loss in the cortex also occurs early in disease and correlates with cognitive decline, [7][8][9] and astrocytes contribute to the regulation of synaptogenesis. 15 A lack of circumferential astrogliosis also correlated with the apolipoprotein E ε4 genotype, a known genetic risk factor for late-onset AD, 138 which is also a risk factor after early-life head injury, another known stimulator of astrogenesis. 139 Astrocytes are the predominant apolipoprotein E-producing cell in the cortex, 140 and the protein appears to be involved in cholesterol transport via lipid rafts, a contributor to synaptogenesis. 141 In the TgCRND8 mouse AD model, it was observed that GFAP+ cells colabeled with BrdU in aged mice, indicating a proliferative response. 142 However, when proliferating cells were studied in another mouse model of AD, the APPswe/PS1dE9 (APPPS1) transgenic mouse, microglia were the main proliferating cell type; GFAP+ reactive astrocytes were not proliferative around the plaques, in contrast to injury, where reactive astrocytes are prevalent and begin to proliferate as the severity of the injury increases. 143 In vitro, neurospheres indicative of stem cell properties have been produced from astrocytes after injury in APPPS1 mice. It has also been shown that 2.7% of cortical proliferating cells were astrocytes, which account for only about 1.1% of the astrocytes in the cortex. 144 Many of the proliferating cells were microglia and NG2 cells. Many nonproliferating astrocytes also produce immature cell markers, such as nestin, DSD1, and tenascin-C, which are upregulated in reactive astrocytes. 144 However, it appears that sonic hedgehog signaling is responsible for astrocyte proliferation from reactive astrocytes. 144
Glial atrophy has been shown in the cortex of the APPPS1 AD mouse. 145 In AD transgenic mice, the hippocampus exhibited extensive reactive astrogliosis, but not the entorhinal cortex, where astrocyte atrophy was observed; this is one of the areas of the brain to exhibit selective early vulnerability in AD. 146 Astrocyte atrophy was also observed in the medial prefrontal cortex of AD transgenics. 147 Additionally, neurofibrillary tangles, formed as a result of hyperphosphorylated tau protein aggregation, are observed in AD neurodegeneration and can occur in cortical tauopathies independent of plaque formation. 148 In other tauopathies, such as frontotemporal dementia, astrocyte apoptosis preceded neuronal apoptosis in disease progression. 149 In a transgenic model of tauopathy, there was an age-related increase in tau accumulation in astrocytes, similar to what is seen in neurons in disease. 150 A reduction of astrocyte glutamate transporter 1 was observed in corticobasal degeneration, and a mouse model expressing tau from the GFAP promoter demonstrated vascular defects and neurofibrillary tangle formation similar to those in disease states. 151 The hyperphosphorylation of tau and its accumulation within astrocyte end-foot processes were also observed to contribute to vascular defects in corticobasal degeneration and progressive supranuclear palsy. 152 In many cases, AD and the other tauopathies can be thought of as cerebrovascular diseases and have been considered as such; it has been estimated that as many as 84% of cases show both morphologies. 153 Finally, early in AD, many genes and proteins involved in cell cycle stimulation are upregulated. 154 This has been considered from a neuronal perspective, with the hypothesis that cell cycle reentry and dysfunction in neurons lead to degeneration. 155,156 However, because of the known proliferative nature of astrocytes, cell cycle biomarkers in astrocytes in disease states provide a future avenue for study.
Synucleinopathies. Synucleinopathies are characterized by the accumulation of the protein α-synuclein (α-syn) in Lewy body inclusions. [157][158][159] α-Syn is abundantly expressed at neuronal synapses [160][161][162] and can be released extracellularly as a possible signaling protein, as evidenced by its binding to postsynaptic proteins and its ability to be transferred to neighboring neurons to form Lewy body inclusions. [163][164][165] Extracellular α-syn has also been shown to assimilate into human cortical astrocytes in vivo and in vitro. [166][167][168] Common synucleinopathies affecting the cortex are multiple system atrophy (MSA), Parkinson's disease dementia, and DLBs. 169 Genes involved in familial Parkinson's disease, such as Pink1, Parkin, DJ-1, and LRRK2, are specifically expressed by astrocytes and shown to produce proteins associated with lipid rafts. 170,171 Pink1, Parkin, DJ-1, and LRRK2 were also shown to be involved in cell cycle regulation. 172 Additionally, human cortical astrocytes in culture treated with α-syn revealed apolipoprotein E redistribution to the cytoplasm and an increase in GFAP+ astrocytes. 167 α-Syn signaling to astrocytes at the synapse was also shown to be increased in the songbird's developing song control system, demonstrating a possible involvement in neuroplasticity. 160 In midbrain regions affected early in Parkinson's disease, reactive astrogliosis in the substantia nigra was similar to that in normal control tissue, whereas in another synucleinopathy, multiple system atrophy, reactive astrogliosis was increased in the substantia nigra. 173 However, in the frontal cortex in both Parkinson's disease and multiple system atrophy, an increase in reactive astrogliosis, as marked by GFAP, vimentin, and heat shock protein-27 immunoreactivity, was observed. 173 Additionally, the astrocyte and microglia marker YKL-40 was significantly reduced in the cerebrospinal fluid of patients with synucleinopathies (PD, MSA, and DLB) compared to tauopathies, where the reduction was significant compared to controls but levels remained higher than in the synucleinopathies. 174 A model of MSA demonstrated that α-syn can induce reactive astrogliosis in the frontal and visual cortex of the human brain via astrocyte proximity to accumulated α-syn inclusions. 175 In other GFAP and vimentin expression studies, it was observed that, unlike in AD, cortical reactive astrogliosis does not correlate with cognitive decline in Parkinson's disease dementia compared to normal controls. 176,177 In vivo, it appears that there is early astrocyte dysfunction in disease progression, as there is an indication that cortical protoplasmic astrocytes become nonreactive and susceptible to α-syn accumulation while recruiting microglia to attack the affected neurons. 178 Selective expression of A53T mutant α-syn in astrocytes also resulted in aggressive disease progression in mice. 179 Recently, in human neuropathological studies, γ-synuclein (γ-syn), another member of the synuclein family, was shown to be expressed in cellular inclusions along with α-syn. 180 γ-Syn is upregulated in glioblastomas 181 and is known to be involved in cell cycle regulation. 182 An increase in the expression of γ-syn was also seen along with α-syn in the cerebrospinal fluid of patients diagnosed with synucleinopathy and vascular disease, 183 and a mouse model overexpressing γ-syn demonstrated widespread neuropathy. 184
Conclusion
Although the general notion is that astrocytes achieve a quiescent mature cell fate in adulthood, the physiology of astrocytes in the brain during neuronal communication and neuronal dysfunction allows them to be dynamic in their response, whereby they undergo reactive astrogliosis and proliferation to protect the neuronal environment. The exact types of perturbations at the molecular level that stimulate proliferation are currently unknown. It is also unknown to what extent, over time, reactive astrocytes can resume normal function after proliferation.
Additionally, fate-mapping studies have provided clearer observations on the lineage of cells proliferating in the cortex in disease and injury states, but the evidence is still murky. The contribution and function of cells arising from local proliferation are yet to be determined. Characterizing the function of reactive astrocytes in early disease states, along with biomarkers for the manipulation of astrocytic mechanisms in disease, will also be useful. In particular, neurogenesis has been studied with respect to injury and disease cause and prevention, owing to upregulated cell cycle markers and the prospect of cell replacement; however, because of the inherent proliferative abilities of adult astrocytes compared to neurons, as well as their neuroprotective functions, studies on astrogenesis could provide insights into disease onset.
Because much of the early research in neurodegenerative disease has focused on the response of astrocytes to neuronal degeneration, there are many avenues to consider for the study of astrocyte involvement in the cause of degenerative disease. For instance, it is now becoming clear that astrocyte dysfunction can lead to synaptic loss, neurodegeneration, and protein accumulation in the form of Lewy bodies, neurofibrillary tangles, and amyloid plaques. Therefore, an exploration of astrogenesis, in the normal aging brain and in neurodegenerative disease, could provide fruitful studies on the cause and prevention of degenerative diseases of the brain.
"year": 2015,
"sha1": "94c57ec821333bdc2a1bfc92a9f4677e4975e497",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4137/jen.s25520",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94c57ec821333bdc2a1bfc92a9f4677e4975e497",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Properties of Fast and Slow Bars Classified by Epicyclic Frequency Curves from Photometry of Barred Galaxies
We test the idea that bar pattern speeds decrease with time owing to angular momentum exchange with a dark matter halo. If this process actually occurs, then the locations of the corotation resonance and other resonances should generally increase with time. We therefore derive the angular velocity $\Omega$ and epicyclic frequency $\kappa$ as functions of galactocentric radius for 85 barred galaxies using photometric data. Mass maps are constructed by assuming a dynamical mass-to-light ratio and then solving the Poisson equation for the gravitational potential. The locations of the Lindblad resonances and the corotation resonance radius are then derived using the standard precession frequency curves in conjunction with bar pattern speeds recently estimated from the Tremaine-Weinberg method as applied to Integral Field Spectroscopy (IFS) data. Correlations between physical properties of bars and their host galaxies indicate that bar {\it length} and the corotation radius depend on the disk circular velocity while bar {\it strength} and pattern speed do not. As the bar pattern speed decreases, bar strength, length, and corotation radius increase, but when bars are subclassified into fast, medium, and slow domains, no significant change in bar length is found. Only a hint of an increase of bar strength from fast to slow bars is found. These results suggest that bar length in galaxies undergoes little evolution, being instead determined mainly by the size of the host galaxy.
INTRODUCTION
Bars in galaxies can be described by three properties: bar length, strength, and pattern speed. Numerical simulations suggest that these parameters will change over time, owing to the angular momentum exchange between the bar and the dark halo. When the bar is deprived of its angular momentum by the dark halo, the pattern speed of the bar slows down.

In the presence of weak ovals or low-contrast bars, test-particle simulations (Schwarz 1981, 1984; Simkin et al. 1980) have shown that nuclear, inner, and outer rings secularly develop near the principal resonances: the inner Lindblad resonance (ILR), the 4:1 (ultraharmonic) resonance (UHR), and the outer Lindblad resonance (OLR), respectively. In the presence of a strong bar, the concept of an ILR may not apply, and the formation of a nuclear ring will depend on the presence of the x2 orbit family (Regan & Teuben 2004). The lengths of nuclear bars and of large-scale bars are correlated with the ILR and the corotation radius (CR), respectively (Rautiainen & Salo 1999). This means that the pattern speed of a bar determines the sizes of rings and bars.
To measure the pattern speed, we need kinematic information from spectroscopy, while the bar length and strength can be calculated from photometric images. Although many indirect ways to measure the bar pattern speed from photometry have been proposed (Roberts et al. 1979; Prendergast 1983; Puerari & Dottori 1997; Rautiainen et al. 2008; Buta & Zhang 2009; Pérez et al. 2012; Buta 2017), the most reliable way is the Tremaine-Weinberg method (Tremaine & Weinberg 1984, hereafter TW method), which measures the bar pattern speed directly from spectroscopy. The pattern speed is derived from the mean line-of-sight (LOS) velocity over several positions, based on the continuity equation. The measurement requires time-consuming long-slit observations at several positions parallel to the line of nodes (Merrifield & Kuijken 1995). However, recent Integral Field Spectroscopy (IFS) data facilitate the measurement of the bar pattern speed by making it possible to obtain multiple pseudo long-slits from a single observation. The Calar Alto Legacy Integral Field Area (CALIFA) survey (Sánchez et al. 2012), the Mapping Nearby Galaxies at APO (MaNGA) survey (Bundy et al. 2015), and the Multi Unit Spectroscopic Explorer (MUSE) surveys have been used to measure the bar pattern speeds for ∼100 galaxies at 0 < z < 0.15 so far (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020; Williams et al. 2021).
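As an illustration of the measurement itself, the sketch below (a minimal Python example, not the pipeline of the cited surveys; the array names are hypothetical) computes the intensity-weighted mean velocity ⟨V⟩ and position ⟨X⟩ for each pseudo-slit and recovers Ω_bar sin i as the slope of ⟨V⟩ against ⟨X⟩:

```python
import numpy as np

def tw_pattern_speed(flux, vlos, x, inclination_deg):
    """
    Tremaine-Weinberg estimate of the bar pattern speed from pseudo-slits.

    flux, vlos : (n_slits, n_pix) surface brightness and line-of-sight
                 velocity along slits parallel to the line of nodes.
    x          : (n_pix,) positions along the slits relative to the centre.
    Returns Omega_bar in (velocity unit) / (x unit).
    """
    w = np.sum(flux, axis=1)
    mean_v = np.sum(flux * vlos, axis=1) / w   # photometry-weighted <V> per slit
    mean_x = np.sum(flux * x, axis=1) / w      # photometry-weighted <X> per slit
    slope = np.polyfit(mean_x, mean_v, 1)[0]   # slope = Omega_bar * sin(i)
    return slope / np.sin(np.radians(inclination_deg))
```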
When the pattern speed of a bar is known, the corotation radius R_CR can be estimated from the rotation curve. While IFS data allow rotation curves to be derived, a reasonable assumption is that the rotation curve is flat in the corotation region (Guo et al. 2019; Cuomo et al. 2019). With an estimate of the bar radius, R_bar, we are then able to derive the important ratio R = R_CR/R_bar from infrared images. Debattista & Sellwood (2000) suggested that in a minimum halo, a bar would be limited to R = 1.4. The limit to how far a bar can extend is given by the orbital calculations of Contopoulos (1980). He showed that stellar orbits are aligned parallel to the bar, supporting the shape of the bar, inside the corotation radius, but are perpendicular to the bar beyond it. Therefore, the ratio has been used to classify barred galaxies into slow (R > 1.4), fast (1 ≤ R ≤ 1.4), and ultrafast (R < 1) bars (Aguerri et al. 2003; Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020).
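These limits translate directly into a classification rule, sketched below (our own helper, following the R thresholds quoted above):

```python
def classify_bar(r_cr, r_bar):
    """Classify a bar by R = R_CR / R_bar using the limits quoted in the text."""
    R = r_cr / r_bar
    if R < 1.0:
        return R, "ultrafast"
    if R <= 1.4:
        return R, "fast"
    return R, "slow"

# e.g. classify_bar(6.0, 5.0) -> (1.2, "fast")
```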
However, previous observations have not yet found any coherent clues for bar evolution (Pérez et al. 2012; Aguerri et al. 2015; Guo et al. 2019; Cuomo et al. 2020; Kim et al. 2021); most galaxies are in the fast-bar phase (Cuomo et al. 2019, 2020), which could imply that dark halos have low concentration (Debattista & Sellwood 2000) or that the angular momentum exchange is inefficient (Athanassoula 2002, 2013). On the other hand, simulations including the Evolution and Assembly of GaLaxies and their Environments (EAGLE) (Schaye et al. 2015; McAlpine et al. 2016) and the Illustris The Next Generation (IllustrisTNG) (Nelson et al. 2018, 2019) projects showed that most barred galaxies have slow bars (Algorry et al. 2017; Roshan et al. 2021). When it comes to host galaxies, observations do not show any significant correlation between the ratio R and galactic properties, including morphological type, stellar mass, dark matter, age, and metallicity (Guo et al. 2019; Garma-Oehmichen et al. 2020; Cuomo et al. 2020). Pérez et al. (2012) explored the evolution of R at z < 0.8 but found no change of R with redshift. Kim et al. (2021) also showed little or no evolution in bar length at 0.2 < z ≤ 0.835 from the Cosmological Evolution Survey with the Hubble Space Telescope (HST/COSMOS).
In this paper, we try a different approach to studying bar evolution by examining the bar pattern speed on the frequency curves of Ω − κ/2, Ω − κ/4, Ω, and Ω + κ/2. Each of these curves decreases with increasing galactocentric radius, such that if the bar pattern speed decreases with time, the radius of corotation and the locations of the inner and outer Lindblad resonances (ILR, OLR) and the ultraharmonic resonance (inner 4:1 resonance, or UHR) all increase with time. To construct frequency curves, Schmidt et al. (2019) estimated a potential profile from velocity curves by assuming an axisymmetric Miyamoto-Nagai gravitational potential (Miyamoto & Nagai 1975). Garma-Oehmichen et al. (2020) obtained angular velocity curves from spectroscopic data by fitting the VELFIT model (Spekkens & Sellwood 2007). In this work, we derive frequency curves from photometry. We construct the mass map by applying a dynamical mass-to-light ratio (van de Sande et al. 2015) to the surface brightness distribution and analyze the potential map constructed by solving the Poisson equation (Buta & Block 2001; Lee et al. 2020). We utilize the bar pattern speeds measured by the TW method from spectroscopy in the literature (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020).
This paper is organized as follows. Section 2 introduces our sample, obtained from the Pan-STARRS DR1 data archive. Section 3 describes the process by which we obtain frequency curves from photometry. Section 4 shows the results on the relation between the bar pattern speed and the frequency curves. We classify barred galaxies into fast, medium, and slow bars, a classification that might be related to bar evolution, and compare the classifications with properties of the host galaxies and bars. Sections 5 and 6 present the discussion and summary, respectively.
SAMPLES AND DATA
We collect sample galaxies whose bar pattern speeds Ω_bar have been measured from recent IFS observations, including CALIFA and MaNGA, with the TW method (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020). There are 89 galaxies in total (excluding duplicates). We obtain their optical images from the Pan-STARRS DR1 data archive (PS1). The sample galaxies are distributed from SB0 to SBc in Hubble type. The TW method was first designed for early-type barred galaxies, where a large fraction of old stars is assumed to obey the continuity equation (Tremaine & Weinberg 1984; Corsini 2011), but it has been reliably applied under various conditions, including to late-type spirals and gas tracers (Hα) (Emsellem et al. 2006; Fathi et al. 2009; Aguerri et al. 2015; Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020). The CALIFA sample galaxies are distributed over 0.005 < z < 0.03 and −19.5 ≤ M_r ≤ −22.5 in redshift and absolute SDSS r-band magnitude, respectively (Cuomo et al. 2020). The MaNGA sample galaxies span the ranges 0.02 < z < 0.15 and −19.5 ≤ M_r ≤ −23 (Cuomo et al. 2020).
In addition, we analyze PS1 images of IC 1438 and NGC 2935, whose frequency curves were derived by Schmidt et al. (2019), to compare the results of this photometric analysis with those measured from spectroscopy. Schmidt et al. (2019) derived their frequency curves from the potential obtained by fitting the rotation curves, along with three other galaxies. They measured the corotation radius using two photometric estimations: Fourier analysis of the azimuthal profile (Puerari & Dottori 1997) and the change of the dust lane (Roberts et al. 1979; Prendergast 1983). They determined the ILR, UHR, CR, and OLR by comparing the pattern speed with the frequency curves.
For the 91 galaxies, we collect g_P1 and i_P1 band images from the PS1. Pan-STARRS, with its Gigapixel Camera mounted at Haleakala Observatory on the island of Maui, Hawaii, provides good image quality, with a pixel scale of 0.258″ and FWHM of 1.31″ and 1.11″ for g_P1 and i_P1, respectively (Magnier et al. 2020). We deconvolve the images to obtain sharper rotation curves in the central region, removing the seeing effect by applying the Lucy-Richardson algorithm with the FWHM of each band (Chung et al. 2020), which influences the measurement of the ILRs.
We mask foreground stars, adjacent galaxies, and stellar clumpy regions within the target galaxies for the automatic analyses (Lee et al. 2019). We deproject galaxies using the orientation parameters, position angle and ellipticity, and reject five galaxies that are highly elongated due to high inclinations of ∼70 degrees. Because the resulting bar properties, including the bar length, strength, and pattern speed, are very sensitive to the orientation parameters (Zou et al. 2019; Lee et al. 2020; Garma-Oehmichen et al. 2020), we use the orientation parameters reported in the literature (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020) to make a fair comparison. We also reject one galaxy with a pattern speed of nearly zero, Ω_bar = 0.4 km s^-1 kpc^-1. Accordingly, there are 85 barred galaxies in the final sample. We present the parameters we used, including inclination, position angle (PA), and bar pattern speed, in the Appendix.
Frequency Curves from Surface Brightness
Stellar orbits in weak non-axisymmetric potentials can be described by the epicycle theory of nearly circular orbits in an axisymmetric potential (Binney & Tremaine 1987). The circular orbit frequency, namely the angular velocity Ω(r), is derived from the gravitational potential Φ(x, y) as

Ω²(r) = (1/r) ⟨∂Φ/∂r⟩.   (1)

The epicyclic frequency κ(r), with which stars move inward and outward about the circular orbit, is determined as

κ²(r) = r dΩ²/dr + 4Ω²,   (2)

where ⟨ ⟩ stands for an azimuthal average (Binney & Tremaine 1987; Pfenniger 1990; Michel-Dansac & Wozniak 2006; Schmidt et al. 2019). The corotation radius is defined as the radius where the stellar angular velocity becomes the same as the bar pattern speed, Ω = Ω_bar. Resonances occur when the difference between the stellar angular velocity and the pattern speed, multiplied by an integer, equals the epicyclic frequency: m(Ω − Ω_bar) = ±κ for integer values of m (Binney & Tremaine 1987; Elmegreen 1998). Although we cannot be sure whether the stellar orbits become resonant when the axisymmetry is broken by a strong bar, the epicyclic approximation has been widely used to estimate resonance locations (Schwarz 1981; Combes & Elmegreen 1993; Byrd et al. 1994; Buta & Combes 1996; Combes 1996; Buta et al. 1999; Buta 2002; Michel-Dansac & Wozniak 2006; Schmidt et al. 2019; Williams et al. 2021).
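For concreteness, the following minimal Python sketch evaluates Equations 1 and 2 numerically on a toy axisymmetric potential; the function name and the test potential are illustrative and not part of the paper's pipeline.

```python
import numpy as np

def frequency_curves(r, phi):
    """Evaluate Omega(r) and the curves Omega - kappa/2, Omega - kappa/4,
    and Omega + kappa/2 from an azimuthally averaged potential Phi(r).

    Implements Equation 1, Omega^2 = (1/r) dPhi/dr, and Equation 2,
    kappa^2 = r d(Omega^2)/dr + 4 Omega^2, with finite differences.
    """
    omega2 = np.gradient(phi, r) / r
    kappa2 = r * np.gradient(omega2, r) + 4.0 * omega2
    omega = np.sqrt(omega2)
    kappa = np.sqrt(np.clip(kappa2, 0.0, None))  # guard against numerical noise
    return omega, omega - kappa / 2, omega - kappa / 4, omega + kappa / 2

# Sanity check with a toy logarithmic potential (flat rotation curve,
# v0 = 200 km/s): analytically Omega = v0/r and kappa = sqrt(2) v0/r.
r = np.linspace(0.1, 20.0, 500)                  # kpc
v0 = 200.0                                       # km/s
omega, ilr_curve, uhr_curve, olr_curve = frequency_curves(r, v0**2 * np.log(r))
```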
The gravitational potential can be constructed with the assumption of a constant mass-to-light ratio (M/L) (Quillen et al. 1994). The photometric surface brightness distribution is translated into the mass distribution, which yields the two-dimensional potential through the Poisson equation. The bar strength is the ratio of the transverse force to the radial one (force ratio, hereafter) and can be calculated from the potential as well (Buta & Block 2001; Lee et al. 2020).
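As a rough illustration of the Poisson step, the sketch below solves for the in-plane potential of a razor-thin disk with an FFT. The actual procedure (Lee et al. 2020, described further below) additionally convolves the mass map with an exponential vertical density profile, which is omitted here; the function name and unit choices are our own.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def potential_map_thin_disk(sigma, pixel_kpc):
    """FFT solution of the Poisson equation for a razor-thin disk.

    For an infinitely thin sheet, Phi(k, z=0) = -2 pi G Sigma(k) / |k|.
    `sigma` is a surface mass density map in Msun/kpc^2; the returned
    potential is in (km/s)^2.  Zero-padding the map before calling this
    would suppress the periodic images implied by the FFT.
    """
    ny, nx = sigma.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_kpc)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_kpc)
    kk = np.hypot(kx[None, :], ky[:, None])
    kk[0, 0] = 1.0                    # placeholder; the k=0 mode is zeroed below
    phi_k = -2.0 * np.pi * G * np.fft.fft2(sigma) / kk
    phi_k[0, 0] = 0.0                 # the constant offset of Phi is arbitrary
    return np.real(np.fft.ifft2(phi_k))
```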
From Light To Mass
The determination of the M/L allows us to translate the photometric luminosity into stellar mass. Bell & de Jong (2001) first explored the relation between the M/L and the color by comparing observed colors with stellar population synthesis (SPS) models. van de Sande et al. (2015) extended the relation to a dynamical M/L based on direct stellar kinematic mass measurements, estimated from the effective radius, Sérsic index, and velocity dispersion measurements (Cappellari et al. 2006). This does not depend on assumptions about the metallicity, stellar initial mass function (IMF), or the SPS models.
We construct our mass maps from the color and the absolute magnitude using a formula of the form

log(M_dyn/M_⊙) = log(M_dyn/L_i) − 0.4 (M_i − M_i,⊙),

where log(M_dyn/L_i) is a linear function of the g − i color, derived from the relation between the dynamical M/L ratio and the color (van de Sande et al. 2015, see their Table 3) by comparison with the absolute magnitude of the Sun. The dynamical mass within each pixel is determined by its i-band absolute magnitude from the i_P1 band images and the g − i color of the galaxy. We adopt the mean g − i color within a scale length h_r in the radial color profile as the g − i color of the galaxy. We use the PS1 i-band solar absolute magnitude of M_i,⊙ = 4.52 (in AB; Willmer 2018). However, we note that the relation in van de Sande et al. (2015) was explored for massive quiescent galaxies with a mass limit of M_* > 10^11 M_⊙ and with a color selection of U − V > 0.88 (V − J) + 0.59 (Williams et al. 2009).
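A hedged sketch of the pixel-wise mass conversion follows; the intercept and slope standing in for the van de Sande et al. (2015, Table 3) color relation are placeholders, not the published coefficients.

```python
import numpy as np

# Placeholder intercept and slope standing in for the linear
# log(M_dyn/L_i) vs. (g - i) relation of van de Sande et al. (2015, Table 3);
# substitute the published coefficients for any real use.
A_ML, B_ML = 0.0, 1.0
M_I_SUN = 4.52  # PS1 i-band solar absolute magnitude (AB; Willmer 2018)

def mass_map(i_abs_mag, g_minus_i):
    """Pixel-wise dynamical mass (Msun) from an i-band absolute magnitude
    map and a single global g - i color (the mean color within one disk
    scale length, as adopted in the text)."""
    log_li = -0.4 * (i_abs_mag - M_I_SUN)    # pixel luminosity in solar units
    log_ml = A_ML + B_ML * g_minus_i         # dynamical M/L in the i band
    return 10.0 ** (log_ml + log_li)
```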
Calculation of Potential Map
The potential map was constructed following the procedures of Lee et al. (2020) by solving the Poisson equation with the Fast Fourier Transform (FFT) on Cartesian coordinates (Hohl & Hockney 1969; Quillen et al. 1994; Buta & Block 2001). In constructing the potential map, the vertical density distribution is assumed to follow the exponential model (Laurikainen & Salo 2002a; Buta et al. 2004; Lee et al. 2020). Two-dimensional mass maps are converted to a three-dimensional mass distribution by convolving them with the vertical density profile. The vertical scale height is taken from the ratio of the disk scale length to the vertical scale height, h_r/h_z, considering the different disk thicknesses according to the Hubble type T: 4 for T ≤ 1, 5 for 2 ≤ T ≤ 4, and 9 for T ≥ 5 (de Grijs 1998; Laurikainen et al. 2004b; Díaz-García et al. 2016; Lee et al. 2020). We measure the scale length h_r with an exponential fit to the i-band surface brightness profile, obtained from the IDL-based ellipse fitting (Lee et al. 2019).

Frequency Curves

Figure 1 shows examples of the frequency curves (right panels) for IC 1438 and NGC 2935 obtained from our photometric approach with our adopted dynamical M/L ratio. We display Ω − κ/2, Ω − κ/4, Ω, and Ω + κ/2 by green, gray, blue, and red curves, respectively. The bar pattern speed Ω_bar determines the corotation radius where Ω_bar intersects the circular orbit frequency Ω(r). In the same way, the locations of the ILR, UHR, and OLR are determined by Ω_bar = Ω − κ/2, Ω_bar = Ω − κ/4, and Ω_bar = Ω + κ/2, respectively (Binney & Tremaine 1987). The pattern speeds Ω_bar of these two galaxies were estimated by Schmidt et al. (2019) from the photometric methods mentioned above. Ω_bar is displayed by an orange horizontal line. R_ILR (green), R_UHR (gray), R_CR (blue), and R_OLR (red) are presented with uncertainties on the horizontal line of Ω_bar.
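The intersection step illustrated in Figure 1 can be coded directly; the sketch below assumes the frequency curves have already been computed (e.g., with the `frequency_curves` helper sketched earlier) and uses simple linear interpolation between samples.

```python
import numpy as np

def resonance_radii(r, curves, omega_bar):
    """Locate the radii where the horizontal line Omega_bar crosses each
    frequency curve.  `curves` maps a name to a sampled curve, e.g.
    {'ILR': omega - kappa/2, 'UHR': omega - kappa/4, 'CR': omega,
     'OLR': omega + kappa/2}.  Returns the outermost crossing per curve,
    or None when the line never intersects it (e.g. no CR for a very
    fast bar)."""
    out = {}
    for name, f in curves.items():
        crossings = np.where(np.diff(np.sign(f - omega_bar)) != 0)[0]
        if crossings.size == 0:
            out[name] = None
            continue
        i = crossings[-1]                              # outermost crossing
        t = (omega_bar - f[i]) / (f[i + 1] - f[i])     # linear interpolation
        out[name] = r[i] + t * (r[i + 1] - r[i])
    return out
```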
We estimate the uncertainties from (1) the scatter of the relation between the dynamical M/L and the color (van de Sande et al. 2015) and (2) the difference between the assumptions of a thick (h_r/h_z = 4) and thin (h_r/h_z = 9) disk. The scatter of log M_dyn/L is larger than the orthogonal scatter of the best-fitting line by a factor of 1.5 (van de Sande et al. 2015, see their Figure 3). Therefore, we measure the uncertainties of log M_dyn/L_i by multiplying the mean orthogonal scatter of the best fit in the i band (i.e., 1.3) by 1.5. The green, sky blue, and orange columns, from top to bottom, indicate the R_ILR, R_CR, and R_OLR with error ranges calculated by Schmidt et al. (2019).

Figure 1. Examples of g − i color maps (left) and frequency curves (right) for IC 1438 (top row) and NGC 2935 (bottom row) from the potential map based on photometry. In the left panels, the nuclear, inner, and outer rings appear bluer than their surroundings. In the right panels, the green, gray, blue, and red curves show Ω − κ/2, Ω − κ/4, Ω, and Ω + κ/2, in sequence. The orange horizontal line indicates the bar pattern speed Ω_bar from the literature (Schmidt et al. 2019). The solid black circles mark the intersections of the frequency curves with the bar pattern speed; from the center outward they are R_ILR, R_UHR, R_CR, and R_OLR. The horizontal error bar represents the uncertainty of each resonance location. The green, sky blue, and orange columns, from top to bottom, represent the R_ILR, R_CR, and R_OLR with error ranges calculated by Schmidt et al. (2019).
Comparison with the results in the Literature
Schmidt et al. (2019) calculated the frequency curves for five spiral galaxies including IC 1438 and NGC 2935. They constructed the radial velocity curves from Hα emission line observations and calculated the gravitational potential by fitting the rotation curves to the axisymmetric Miyamoto-Nagai gravitational potential,

Φ(R, z) = −GM / [R² + (a + √(z² + b²))²]^(1/2),

where Φ(R, z) is the Miyamoto-Nagai potential at (R, z) and M indicates the total mass (Miyamoto & Nagai 1975). The parameters a and b are shape parameters, which represent a flattened disk distribution with the ratio b/a ∼ 0.4, an ellipsoidal distribution with b/a = 1, and a spherical distribution with b/a ∼ 5 (Binney & Tremaine 1987; Schmidt et al. 2019). They estimated the potential considering two or three components designated by b/a and calculated the angular velocity with Equation 1. They used the epicyclic frequency κ(r) in the form

κ²(r) = 2 (V²/r²) (1 + d ln V / d ln r),

where V(r) is the rotation velocity (Elmegreen 1998). Figure 1 shows the galaxies in common between Schmidt et al. (2019) and this study. For IC 1438 (top row), our measurements of R_CR and R_OLR are consistent with the estimates of Schmidt et al. (2019, see their Figure 3) within the errors, while R_ILR is located inward compared to theirs. When comparing R_ILR, we note the different ways of dealing with a bulge: they adopted an ellipsoidal distribution of b/a = 1, while we considered the bulge region to be exponentially distributed in the z direction like the disk. In the case of NGC 2935 (bottom row), our estimate of R_ILR is similar to that of Schmidt et al. (2019), whereas R_CR and R_OLR are larger than theirs (but still similar considering the uncertainty).
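For reference, this short routine evaluates the Elmegreen (1998) form of κ(r) from a tabulated rotation curve; it is an illustration, not Schmidt et al.'s code.

```python
import numpy as np

def kappa_from_rotation_curve(r, v):
    """Epicyclic frequency from a rotation curve V(r) via
    kappa^2 = 2 (V/r)^2 (1 + d ln V / d ln r)  (Elmegreen 1998)."""
    dlnv_dlnr = np.gradient(np.log(v), np.log(r))
    kappa2 = 2.0 * (v / r) ** 2 * (1.0 + dlnv_dlnr)
    return np.sqrt(np.clip(kappa2, 0.0, None))
```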
The right panels of Figure 1 show that IC 1438 and NGC 2935 have all kinds of resonances, including the ILR, UHR, CR, and OLR. In the left panels, their color index maps show blue features at the circumnuclear, inner, and outer rings. It is interesting that the three kinds of rings are located near the ILR, UHR, and OLR, in sequence (Schmidt et al. 2019). Outer rings are occasionally associated with the outer 4:1 resonance located between the CR and OLR, according to their subclass, R1 or R1′ (Buta 2017).
To provide the result for a larger sample, we present the frequency curves for the full sample in Figure 2.
Corotation Radius versus Bar Length
As an indicator of the bar pattern speed, the distance-independent ratio R = R_CR/R_bar has been used in hydrodynamical simulations to model gas and shocks (Lindblad et al. 1996a,b; Weiner et al. 2001). Debattista & Sellwood (2000) suggested the limit of R ≤ 1.4 for a fast bar that ends its slowdown in a maximum disk (minimum halo). The upper limit to which a bar can extend was suggested to be R = 1 in orbital calculations (Contopoulos 1980). On this basis, observational studies have classified barred galaxies into slow (R > 1.4), fast (1 < R ≤ 1.4), and ultrafast (R < 1) bars (Aguerri et al. 2003; Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020).
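The R-based classification reduces to a simple threshold rule, sketched below with the conventional limits R = 1 and R = 1.4.

```python
def classify_by_ratio(r_cr, r_bar):
    """Ultrafast / fast / slow classification by R = R_CR / R_bar,
    using the conventional limits R = 1 and R = 1.4."""
    R = r_cr / r_bar
    if R > 1.4:
        return "slow"
    if R > 1.0:
        return "fast"
    return "ultrafast"
```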
The bar length has usually been estimated by ellipse fitting (Martin 1995; Wozniak et al. 1995; Jogee et al. 2004) or Fourier analysis (Ohta et al. 1990; Laurikainen & Salo 2002a,b). Because bars do not end with sharp edges, it is not trivial to define the full length. Although there have been many efforts to find the best way to determine the length of a bar, each method has its own strengths and weaknesses (Athanassoula & Misiriotis 2002; Michel-Dansac & Wozniak 2006; Cuomo et al. 2021). The widely used way is to measure the radius where the ellipticity, ϵ, or the normalized Fourier amplitude, A_2, reaches a maximum in the radial profile, R_ϵ or R_A2, even though it cannot estimate the full length of the bar (Wozniak et al. 1995; Athanassoula & Misiriotis 2002; Laurikainen & Salo 2002a,b). Similarly, we can define the bar radius where the radial profile of the transverse-to-radial force ratio has a plateau or a maximum peak, R_Qb (Lee et al. 2020; Cuomo et al. 2021). Figure 3 shows the relation between R_CR and R_bar measured from R_ϵ, R_A2, and R_Qb, together with the regimes of slow, fast, and ultrafast bars classified by the ratio R. The dashed and solid lines represent R_CR = 1.4 R_bar and R_CR = R_bar, respectively. We present the mean values and standard deviations of R_bar and R_CR by black squares with error bars. The mean value of R is given with its standard deviation at the top left of each panel. In panel (d), R_CR* and R_bar* are adopted from the literature (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020).
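All three bar length measures share the same numerical core: take the radius of the maximum of a radial profile, optionally within an outer limit to avoid spiral-arm contamination. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def bar_radius_from_profile(r, profile, r_limit=None):
    """Bar radius as the radius where a radial profile (ellipticity,
    A2 amplitude, or force ratio Q_b) reaches its maximum.  The optional
    outer limit excludes spiral-arm contamination, as done for A2 in
    this work via visual inspection."""
    keep = np.ones(r.shape, dtype=bool) if r_limit is None else (r <= r_limit)
    return r[keep][np.argmax(profile[keep])]
```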
Basically, the classification by R depends on the method used to measure the bar length. In our measurements, galaxies are categorized into 29 slow bars, 9 fast bars, and 40 ultrafast bars when R_ϵ is adopted for R_bar (Figure 3(a)). Five galaxies are rejected because they have no R_CR intersected by Ω_bar in their angular velocity curves, owing to their rapid pattern speeds. Ultrafast bars with R ≤ 1 cannot exist theoretically, but they are found in observations (Cuomo et al. 2019, 2021; Guo et al. 2019). The possibility of real ultrafast bars was raised also by Zhang & Buta (2007). When we use R_A2 (Figure 3(b)) or R_Qb (Figure 3(c)) instead, the number of slow bars increases to 39 galaxies, and the number of fast bars increases to 15 or 13 galaxies, respectively; the number of ultrafast bars decreases to 24 or 26 galaxies. On the other hand, when we adopt the values directly from the literature (Figure 3(d)), the sample galaxies are classified into 20 slow bars, 23 fast bars, and 40 ultrafast bars. Except for the sample of Guo et al. (2019) (orange circles), most galaxies fall within the regimes of the fast and ultrafast bars.
The mean values of R are 1.37, 1.68, and 1.45 for R_ϵ, R_A2, and R_Qb, respectively. They are somewhat larger compared with R = 1.01 ± 0.79 in the literature, owing to the smaller bar lengths in our measurements. The mean bar length is 7.7 kpc in the literature, while it is measured to be 5.9, 4.9, and 4.7 kpc by R_ϵ, R_A2, and R_Qb, respectively. We obtain the smallest R from R_ϵ and the largest R from R_A2. This can more or less reconcile the difference in R between observations and simulations: observations have preferentially used the ellipse fitting method to measure the bar length (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020), whereas simulations have utilized Fourier analysis (Lindblad et al. 1996a,b; Debattista & Sellwood 2000). Although observations often examined the bar length from the bar-to-interbar intensity ratio based on Fourier analysis (Guo et al. 2019; Cuomo et al. 2019), it is different from the way the bar length is usually calculated in the simulations (Lindblad et al. 1996a,b; Debattista & Sellwood 2000). We will discuss the issues of the bar length measurements and the bar evolution on R in Sections 5.1 and 5.2, respectively.
Bar Pattern Speed versus Frequency Curves
Here, we take a different approach to utilize the bar pattern speed more effectively: we compare the bar pattern speed with the Lindblad precession frequency curves directly. If the bar pattern speed decreases by the exchange of angular momentum between a bar and a dark halo (Athanassoula 2002, 2003; Debattista & Sellwood 2000), then the locations of all resonances except the inner ILR (iILR) will increase with time. The pattern speed will cross more frequency curves, from Ω to Ω − κ/4 or Ω − κ/2, in turn producing a corotation radius (CR), an ultraharmonic resonance (UHR), and one or two inner Lindblad resonances (ILRs). Using these resonances, Byrd et al. (1994) suggested a more physical classification that defines a fast, a medium, and a slow bar according to whether the bar pattern speed sets up a CR, UHR, and ILR, in sequence. Although we cannot trace the process of the slowdown of the bar for a given galaxy, we can glean "snapshots" of galaxies with fast, medium, and slow bars from observational data.
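This classification can be expressed as a lookup on the resonances set up by the pattern speed; the sketch below consumes the output of the `resonance_radii` helper sketched earlier.

```python
def classify_by_resonances(radii):
    """Byrd et al. (1994)-style classes from the resonances set up by the
    pattern speed (`radii` as returned by resonance_radii above): a fast
    bar hosts a CR (and OLR) only, a medium bar adds a UHR, and a slow
    bar adds an ILR; with no CR, the galaxy is treated as nonbarred."""
    if radii.get("CR") is None:
        return "nonbarred"
    if radii.get("ILR") is not None:
        return "slow"
    if radii.get("UHR") is not None:
        return "medium"
    return "fast"
```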
Figure 4 displays example galaxies with images (top row) for each class. The second row shows the frequency curves with the pattern speed for a fast (f), a medium (g), and a slow bar (h). The horizontal lines represent the bar pattern speed with uncertainties. The red, blue, gray, and green curves describe Ω + κ/2, Ω, Ω − κ/4, and Ω − κ/2, respectively. We present the corotation radius by a dotted vertical line. Although we also indicate the bar lengths R_ϵ (red), R_A2 (green), and R_Qb (blue) by vertical sticks, this classification is independent of the bar length, in contrast to the classification based on the ratio R. In the bottom row, we display the radial profiles of the force ratio for each galaxy, as introduced in Lee et al. (2020). This helps us to understand the evolution of bars, which we will discuss in Section 5.3.
Among the 83 galaxies, we find five galaxies with bar pattern speeds so high that they do not intersect the stellar angular velocity curve Ω (Figure 4(e)). All of them are also classified as nonbarred galaxies from the analysis of the force ratio map (Lee et al. 2020). According to the definition of Byrd et al. (1994), a fast bar is one hosting a CR and an OLR, because its high pattern speed crosses Ω and Ω + κ/2 (Figure 4(f)). A medium bar is defined with a UHR, CR, and OLR, when the pattern speed crosses the frequency curves Ω − κ/4, Ω, and Ω + κ/2 (Figure 4(g)). A slow bar has all kinds of resonances (Figure 4(h)). Byrd et al. (1994) used simulations and found a hint of evolutionary stages for fast, medium, and slow bars: for example, a fast bar shows an outer ring, while a medium bar has outer and inner rings, and a slow bar shows a nuclear ring as well as outer and inner rings. According to our classification scheme, there are 8 slow bars, 59 medium bars, and 11 fast bars; the five galaxies without a CR are categorized as nonbarred galaxies. van Albada & Sanders (1982) showed models with one or two ILRs, depending on whether an x_2 family extends to the center or stops before it. However, we do not find any galaxies with two ILRs in our sample, probably because the data we analyze are not good enough to resolve the central region within ∼1 kpc of the center, where the inner ILR is likely to be located. We also note that the measured ILRs in this study appear slightly smaller than those in Schmidt et al. (2019), which might be caused by the different ways of dealing with a bulge in the potential calculation (even though the difference is not large considering the uncertainty). There could be some slow bars missed in this study for a similar reason. Figure 5 shows the relation between the bar length and the corotation radius. This plot is similar to Figure 3, but here galaxies are classified into fast (blue triangle), medium (green square), and slow bars (solid red circle) according to the relation between the bar pattern speed and the frequency curves. The newly defined fast, medium, and slow bars are placed in a sequence similar to that of the ultrafast, fast, and slow bars classified by the ratio R, though they do not correspond perfectly to each other. The newly defined fast bars fall in the region occupied by the ultrafast bars defined by R.
We present the mean values of R for the newly defined fast, medium, and slow bars in Table 1. Although there are some differences according to the measurements of the bar length, they increase, on average, from 0.62 to 1.48 and 2.82 from fast to medium and slow bars. We investigate the properties of barred galaxies in terms of the newly defined classes of fast, medium, and slow bars in Sections 4.2 and 4.3.

Figure 4. Example galaxies for each class: images (top row), frequency curves with the bar pattern speed (middle row), and radial profiles of the transverse-to-radial force ratio (bottom row). In the top row, circles indicate the OLR, CR, UHR, and ILR with red, blue, gray, and green dotted lines. Red, green, and blue crosses indicate bar positions at R_bar measured by ellipse fitting (R_ϵ), Fourier analysis (R_A2), and force ratio (R_Qb), respectively. The bar positions are determined by analyzing force ratio maps from R_bar measured by each method (Lee et al. 2020; Cuomo et al. 2021). The middle row shows the relation between the bar pattern speed and the frequency curves for each class. The red, blue, gray, and green curves represent Ω + κ/2, Ω, Ω − κ/4, and Ω − κ/2, respectively: OLR only for the nonbarred galaxy (e), CR and OLR for the fast bar (f), UHR, CR, and OLR for the medium bar (g), and ILR, UHR, CR, and OLR for the slow bar (h). The blue dotted vertical line displays the corotation radius, and short vertical sticks represent the bar length measured by ellipse fitting (red), Fourier analysis (green), and force ratio (blue). The bottom row shows the radial profile of the transverse-to-radial force ratio. The bar classification based on the force ratio map and the bar strength are given at the top right. When the radial profile has a plateau, it is classified as type P (i). Type M indicates a galaxy with a maximum peak on the radial profile (j)-(l).

Figure 5. Relation between the corotation radius and the bar length measured by (a) ellipse fitting, (b) Fourier analysis, and (c) force ratio. The newly defined fast, medium, and slow bars (Byrd et al. 1994) are represented by blue triangles, green squares, and solid red circles. The fast, medium, and slow bars are roughly placed in the regions of ultrafast, fast, and slow bars, respectively, classified by the ratio R. The black symbols indicate the mean values of the bar length and the corotation radius for each class.

Bar Properties and Host Galaxy

Figure 6 shows the bar length and strength as a function of the disk circular velocity V_circ, which is a parameter tightly correlated with the galaxy luminosity through the Tully-Fisher relation (TF relation; Tully & Fisher 1977). We obtain the disk circular velocity derived from spectroscopy in the literature (Cuomo et al. 2019; Garma-Oehmichen et al. 2020). We calculate the bar length R_bar (top row) and strength S_bar (bottom row) using ellipse fitting (R_ϵ and ϵ_max, left), Fourier analysis (R_A2 and A_2, middle), and force ratio (R_Qb and Q_b, right). All the calculations are conducted following Lee et al. (2020) except the definition of A_2, which is

A_2 = √(a_2² + b_2²) / a_0,

where a_0, a_2, and b_2 are the Fourier coefficients; this is to compare the results with more studies (Athanassoula 2013; Seo et al. 2019). We note another indicator of bar strength, Max(Δμ), the maximum difference between the luminosity profiles along and perpendicular to the bar axis, which correlates well with A_2 (Buta 2017; Kim et al. 2021).
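A minimal sketch of the A_2 calculation at a fixed radius follows; the Fourier convention here (a_0 taken as the azimuthal mean, a_2 and b_2 as twice the projection averages) is one common choice and may differ in detail from the paper's implementation.

```python
import numpy as np

def a2_amplitude(theta, intensity):
    """Normalized m = 2 Fourier amplitude of an azimuthal profile at a
    fixed radius: A2 = sqrt(a2^2 + b2^2) / a0.  Here a0 is taken as the
    azimuthal mean and a2, b2 as twice the projection averages; theta
    must sample [0, 2 pi) uniformly."""
    a0 = intensity.mean()
    a2 = 2.0 * np.mean(intensity * np.cos(2.0 * theta))
    b2 = 2.0 * np.mean(intensity * np.sin(2.0 * theta))
    return np.hypot(a2, b2) / a0
```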
The top row shows that the bar length depends on the circular velocity of its host galaxy. The dependence becomes strongest when we measure the bar length by the force ratio, R_Qb (Figure 6(c)). Previous studies reported that the bar length depends on several galaxy properties, including galaxy luminosity (Kormendy 1979; Cuomo et al. 2020), effective radius and disk scale length (Ann & Lee 1987; Erwin 2019), and stellar mass (Díaz-García et al. 2016; Erwin 2019). In particular, Erwin (2019) showed that the bar length is a strong function of galaxy size in terms of the effective radius R_e or disk scale length h_r. He also showed an additional dependence of the bar length on galaxy mass for massive galaxies.
On the other hand, in the bottom row, we find hardly any dependence of the bar strength on the circular velocity of the host galaxy. In previous studies, the maximum ellipticity ϵ_max appears constant across the Hubble type sequence. Lee et al. (2020) reported the opposite tendencies of A_2 and Q_b at the two ends of the Hubble sequence, because the measurements of A_2 and Q_b are influenced in opposite directions by a large bulge (Lee et al. 2020). However, we find similar distributions of A_2 and Q_b with circular velocity for our sample, which is constrained to T ≤ 5 owing to the limits of applying the TW method (Figure 6(e) and (f)). Cuomo et al. (2020) also reported no correlation between the bar strength estimated with A_2 and the galaxy luminosity for their sample galaxies analyzed by the TW method. However, it is interesting that strong bars measured with A_2 and Q_b are prominent in galaxies with lower velocity, V_circ ∼ 150 km s^-1 (Figure 6(e) and (f)). This seems different from the distribution of long bars, which are hosted by galaxies with higher velocity, V_circ > 250 km s^-1 (top row). Figure 7 displays other important properties of bars, the bar pattern speed Ω_bar and corotation radius R_CR, as a function of Hubble type T and disk circular velocity V_circ. We present the mean values for the Hubble type (gray solid lines) and linear fits with disk circular velocity (blue dotted lines). We find that the bar pattern speed has no significant dependence either on the Hubble type or on the disk circular velocity (Figure 7(a) and (b)), even though there is an S0 galaxy with an exceptionally high pattern speed. On the other hand, the corotation radius shows a weak correlation with the disk circular velocity (Figure 7(d)). When it comes to the Hubble type, Figure 7(c) shows that the earlier-type spirals (0 ≤ T ≤ 3) have a larger corotation radius than the later-type spirals (4 ≤ T ≤ 5).
When we investigate this in terms of the new classification of fast (blue triangle), medium (green square), and slow (solid red circle) bars, Figure 6 does not show any significant difference in the correlation between bar properties (the length and the strength) and the disk circular velocity; galaxies of different types are blended over a wide range of disk velocities. However, in Figure 7, fast bars are distinguished by having a higher bar pattern speed and a smaller corotation radius, in contrast to slow bars. In particular, slow bars have the largest corotation radius for a given Hubble type bin or a specific velocity of the host galaxy (Figure 7(c) and (d)). Panels (a) and (c) of Figure 7 show that fast bars are concentrated in the later-type spirals (T ≥ 3), whereas slow bars are distributed throughout the Hubble sequence. Rautiainen et al. (2008) reported the opposite results: earlier-type spirals have only fast bars, whereas later-type spirals host both fast and slow bars, though their definition of fast and slow bars is not the same. In terms of R, they showed that later-type spirals have a larger value of R. In Figure 8, we also compare R with the Hubble type using different bar length measurements: ellipse fitting (a), Fourier analysis (b), and force ratio (c). Figure 8(d) shows the galaxies based on the measurements of R_CR*/R_bar* from the literature. In our measurements, the mean R appears slightly larger in earlier-type spirals (T ≤ 1), even though later-type spirals have a wider range of R. On the other hand, other studies have reported no correlation between R and the Hubble type (Garma-Oehmichen et al. 2020; Cuomo et al. 2020). It seems to require a much larger sample size to better understand the relation between the ratio R and the Hubble type.

Figure 8. In (d), we present the ratio R derived from the corotation radius R_CR* and the bar length R_bar* in the literature. We display the mean value of R with error bars in each bin. The blue triangle, green square, and solid red circle represent the newly defined fast, medium, and slow bars. The gray dotted horizontal lines indicate R = 1 and R = 1.4, distinguishing the ultrafast, fast, and slow regimes designated by R.

Relations Between Bar Properties

Figure 9 displays relations between the bar pattern speed Ω_bar and other properties of bars, including the corotation radius R_CR, bar length R_bar, and strength S_bar. We present the bar lengths on an absolute scale (top row) and on a scale normalized by the disk scale length h_r (middle row). The relation between the bar pattern speed and the disk scale length is displayed in the leftmost panel of the middle row.

First, we find that the bar pattern speed Ω_bar is anti-correlated with the other properties of bars: as the bar pattern speed decreases, the values of the other parameters increase. Figure 9(a) shows that the pattern speed is anti-correlated with the corotation radius (confirmed by ρ = −0.73). This is expected because the disk angular velocity decreases in proportion to r^(-3/2), so a low pattern speed corresponds to a large corotation radius. On the other hand, the pattern speed has weak anti-correlations with the bar length and the strength, with ρ ∼ −0.35. This appears to support the concept of bar growth in terms of length and strength through the slowdown of a bar as it loses angular momentum to a dark halo (Debattista & Sellwood 2000; Athanassoula 2003; Seo et al. 2019).
However, if the relation between the bar pattern speed and the frequency curves gives a hint of the evolutionary stage of barred galaxies, we can investigate the relations between the bar pattern speed and the other bar properties from another viewpoint. In Figure 9, we present the mean values with error bars for fast (blue triangle), medium (green square), and slow bars (solid red circle).
Panel (a) shows that slow bars have a lower pattern speed and a larger corotation radius than fast and medium bars, as expected. However, we cannot find any increase in the bar length from fast bars to medium or slow bars (top row). In the case of R_Qb, it even decreases from fast bars to slow bars (Figure 9(d)). When we investigate the normalized bar length, it shows a tendency of larger bar lengths for slowly rotating bars (middle row). However, this is caused by the decrease in the disk scale length from fast to slow bars, as shown in panel (e). We note that the disk scale length could also change between fast and slow bars. For the bar strength, we find a weak tendency of increasing strength from fast to slow bars, even though the increases are within the error bars (bottom row).
To examine the differences among the new subclasses of fast, medium, and slow bars, we perform the Anderson-Darling (A-D) test for each combination of subclasses. We list the relevant p-value in each panel, which indicates the probability that the two samples are drawn from the same parent distribution. First, the p-values of the A-D test for the pattern speed distributions (Ω_bar) between fast and medium bars and between medium and slow bars are 0.009 and 0.002, respectively. This means that the newly defined subclasses of bars are relevant to the differentiation of the pattern speed. Second, the probability from the A-D test for each property index, R_CR, h_r, R_bar, R_bar/h_r, and S_bar, is noted as p in each panel, which shows that the three subclasses have different distributions in the corotation radius but not in the other properties. We will discuss these results on the bar evolution in Section 5.3.

Figure 9. Relations between the bar pattern speed Ω_bar and other properties of bars: corotation radius R_CR (a), bar length R_bar (top and middle rows), and strength S_bar (bottom row). The bar length and strength are measured by ellipse fitting (left), Fourier analysis (middle), and force ratio (right). The bar length is represented on an absolute scale (top row) and on a scale normalized by the disk scale length h_r (middle row). The relation between the bar pattern speed and the disk scale length h_r is shown in panel (e). Spearman's correlation coefficient (ρ) is presented with the significance (P) in each panel. The newly defined fast, medium, and slow bars are denoted by the blue triangle, green square, and solid red circle. The mean and standard deviation (σ) for each class are represented in the same color. The probability (p) from the Anderson-Darling test for the property in the ordinate, compared among the newly defined fast, medium, and slow bars, is displayed in each panel. The superscripts s vs. f, m vs. f, and s vs. m stand for the two groups slow vs. fast, medium vs. fast, and slow vs. medium bars, respectively.
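The pairwise tests can be reproduced with SciPy's k-sample Anderson-Darling implementation; note that SciPy caps the returned significance level to the range [0.001, 0.25], so extreme p-values are reported at the cap.

```python
from itertools import combinations
from scipy.stats import anderson_ksamp

def pairwise_ad_tests(groups):
    """Two-sample Anderson-Darling test for each pair of subclasses.
    `groups` maps a class name ('fast', 'medium', 'slow') to an array of
    one property (e.g. Omega_bar values).  SciPy caps the returned
    significance level to the range [0.001, 0.25]."""
    return {
        (a, b): anderson_ksamp([groups[a], groups[b]]).significance_level
        for a, b in combinations(groups, 2)
    }
```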
Bar Length and Resonance
In this work, we have used three measures of bar length defined by the radii, R_ϵ, R_A2, and R_Qb, where the ellipticity (ϵ), Fourier amplitude (A_2), and force ratio (Q_b) reach their maxima in the radial profiles. Figure 10 shows correlations between the bar length measurements. R_A2 and R_Qb are similar to each other, resulting in a slope of one for the correlation between the two (Figure 10(c)). However, they are measured to be shorter than R_ϵ by 20% (Figure 3(a) and (b)). Díaz-García et al. (2016) reported that R_ϵ is the best indicator of the visually estimated bar length.

Figure 10. Comparison between the bar lengths measured by different methods: ellipse fitting (R_ϵ), Fourier analysis (R_A2), and force ratio (R_Qb). We present Spearman's correlation coefficient (ρ) with the significance (P). The solid line denotes the linear fit between bar lengths from different methods.

Figure 4 shows four example galaxies with the three bar length measurements overlaid on their images. We present the positions of the tip of the bar measured from ellipse fitting (red), Fourier analysis (green), and force ratio (blue) on the images, in the same manner as Cuomo et al. (2021). When we investigate the bar length measurements on the images one by one, all three measurements are compatible for galaxies with simple structures, such as the one shown in Figure 4(a). However, when a galaxy hosts a ring or pseudoring around the bar, R_ϵ tends to be located on the ring (Figure 4(b)-(d)): the ellipticity gradually increases up to the ring radius. This can make the bar length overestimated (Cuomo et al. 2021). On the contrary, in Figure 4(d), R_A2 and R_Qb are located at a very inner region despite the long bar visible in the image. This may mean that the maximum radius of A_2 or Q_b cannot reflect the bar length, particularly in the case of a strong bar. We also note that R_A2 can be easily affected by spiral arms (Laurikainen & Salo 2002a; Laurikainen et al. 2004a). Buta et al. (2003) introduced an extrapolation method to separate a bar from spiral arms in the Fourier amplitude profile. In this work, we set a radius limit in finding the maximum A_2 to avoid contamination from spiral arms through visual inspection.
From N-body simulations, Michel-Dansac & Wozniak (2006) showed that the bar lengths measured by different methods are correlated with different resonances. They investigated various radii available to define the bar length, where a maximum, a minimum, or a transition between a bar and a disk appears in the ellipticity or Fourier amplitude profile. They showed that the bar length defined as the radius of minimum ellipticity corresponds to the corotation radius (CR), while the bar length defined by the transition between a bar and a disk lies close to the ultraharmonic resonance (UHR). They reported that the length measured by the force ratio, R_Qb, is located in the circumnuclear region, and that the length measured by the maximum ellipticity does not show any correlation with dynamical resonances. As a result, they considered that the bar length measured by the maximum ellipticity, R_ϵ, is not a proper estimator of the bar length. Therefore, we compare the correlations between the bar length estimates and the dynamical resonance locations for our whole sample in Figure 11. R_ILR, R_UHR, R_CR, and R_OLR are measured where the pattern speed of a bar Ω_bar intersects the frequency curves Ω − κ/2, Ω − κ/4, Ω, and Ω + κ/2, in sequence. The plot shows R_ILR, R_UHR, R_CR, and R_OLR from top to bottom and the bar length measurements R_ϵ, R_A2, and R_Qb from left to right. We display Spearman's correlation coefficient (ρ) with the significance (P) at the top right and the best-fit relation between the resonance radii and the bar length with the solid line.
The plot shows that all the dynamical resonances are strongly correlated with the bar lengths regardless of the method used to measure the bar length. The correlations between the bar length and the resonance locations seem to be tighter for outer resonances such as the OLR or CR. The bar lengths from R_A2 and R_Qb are located near the CR (Figure 11(h) and (i)), but the scatter is very large. Compared to the results of Michel-Dansac & Wozniak (2006), our resonance radii tend to be located inward, because both the minimum ellipticity and the transition between a bar and a disk occur beyond the maximum ellipticity. In conclusion, we hardly find distinct links between specific dynamical resonances and the different bar length measurements. The simulations may not be sufficient for comparison with observations because few are available; we need more simulation models with various properties for a detailed comparison.
Evolution of Barred Galaxies in terms of R
In Section 4.1.1, we introduced that most barred galaxies with measured bar pattern speeds belong to fast bars in terms of R (Cuomo et al. 2019; Garma-Oehmichen et al. 2020). The observed galaxies with lower R, or the small number of slow bars, have led to concerns about a less-concentrated dark matter halo or inefficient angular momentum exchange between a bar and a dark halo (Pérez et al. 2012; Aguerri et al. 2015). First, we suggest that the different bar length measurements can, at least in part, explain the discrepancy of R between observations and simulations. Observations have usually used the ellipse fitting method to measure the bar length (Guo et al. 2019; Cuomo et al. 2019; Garma-Oehmichen et al. 2020), while simulations obtained the bar length from Fourier analysis (Lindblad et al. 1996a; Debattista & Sellwood 2000; Athanassoula 2013; Seo et al. 2019). In our measurements, R_A2 from the Fourier analysis is shorter than R_ϵ by 20%, which can explain a larger R in simulations.
Secondly, we argue that the specific criterion of R = 1.4 may not be appropriate for classifying barred galaxies into fast and slow bars or for constraining the dark halo density. The simulation that suggested R ≤ 1.4 only for a dark halo with low density did not include gas (Debattista & Sellwood 2000). However, Athanassoula (2014) showed that barred galaxies even in a dense halo evolve within R = 1.4 when they initially have enough gas. Moreover, the shape or spin of a halo influences the evolution of barred galaxies: triaxial or fast-spinning haloes drive lower R (Athanassoula 2014; Long et al. 2014; Collier et al. 2018).
Nevertheless, observations show large differences from the simulation results, including those of the EAGLE and IllustrisTNG projects. Roshan et al. (2021) obtained R ∼ 2.5, on average, for simulated galaxies of EAGLE and IllustrisTNG by measuring the bar pattern speed and the bar length through the TW method and R_A2. The simulated galaxies have much longer corotation radii, over 10 kpc, and smaller bar lengths (⟨R_A2⟩ = 3.1 kpc) than those in observations. Algorry et al. (2017) also reported that strong bars in EAGLE simulations have corotation radii larger than 10 times the bar length. However, we cannot find such highly evolved barred galaxies in observations. The IllustrisTNG simulations also yield a slower bar pattern speed, ⟨Ω_bar⟩ = 25.2 km s^-1 kpc^-1 (Roshan et al. 2021), than that of our sample galaxies, which have ⟨Ω_bar⟩ = 44.1 ± 29.1 km s^-1 kpc^-1.
Little Secular Evolution of Barred Galaxies
In Section 4.2, we examined the dependence of the bar length on the disk circular velocity of the host galaxy and found that rapidly rotating disks host long bars (Figure 6). The Tully-Fisher relation (Tully & Fisher 1977) dictates that rapidly rotating disks are luminous and massive. The observation of longer bars in brighter, more massive, and larger galaxies in the local universe (Kormendy 1979; Ann & Lee 1987; Erwin 2019; Cuomo et al. 2020) could be an outcome of various processes. Long bars could inherit their size from their host galaxies; larger and more massive galaxies could make their bars grow longer more effectively. Host galaxies and bars could also evolve together by mutual interactions. In any case, we need to consider the disk velocity when we investigate the evolution of barred galaxies.
In Figure 4, we showed examples of a nonbarred galaxy along with those of fast, medium, and slow bars. We selected them by fixing their velocity at V_circ = 190 ± 10 km s^-1, except for the nonbarred galaxy. In our sample, nonbarred galaxies without a CR are mainly slowly rotating galaxies with V_circ < 150 km s^-1; the circular velocity of UGC 3944 is 148 km s^-1 (Figure 4(a)). In the bottom row, we present the radial profile of the force ratio for each galaxy. Lee et al. (2020) introduced a way to analyze a force ratio map, defined as the transverse-to-radial force ratio, by investigating the radial and azimuthal profiles. From a comparison with simulations, they suggested an evolution process of barred galaxies on the radial profile of the force ratio: galaxies grow from a plateau (type P) to a maximum peak (type M) on the radial profile with increasing force ratio Q_b (Lee et al. 2020, see their Figure 19). The galaxies in Figure 4 seem to follow the evolution process suggested in Lee et al. (2020): the radial profile shows a plateau for the nonbarred galaxy (Figure 4(i)), whereas the fast, medium, and slow bars have a maximum peak on the radial profile (Figure 4(j)-(l)). They show increasing force ratios Q_b from the fast bar to the slow bar. On the other hand, we hardly find an increase in the bar length from the fast bar to the slow bar. In particular, the maximum radii of A_2 and Q_b seem to be located inward compared to the bar end in the slow bar.
In Figure 9, the anti-correlation between the bar pattern speed and the bar length seems to show the growth of the bar length as the bar pattern speed decreases through an exchange of angular momentum between a bar and a dark halo (Athanassoula 2002, 2003). However, we are concerned that all of the longer bars with R_bar > 10 kpc are found only in rapidly rotating systems, namely massive galaxies. When we investigate them by classifying them into fast, medium, and slow bars, we cannot find any difference in the bar length between fast and slow bars. Although the normalized bar length shows a trend of increasing from fast to slow bars, this is caused by the decrease in the disk scale length. Therefore, long bars in massive galaxies seem to inherit their size from the host galaxy in which they form. The bar instability in a massive disk galaxy may yield a large, massive bar from the beginning of bar formation. When we normalize the bar length by indicators of galaxy size, we need to be careful that the disk scale length could change as well during the bar evolution. When it comes to the bar strength, we can find a hint of an increase by evolution, but the amount of the increase is not large. In conclusion, we do not find the increase of bar length and strength predicted for bar evolution by numerical simulations (Debattista & Sellwood 2000; Athanassoula 2003; Seo et al. 2019). This is in line with previous observations that most galaxies stay in the phase of fast bars in terms of R (Guo et al. 2019; Cuomo et al. 2019). The recent observational study of Kim et al. (2021) supports little secular evolution of barred galaxies as well. They investigated the evolution of bar length and strength at 0.2 < z ≤ 0.835 from HST/COSMOS data and showed that the absolute and normalized bar lengths have hardly changed over the last 7 Gyr, with only a slight increase of the bar strength over cosmic time. Therefore, they discussed simulation models in which bars could experience very little secular evolution, including a gaseous disk, a triaxial halo (Athanassoula 2013), or increasing dark halo spin (Long et al. 2014). Okamoto et al. (2015) also showed that bars do not always grow by evolution in self-consistent hydrodynamical simulations of two Milky Way-mass galaxies in a cosmological context.
SUMMARY
We have derived the stellar frequency of the circular orbit Ω and the epicyclic precession frequencies, Ω ± κ/2 and Ω − κ/4 for barred galaxies from photometry. We constructed mass maps using the dynamical mass-to-light ratio from a surface brightness distribution and a galaxy color. The gravitational potential is calculated by solving the Poisson equation for the mass map. We determined the resonance locations, ILR, UHR, CR, and OLR, by directly putting the bar pattern speed on the frequency curves. We utilized the bar pattern speed measured with the TW method from Integral Field Spectroscopy (IFS) data in the literature.
Our main results are summarized as follows. 1. We show that the ratio R = R_CR/R_bar depends on the method of bar length measurement. The bar lengths from the Fourier analysis and the force ratio are measured to be smaller than that from the ellipse fitting by 20%. This explains, at least partly, the larger R values in simulations, which usually used the Fourier analysis to measure the bar length.
2. We take a different approach to classify barred galaxies into fast, medium, and slow bars by putting the bar pattern speed on the frequency curves. This reflects an evolutionary process in which a bar whose pattern speed is lowered by losing angular momentum intersects more frequency curves. We found 11 fast, 59 medium, and 8 slow bars in this way, even though we might have missed some slow bars due to the resolution limit. Five galaxies have no corotation radius because their high bar pattern speeds do not intersect the angular velocity curve.
3. We find that the bar length and corotation radius depend on the disk circular velocity of the host galaxy, while the bar strength and the pattern speed are independent of the disk circular velocity. Long bars are found in galaxies with higher velocity, V_circ > 250 km s^-1. However, strong bars are prominent in galaxies with lower velocity, V_circ ∼ 150 km s^-1.
4. The bar pattern speed is anti-correlated with other properties of bars: as the bar pattern speed decreases, the corotation radius, the bar length, and the strength increase. However, if we divide the galaxies into fast, medium, and slow bars, there is no increase in the bar length; we only find a hint of an increase in the strength. The bars in galaxies seem to experience little evolution in terms of bar length and strength.
We thank the reviewer for detailed and insightful comments on the manuscript, which greatly improved the paper. MGP acknowledges support from the Basic Science Research Program through the National Research
"year": 2021,
"sha1": "729c1c3c184c30116a3a84bfb3ec2e76dda39c3b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "729c1c3c184c30116a3a84bfb3ec2e76dda39c3b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Enlarged Optic Nerve Axons and Reduced Visual Function in Mice with Defective Microfibrils
Abstract

Glaucoma is a leading cause of irreversible vision loss due to retinal ganglion cell (RGC) degeneration that develops slowly with age. Elevated intraocular pressure (IOP) is a significant risk factor, although many patients develop glaucoma with IOP in the normal range. Mutations in microfibril-associated genes cause glaucoma in animal models, suggesting the hypothesis that microfibril defects contribute to glaucoma. To test this hypothesis, we investigated IOP and functional/structural correlates of RGC degeneration in mice of either sex with abnormal microfibrils due to heterozygous Tsk mutation of the fibrillin-1 gene (Fbn1 Tsk/+). Although IOP was not affected, Fbn1 Tsk/+ mice developed functional deficits at advanced age consistent with glaucoma, including reduced RGC responses in electroretinogram (ERG) experiments. While RGC density in the retina was not affected, the density of RGC axons in the optic nerve was significantly reduced in Fbn1 Tsk/+ mice. However, reduced axon density correlated with expanded optic nerves, resulting in similar numbers of axons in Fbn1 Tsk/+ and control nerves. Axons in the optic nerves of Fbn1 Tsk/+ mice were significantly enlarged, and axon diameter was strongly correlated with optic nerve area, as has been reported in early pathogenesis of the DBA/2J mouse model of glaucoma. Our results suggest that microfibril abnormalities can lead to phenotypes found in early-stage glaucomatous neurodegeneration. Thinning of the elastic fiber-rich pia mater was found in Fbn1 Tsk/+ mice, suggesting mechanisms allowing for optic nerve expansion and a possible biomechanical contribution to determination of axon caliber.
Introduction
Glaucoma, a leading cause of irreversible vision loss and blindness, is a neurodegenerative disease associated with aging, defined by a specific pattern of optic nerve damage and visual field loss (Davis et al., 2016; Jonas et al., 2017). A leading hypothesis of glaucoma is that deficits in axon transport, likely resulting from mechanical stress at the optic nerve head, initiate slowly developing axon degeneration and eventual death of retinal ganglion cells (RGCs; Calkins, 2012). While elevated intraocular pressure (IOP) is a likely cause of optic nerve stress, many patients with apparently normal IOP develop glaucoma.
Previously, we identified a mutation in a microfibril-associated gene, ADAMTS10, as causative for primary open angle glaucoma in a colony of dogs (Kuchtey et al., 2011). This finding has been verified and expanded by independent studies to include an additional mutation in ADAMTS10 and mutations in ADAMTS17 as causative for glaucoma in other dog breeds (Ahonen et al., 2014; Forman et al., 2015; Oliver et al., 2015). In human genome-wide association studies, loci near ADAMTS8 were found associated with IOP and vertical cup-disk ratio, which are important glaucoma endophenotypes (Springelkamp et al., 2014, 2017), suggesting that ADAMTS genes are involved in human glaucoma. ADAMTS10 and ADAMTS17 both contribute to formation and function of fibrillin-1 microfibrils (Kutz et al., 2011; Hubmacher and Apte, 2015; Hubmacher et al., 2017), leading us to form the hypothesis that microfibril defects can cause glaucoma (Kuchtey and Kuchtey, 2014). Other genes involved in microfibril function, such as LOXL1 (Thorleifsson et al., 2007) and LTBP2 (Ali et al., 2009; Narooie-Nejad et al., 2009; Kuehn et al., 2011), are associated with human glaucoma, lending further support to our microfibril hypothesis of glaucoma.
Microfibrils are polymers of fibrillin-1 in the extracellular matrix that contribute to mechanical properties of a variety of tissues (Ramirez and Sakai, 2010). Although microfibrils can form fibrous structures on their own, such as the zonule fibers which support the lens of the eye, they are most commonly associated with elastic fibers. Microfibrils are required for formation of elastic fibers, which invariably consist of an elastin core surrounded by a sheath of microfibrils (Yanagisawa and Davis, 2010; Baldwin et al., 2013). Microfibrils and elastic fibers are found in key tissues for glaucoma pathogenesis, such as the optic nerve and the trabecular meshwork, which is involved in IOP elevation (Wheatley et al., 1995; Gelman et al., 2010). In diseases caused by microfibril defects, elastic fiber networks can be disrupted, as in the aorta of mice with a mutation in the fibrillin-1 gene (Fbn1) used as a model of Marfan syndrome (Habashi et al., 2006). In addition to their structural roles, microfibrils act as a depot for latent transforming growth factor-β (TGF-β) and bone morphogenetic protein (BMP), thereby playing a central role in the localization and regulation of signal transduction via TGF-β superfamily members (Ramirez and Rifkin, 2009; Horiguchi et al., 2012), which is particularly relevant since multiple studies implicate TGF-β in glaucoma pathogenesis (Fuchshofer and Tamm, 2012).
The objective of this study was to test the hypothesis that microfibril defects can cause glaucoma. To this end, IOP and RGC function and degeneration were characterized in mice with well-established microfibril abnormalities due to heterozygosity of the Tsk mutation of Fbn1 (Fbn1 Tsk/+; Kielty et al., 1998; Gayraud et al., 2000). Although IOP was not affected, age-related decline of visual acuity and RGC function was accelerated in Fbn1 Tsk/+ mice. While decreased RGC function occurred without a corresponding reduction in RGC somas or axons, the optic nerves and optic nerve axons were significantly enlarged in Fbn1 Tsk/+ as compared to wild-type control mice. The elastic fiber network of the pia mater was thinner in Fbn1 Tsk/+ mice, suggesting a mechanism for accelerated nerve enlargement. Our results indicate that microfibril deficiency accelerates age-dependent changes in the optic nerve at normal IOP that resemble early-stage glaucoma and may increase susceptibility to glaucoma.
Animal breeding and genotyping
All animal studies were performed in accordance with the Association for Research in Vision and Ophthalmology guidelines for the Use of Animals in Ophthalmic and Vision Research and were approved by the Institutional Animal Care and Use Committee of Vanderbilt University. Male mice heterozygous for the tight skin (Tsk) allele of Fbn1 (B6.Cg-Fbn1 Tsk/ϩ /j) and female mice homozygous for wild-type Fbn1 (B6.Cg-Fbn1 ϩ/ϩ /j), that had been backcrossed at least 14 generations to C57BL/6J were obtained from The Jackson Laboratory (https://www.jax.org/ strain/014632) and bred to produce cohorts of experimental animals heterozygous for the Tsk allele, hereafter referred to as Fbn1 Tsk/ϩ , and control animals homozygous for wild-type Fbn1, hereafter referred to as wt. The Tsk allele harbors a tandem duplication within the Fbn1 gene that results in a larger than normal in-frame transcript. Malformation of microfibrils are well characterized in Fbn1 Tsk/ϩ mice which, have thickened skin, visceral fibrosis, increased bone growth, lung emphysema, and myocardial hypertrophy (Kielty et al., 1998;Gayraud et al., 2000). The genotype of each experimental mouse was determined at weaning and confirmed after sacrificing. Breeding animals were screened for and found negative for the rd8 mutation associated with retinal degeneration that is present in the C57BL/6N strain (Mattapallil et al., 2012). Animals were housed in a facility operated by the Vanderbilt University Division of Animal Care, with 12/12 h light/dark cycle and ad libidum access to food and water.
IOP measurements
Mice were anesthetized with 2.5% isoflurane in oxygen delivered at 1.5 l/min by an inhalation anesthesia system (VetEquip). IOP of the right eye was measured using the iCare TonoLab rebound tonometer (Colonial Medical Supply), calculated as the average of three separate IOP determinations, each consisting of the mean of six consecutive error-free IOP readings, excluding the highest and lowest readings. To avoid effects of anesthesia on IOP (Ding et al., 2011), measurements were completed within 2 min of loss of consciousness. IOP was measured at the same time of day to control for diurnal variation (Dalvin and Fautsch, 2015).
Tonometer calibration
Mice were euthanized by inhalation of carbon dioxide, followed by cervical dislocation. The anterior chamber of one eye was cannulated with a 30-gauge needle attached via thick-walled rigid tubing to a 10-ml reservoir filled with PBS. IOP was set to pressures from 10 to 45 mmHg by placing the reservoir at heights from 136 to 612 mm above the eye. IOP in mmHg was calculated as the height of the reservoir above the eye in mm divided by 13.6 mm water/mmHg (e.g., 136 mm/13.6 = 10 mmHg). For each mouse, the procedure was repeated for the fellow eye.
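A minimal sketch of the calibration analysis follows; the tonometer readings in the example are hypothetical placeholder numbers, not measured data.

```python
import numpy as np

# Hypothetical paired values for illustration only: target pressures set by
# reservoir height (height_mm = iop_mmHg * 13.6) and tonometer readings.
set_iop = np.array([10, 15, 20, 25, 30, 35, 40, 45], dtype=float)  # mmHg
readings = np.array([9.2, 14.1, 19.5, 24.0, 29.3, 33.8, 38.9, 43.5])

slope, intercept = np.polyfit(set_iop, readings, 1)   # calibration line
corrected_iop = (readings - intercept) / slope        # reading -> true IOP
```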
Spectral domain optical coherence tomography (SD-OCT)
Mice were anesthetized with ketamine/xylazine (100/7 mg/kg), wrapped in gauze and placed in a holder. Eyes were kept moist using lubricant eye drops (Refresh Optive, Allergan). The anterior segment of the eye was imaged using the Bioptigen Envisu R2200 SD-OCT system for rodents (Leica Microsystems). Mouse position was adjusted until the Purkinje lines were perpendicular and parallel to the visual axis and centered on the corneal surface. Images were acquired in a rectangular scan pattern consisting of 100 B-scans, each consisting of 1000 A-scans. Image acquisition was completed before lens opacity or corneal damage appeared due to anesthesia (Bermudez et al., 2011; Koehn et al., 2015; Zhou et al., 2017). Central corneal thickness was determined by digital caliper.
Optomotor response
Photopic visual acuity of mice was assessed using the optomotor system (OptoMotry; Prusky et al., 2004). Briefly, mice were placed on an elevated platform centered among four LCD screens on which vertical gratings traveled in either a clockwise or counterclockwise direction at 12°/s. Mice were acclimated in the testing arena for five minutes before the initiation of each test. Head tracking movement of the tested mice was identified by observation. Acuity threshold was determined by increasing the spatial frequency (cycles/degree) of the gratings until the optomotor response could no longer be elicited.
Flash electroretinogram (ERG)
Scotopic ERG responses were measured using the Espion system (Diagnosys). After dark adaptation overnight, mice were prepared for recordings under dim red illumination. Mice were anesthetized with ketamine/xylazine/urethane (28/11.2/800 mg/kg), and their eyes dilated with one drop of tropicamide (1%, Bausch & Lomb) and one drop of phenylephrine (2.5%, Paragon Bioteck). After placing mice under a Ganzfeld dome with a heating pad, gold electrodes were placed on the corneas and ground electrodes placed subcutaneously at the flank. Flash stimuli consisted of flashes of white light of 4-ms duration generated by light emitting diodes. Waveforms were recorded in response to flashes ranging in intensity from -5 to 0 log cd·s/m^2, in 1 log increments, by averaging responses to multiple consecutive flashes at each intensity (30 flashes at -5 to -3 log cd·s/m^2, 10 flashes at -2 to 0 log cd·s/m^2 stimulus intensities). Interflash intervals were 5 s for -5 to -3 log cd·s/m^2, 15 s for -2 and -1 log cd·s/m^2, and 20 s for 0 log cd·s/m^2. Recordings included a 100-ms prestimulus baseline with data collected up to 500 ms after stimulus onset. Results were averaged from both eyes. Raw data were exported into Excel (Microsoft) for analysis. The pSTR, nSTR, and a-wave amplitudes were measured from baseline to the corresponding peak or trough. The b-wave amplitudes were measured from the a-wave trough to the b-wave peak. Response latency was defined as the time interval between stimulus onset and the corresponding peak or trough.
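As a rough illustration of how these amplitude and latency definitions translate into an analysis step (the paper used Excel; this Python sketch, with hypothetical function names and simplified window logic, covers only the a-/b-wave case and ignores the STR-specific windows):

```python
import numpy as np

def ab_wave_metrics(t_ms: np.ndarray, v_uv: np.ndarray, stim_ms: float = 0.0) -> dict:
    """Baseline-to-trough a-wave, trough-to-peak b-wave, latencies from stimulus onset."""
    baseline = v_uv[t_ms < stim_ms].mean()          # mean of the 100-ms prestimulus segment
    post, t_post = v_uv[t_ms >= stim_ms], t_ms[t_ms >= stim_ms]
    a_idx = int(post.argmin())                      # a-wave trough
    b_idx = a_idx + int(post[a_idx:].argmax())      # b-wave peak after the trough
    return {
        "a_amp_uv": baseline - post[a_idx],         # amplitude measured to baseline
        "b_amp_uv": post[b_idx] - post[a_idx],      # a-wave trough to b-wave peak
        "a_latency_ms": t_post[a_idx] - stim_ms,
        "b_latency_ms": t_post[b_idx] - stim_ms,
    }
```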
Immunohistochemistry of retinal whole mounts and RGC quantification
Mice were sacrificed by carbon dioxide inhalation and perfused through the heart with 20 ml PBS followed by 20 ml of 4% PFA in PBS. Eyes were then enucleated and post-fixed in 4% PFA in PBS for 1 h. Whole-mount retinas were dissected and oriented with a notch cut into the dorsal aspect, placed in PBS, and stored overnight at 4°C. Retinas were then placed in 100 µl of 0.5% Triton X-100 in PBS and incubated at -80°C for 20 min. After thawing, retinas were placed in blocking buffer (5% normal donkey serum, 0.5% Triton X-100 in PBS) and incubated for 4 h at room temperature on a rocker. Retinas were then immunostained for the RGC-specific marker Brn3a (Nadal-Nicolás et al., 2009) using a goat polyclonal anti-Brn3a antibody (1:100, Santa Cruz Biotechnology, catalog #sc-31984) and for phosphorylated neurofilaments using the SMI-31 mouse monoclonal antibody (1:1000, BioLegend, catalog #SMI-31P), diluted in blocking buffer. After incubation in primary antibodies for 40 h at 4°C with rocking motion, retinas were washed and then incubated in donkey anti-goat IgG Alexa Fluor 488 and donkey anti-mouse IgG Alexa Fluor 546 antibodies (Thermo Fisher Scientific), each diluted 1:1000 in 0.5% Triton X-100 in PBS, for 3 h at room temperature on a rocker, protected from light. Retinas were washed, then cover-slipped with mounting medium (Prolong Gold, Thermo Fisher Scientific). Two-color 20× montage images were acquired using a confocal microscope equipped with a computer-controlled stage (FluoView 1000, Olympus) and assembled into montage images of the entire retina using FluoView software.
RGC quantification was conducted using Photoshop (Adobe) and ImageJ (https://imagej.nih.gov/ij/index.html). RGCs were counted manually in 50,000 µm^2 boxes drawn at 1000, 1800, and 2500 µm (proximal, medial, and distal) from the optic nerve head, with five boxes counted for each quadrant. Boxes were placed at the same distances from the optic nerve for each retina, but their location was adjusted to exclude damaged tissue. The total area counted represented ~10% of the retinal area. Average RGC density was determined for each quadrant at proximal, medial, and distal locations.
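The density arithmetic is simple enough to state explicitly; the sketch below (our own, with made-up example counts) converts per-box counts into cells/mm^2 using the 50,000 µm^2 box size given above:

```python
import numpy as np

BOX_AREA_UM2 = 50_000  # counting-box area used in the protocol above

def rgc_density_per_mm2(box_counts) -> float:
    """Mean Brn3a+ RGC density (cells/mm^2) from manual per-box counts."""
    return float(np.mean(box_counts)) / BOX_AREA_UM2 * 1e6  # um^2 -> mm^2

# Five hypothetical boxes from one quadrant/eccentricity:
print(rgc_density_per_mm2([180, 172, 190, 185, 178]))  # ~3620 cells/mm^2
```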
Optic nerve histology and axon quantification
Mice were sacrificed by carbon dioxide inhalation and perfused through the heart with 20 ml PBS followed by 20 ml of 4% PFA in PBS. After perfusion, eye globes were pulled from the orbit with the optic nerve attached. Optic nerves were cut from the globes and placed in 1% glutaraldehyde/4% PFA in PBS. The length of the optic nerve stub remaining attached to the globe was measured using precision calipers (Instant Read-Out Digital Caliper, Electron Microscopy Sciences). Optic nerves were transferred to glass vials, washed in PBS, then placed in 2% osmium in PBS for 1 h.
Epon embedding of optic nerves was conducted similar to published protocols (Bosco et al., 2016). All incubations before placing nerves in molds were conducted in capped glass vials using 2-ml reagent volumes. Incubations were at room temperature unless otherwise noted. Epon resin (Electron Microscopy Sciences) was made fresh each day of use by mixing 5.5 ml DDSA, 1.5 ml Araldite 502, 2.5 ml Embed 812, and 190 µl DMP30. After a 5-min wash in PBS and dehydration in graded ethanol concentrations, nerves were incubated in 1:1 propylene oxide/100% ethanol for 30 min, followed by 100% propylene oxide for 1 h. Nerves were then incubated in 1:1 propylene oxide/epon resin overnight at 4°C with rocking motion. The propylene oxide/epon was replaced with fresh 100% epon and nerves incubated for 8 h, after which the epon was replaced with fresh epon and nerves incubated overnight under vacuum. Nerves were transferred to a silicon mold (Electron Microscopy Sciences) filled with epon, oriented and bubbles removed under a dissecting microscope, and incubated overnight at 60°C before sectioning.
To normalize the distance from the globe at which cross sections were taken, the block containing the nerve was cut in along the nerve such that the length of the optic nerve stub plus the cut-in distance was ~1.5 mm for all nerves. Cross sections 1 µm thick, perpendicular to the long axis of the optic nerve, were cut using an EM UC7 ultramicrotome equipped with a diamond blade (Leica), with 10-20 sections placed on a glass slide. Sections were stained with paraphenylenediamine (PPD) by immersing the slide in 1% PPD in 1:1 methanol/2-propanol for 3 min. Slides were washed three times in 1:1 methanol/2-propanol, then once in 100% ethanol. Coverslips were mounted using Permount Mounting Medium (Thermo Fisher Scientific).
Bright-field images were acquired with an inverted microscope equipped with a 100×/1.3 NA oil immersion objective (UPlanApo, Olympus) and a 5-megapixel CCD camera (DS-Fi2-U3, Nikon). Images were assembled to form montage images of the entire nerve using Image Pro software (Media Cybernetics). Analysis of optic nerve montage images was conducted using ImageJ and Photoshop. Nerve area was determined as the area of a polygon drawn around the nerve, not including the pia mater. To determine axon density, a mask drawn in Photoshop with 24 boxes of 310 µm^2 each was placed over the optic nerve image. The mask consisted of four central, eight medial, and 12 peripheral boxes placed 20, 72, and 120 µm from the center. The locations and sizes of the boxes were identical for all nerves and covered ~10% of the nerve area. Axons in each box were counted manually by a single operator to determine axon density. To estimate total axon number, the average axon density was multiplied by the nerve area.
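The final estimate is the mean sampled density times the nerve area; a small sketch (hypothetical counts; the box size and nerve area echo values from the text):

```python
def estimate_total_axons(box_counts, box_area_um2: float = 310.0,
                         nerve_area_mm2: float = 0.12) -> float:
    """Total axons ~= (mean density over the 24 sampled boxes) x nerve area."""
    density_per_um2 = sum(box_counts) / (len(box_counts) * box_area_um2)
    return density_per_um2 * nerve_area_mm2 * 1e6  # convert mm^2 to um^2

# 24 hypothetical box counts of ~124 axons each give ~48,000 axons:
print(round(estimate_total_axons([124] * 24)))
```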
Axon size was reported as the inner axon diameter, determined from the area of polygons drawn around individual axons, not including the myelin sheath, using ImageJ, with diameter calculated as diameter = 2 × √(area/π). Outer axon diameter was determined by measuring the area of the axon plus its myelin sheath and calculating the diameter, as above. Myelin sheath thickness was one half of the difference between the outer and inner diameters, and the g-ratio was calculated as the inner diameter divided by the outer diameter. Pia mater thickness was determined in cross sections of optic nerves taken at the lamina region and stained using the Luna method to visualize elastic fibers. The inner area of the nerve and the outer area of the nerve plus pia mater were determined and converted to diameters, as above. Pia mater thickness was calculated as one half of the outer minus inner diameter.
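These geometric definitions reduce to a few lines of arithmetic; the following sketch (our own naming) implements the equivalent-circle diameter, myelin thickness, and g-ratio exactly as defined above:

```python
import math

def diameter_from_area(area_um2: float) -> float:
    """Equivalent-circle diameter: d = 2 * sqrt(area / pi)."""
    return 2.0 * math.sqrt(area_um2 / math.pi)

def axon_geometry(inner_area_um2: float, outer_area_um2: float) -> dict:
    d_in = diameter_from_area(inner_area_um2)    # axon only
    d_out = diameter_from_area(outer_area_um2)   # axon plus myelin sheath
    return {
        "inner_diameter_um": d_in,
        "myelin_thickness_um": (d_out - d_in) / 2.0,
        "g_ratio": d_in / d_out,
    }

# Hypothetical areas chosen to give a g-ratio near the reported ~0.62:
print(axon_geometry(inner_area_um2=0.30, outer_area_um2=0.78))
```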
Glial area of the optic nerves, identified as areas not corresponding to axons, was measured by a semiautomated method using ImageJ. Analysis was performed on a circular region of 0.067 mm^2 cropped from the center of the optic nerve region to avoid edge effects. The cropped image was normalized using the Gaussian Blur function of ImageJ, with sigma set to eight pixels. Image brightness and contrast were adjusted to highlight glial cell bodies and processes, followed by image thresholding to create a binary image mask representing glial area, which was converted to red. To verify the accuracy of the glial mask, the original cropped nerve image was overlaid onto the mask. Areas overlying axons were manually removed from the glial mask. The percentage glial area was calculated as the area of the glial mask divided by the area of the cropped image of the optic nerve.
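A rough scripted analogue of that ImageJ workflow (ours, not the authors'; the threshold value and the omitted manual axon-exclusion step are placeholders) could look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def glial_area_percent(img: np.ndarray, sigma: float = 8.0, thresh: float = 0.85) -> float:
    """Blur-normalize, threshold to a binary glial mask, return percent area."""
    img = img.astype(float)
    flat = img / (gaussian_filter(img, sigma) + 1e-9)  # sigma = 8 px, as above
    mask = flat < thresh        # darker-than-surround pixels taken as glia (assumption)
    return 100.0 * mask.mean()  # manual removal of axon overlaps omitted here
```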
Experimental design and statistical analysis
All experiments were conducted in a blinded fashion. Sample sizes were not predetermined by statistical methods but are similar to or exceed numbers typical of similar experiments. Comparisons between male and female mice did not show statistically significant differences, and therefore approximately equal numbers of males and females were used in experiments and combined for analysis at six and 16 months of age. Statistical analysis was performed using GraphPad Prism version 7.02 for Windows (GraphPad Software). Significance of differences between groups was analyzed using either one-way ANOVA followed by Bonferroni correction for multiple comparisons (Figs. 1, 2, 4-6, 7B, 8) or the Kruskal-Wallis test followed by Dunn's multiple comparisons test (Fig. 7A). F values of ANOVA tests and numbers of subjects in each group are reported in the results and/or shown in the figures. Significant corrected p values are indicated by blue brackets for comparisons between wt mice, red brackets for comparisons between Fbn1 Tsk/+ mice, and black brackets for comparisons between wt and Fbn1 Tsk/+ mice. Results are presented as median or as mean ± SD and were considered significant for corrected p < 0.05.
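For readers without Prism, an equivalent comparison can be run in open tooling; the sketch below is a generic analogue, not the authors' code (it uses Welch t-tests where Prism's post test uses pooled variances), showing the one-way ANOVA plus Bonferroni pattern:

```python
from itertools import combinations
from scipy import stats

def anova_bonferroni(groups: dict):
    """One-way ANOVA across all groups, then Bonferroni-corrected pairwise tests."""
    f_stat, p_overall = stats.f_oneway(*groups.values())
    pairs = list(combinations(groups, 2))
    corrected = {}
    for a, b in pairs:
        _, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
        corrected[(a, b)] = min(1.0, p * len(pairs))  # Bonferroni correction
    return f_stat, p_overall, corrected
```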
IOP not affected by microfibril defect
IOP was measured by rebound tonometer at three, six, nine, and 12 months of age to determine whether IOP is elevated in microfibril deficient mice. On the contrary, Fbn1 Tsk/+ mice displayed a trend toward lower IOP compared to wt at three, six, and nine months (Fig. 1A), although the differences did not reach statistical significance (p > 0.67, one-way ANOVA, F(7,170) = 4.8, followed by Bonferroni's correction for multiple comparisons). A trend toward decreasing IOP with advancing age was apparent in both lines of mice, which was statistically significant for wt (three vs 12 months, p = 7.2 × 10^-5, and six vs 12 months, p = 0.0028), but not significant for Fbn1 Tsk/+ mice.
IOP measurement by rebound tonometer can be underestimated due to thin cornea (Iliev et al., 2006). Since thin cornea is associated with microfibril deficiency (Sultan et al., 2002; White et al., 2017), we measured central corneal thickness by SD-OCT imaging (Fig. 1B). The corneas of Fbn1 Tsk/+ mice were 17-26% thinner than wt at three, six, and nine months of age (p = 0.00053, 2 × 10^-6, and 1 × 10^-6, respectively; one-way ANOVA, F(5,83) = 17.96, followed by Bonferroni's multiple comparison tests; Fig. 1C), consistent with microfibril deficiency. To determine whether tonometer calibration was affected by cornea thickness, we measured IOP in eyes of Fbn1 Tsk/+ and wt mice with IOP fixed at various pressures by a variable-height reservoir. Calibration curves for Fbn1 Tsk/+ and wt mice were nearly identical (Fig. 1D), indicating that the lack of elevated IOP in Fbn1 Tsk/+ mice was not an artifact of cornea thinning.
Figure 1. IOP is not affected by microfibril deficiency. Fbn1 Tsk/+ (red symbols) and wt mice (blue symbols) had similar IOPs at three, six, nine, and 12 months of age (A). Central corneal thickness measured by SD-OCT imaging (B, red lines) was significantly thinner in Fbn1 Tsk/+ mice at three, six, and nine months (C). Tonometer calibration was not affected by the difference in corneal thickness (D, black lines: linear best fits, 10 wt and six Fbn1 Tsk/+ mice). Open symbols: individual mice; closed symbols: mean ± SD. Numbers of mice indicated in italics below each group (A, C). Representative SD-OCT images are from three-month-old mice (B).
Figure 2. Decreased visual acuity in Fbn1 Tsk/+ mice at advanced age. Age-dependent decline in visual acuity was observed for wt (blue symbols) and Fbn1 Tsk/+ mice (red symbols). At six months of age, there was no significant difference between wt and Fbn1 Tsk/+ mice. At 16 months of age, Fbn1 Tsk/+ mice showed decreased acuity with respect to wt. Open symbols: individual mice; closed symbols: mean ± SD. Numbers of mice indicated in italics below each group.
Decreased visual acuity threshold in Fbn1 Tsk/+ mice
Visual acuity was determined in optomotor response experiments with animals at six and 16 months of age (Fig. 2). In wt and Fbn1 Tsk/+ mice, age-related decreases in visual acuity were observed (one-way ANOVA, F(3,53) = 34.7, followed by Bonferroni's multiple comparison tests). For wt mice, visual acuity threshold decreased 15.8%, from 0.38 ± 0.01 cycles/degree at six months to 0.32 ± 0.03 cycles/degree at 16 months of age (p = 6.5 × 10^-5). A larger magnitude decrease with age was found for Fbn1 Tsk/+ mice, with a 29.7% decrease from 0.37 ± 0.03 cycles/degree at six months to 0.26 ± 0.03 cycles/degree at 16 months of age (p < 10^-6). No significant difference between wt and Fbn1 Tsk/+ mice was observed at six months of age (0.38 ± 0.01 vs 0.37 ± 0.03 cycles/degree). However, at 16 months of age, the acuity threshold of Fbn1 Tsk/+ mice was 21% lower than wt (0.26 ± 0.03 vs 0.32 ± 0.03 cycles/degree, p < 10^-6). These results show that microfibril deficiency results in acceleration of the age-dependent decline in visual acuity.
Inner retinal dysfunction in Fbn1 Tsk/+ mice
To investigate retinal dysfunction as a possible cause of decreased visual acuity, scotopic ERG response waveforms were acquired for dark-adapted mice exposed to full-field flashes of white light of varying intensities at six and 16 months of age (Fig. 3). At the lowest stimulus intensity (-5 log cd·s/m^2), small-amplitude responses were observed for wt and Fbn1 Tsk/+ at six months of age and for wt mice at 16 months of age but were nearly absent for 16-month-old Fbn1 Tsk/+ mice (Fig. 3, top row). At a stimulus intensity of -4 log cd·s/m^2, response waveforms displayed well-defined positive and negative scotopic threshold responses (pSTR, nSTR; Fig. 3, green and orange arrowheads), which arise from the inner retina, with major contributions from RGCs, particularly for the pSTR (Saszik et al., 2002; Bui and Fortune, 2004; Alarcón-Martínez et al., 2010). With increasing stimulus intensities, waveforms became dominated by outer retinal responses consisting of an initial negative response, corresponding to the a-wave, originating from photoreceptors, rapidly followed by a positive response, corresponding to the b-wave, originating from bipolar cells (Fig. 3, red and blue arrowheads). A general trend toward reduced amplitude responses for Fbn1 Tsk/+ mice compared to wt could be seen at six months, becoming more prominent at 16 months of age (Fig. 3), consistent with the reduced visual acuity developing at 16 months of age (Fig. 2).
Amplitudes and latencies (stimulus-to-peak time intervals) were analyzed at a flash intensity of -4 log cd·s/m^2, which gave consistent threshold responses (pSTR and nSTR), and at 0 log cd·s/m^2, which resulted in fully developed a- and b-waves. At six months of age, no statistically significant differences in the pSTR, nSTR, a- or b-wave responses were found in the response amplitudes (Fig. 4A-D) or latencies (Fig. 4E-H), using one-way ANOVAs with post hoc Bonferroni's multiple comparison tests (F statistics indicated in the figures). However, at 16 months of age, differences between Fbn1 Tsk/+ and wt mice became significant.
At 16 months of age, the pSTR amplitude of Fbn1 Tsk/+ mice was 58% lower than wt (Fbn1 Tsk/+: 28.2 ± 15.3; wt: 67.1 ± 23.6 µV, p = 0.0001) and latency was increased 13% (Fbn1 Tsk/+: 138.9 ± 23.8; wt: 121.4 ± 8.6 ms, p = 0.0012) as compared to wt (Fig. 4A,E). Amplitude of the nSTR was also significantly reduced in 16-month-old Fbn1 Tsk/+, with a 51% reduction compared to wt (Fbn1 Tsk/+: 13.7 ± 7.0; wt: 28.1 ± 13.1 µV, p = 0.031), although latency was not different (Fig. 4B,F). Since the pSTR and nSTR largely arise from RGC responses (Saszik et al., 2002; Bui and Fortune, 2004; Alarcón-Martínez et al., 2010), these results suggest an RGC-specific dysfunction developing with increased age in Fbn1 Tsk/+ mice. Amplitudes of the a- and b-waves decreased with age, but comparing Fbn1 Tsk/+ and wt mice, there were no statistically significant differences (p > 0.11; Fig. 4C,D). However, at 16 months of age, there was a trend toward lower amplitude b-waves for Fbn1 Tsk/+ mice compared to wt (Fig. 4D). Since the b-wave is generated by bipolar cells, which are the main drivers of RGC responses (Pang et al., 2003; Abd-El-Barr et al., 2009), lower amplitude b-waves could account for the reduced pSTR (Frankfort et al., 2013; Khan et al., 2015) and nSTR amplitudes seen in Fbn1 Tsk/+ mice. Normalization of the nSTR amplitude to the b-wave amplitude resulted in loss of significance of the differences between Fbn1 Tsk/+ and wt mice (Fig. 4J). However, normalization of the pSTR amplitude to the b-wave amplitude resulted in significant reductions (48%) at 16 months, similar to the unnormalized data (58% reduction; compare Fig. 4I with Fig. 4A), indicating that the reduction of pSTR was not caused by reduced bipolar cell responses and was therefore likely due to RGC dysfunction.
RGC density not affected by microfibril defect
To determine whether the functional deficit of RGCs indicated by the reduced pSTR could be caused by RGC degeneration, immunofluorescent staining of retinal whole mounts was performed for the RGC-specific marker Brn3a (Fig. 5). RGC cell density proximal, medial, and distal from the optic nerve in the superior nasal (SN), superior temporal (ST), inferior nasal (IN), and inferior temporal (IT) quadrants was determined by manually counting Brn3a+ cells in representative areas (Fig. 5A). In general, for both Fbn1 Tsk/+ and wt retinas, RGC density was lower distal from the optic nerve (Fig. 5B). However, comparing Fbn1 Tsk/+ and wt retinas, there were no significant differences in RGC density in any quadrant or at any distance from the optic nerve (one-way ANOVA, F(23,264) = 18.55, followed by Bonferroni's multiple comparisons test, p > 0.27). These findings indicate that microfibril deficiency does not result in loss of RGCs in the retina.
Accumulation of phosphoneurofilaments in RGC cell bodies and axons is an indicator of RGC axon transport deficits and early degeneration. In 16-month-old mice, enumeration of RGCs fluorescently labeled with the SMI-31 antibody, which recognizes phosphorylated heavy chain neurofilaments, revealed few positive cells and no significant difference between Fbn1 Tsk/+ and wt retinas (Fbn1 Tsk/+: 16 ± 10, n = 8; wt: 11 ± 6 SMI-31+ RGCs/retina, n = 9; p = 0.6; data not shown). Prominent labeling by phosphoneurofilament staining was observed for intraretinal RGC axons, a few of which displayed a beads-on-a-string appearance, but with no apparent differences between Fbn1 Tsk/+ and wt (data not shown). These results suggest that microfibril deficiency does not result in impaired axon transport.
Figure 3. Scotopic ERG waveforms from wt and Fbn1 Tsk/+ mice at six and 16 months of age. Scotopic ERG waveforms (amplitude vs time) of individual mice (gray traces) and average waveforms (black traces) in response to flash intensities of -5, -4, -3, -2, -1, and 0 log cd·s/m^2 are shown, with each row of waveforms resulting from the same stimulus intensity, indicated on the left side of the figure. Stimulus onset is indicated by vertical dotted lines. Representative temporal locations of the pSTR and nSTR are indicated by green and orange arrowheads, respectively, for a stimulus intensity of -4 log cd·s/m^2, and the a- and b-waves are indicated by red and blue arrowheads, respectively, for 0 log cd·s/m^2. Representative amplitude measurements for pSTR, nSTR, a-wave, and b-wave responses are shown in the first and second columns at -4 and 0 log cd·s/m^2. Latencies are defined as the time between stimulus onset (vertical dotted lines) and the peak or trough response. Vertical and horizontal scales for each row of responses are shown to the right of the figure. Genotype and age of mice are indicated at the top, with the number in each group indicated at the bottom of each column.
Optic nerve enlargement in Fbn1 Tsk/+ mice
In glaucoma, RGC axon degeneration in the optic nerve precedes loss of RGC cell bodies in the retina (Calkins, 2012). To determine whether RGC axons are affected by microfibril abnormality, osmified and epon-embedded optic nerves were cross sectioned perpendicular to the long axis of the nerve 1.5 mm from the globe, stained with PPD, and examined by high-resolution light microscopy (Fig. 6). For both lines of mice, optic nerves were larger at 16 months as compared to six months of age, with cross-sectional area increasing by 15% in wt (p = 0.0022) and by 22% in Fbn1 Tsk/+ (p = 5 × 10^-6) over the 10-month interval (Fig. 6B), indicating an age-dependent expansion of the optic nerve. Optic nerves from Fbn1 Tsk/+ mice were significantly larger than those from wt mice at both time points (Fig. 6B): 14% larger at six months (Fbn1 Tsk/+: 0.120 ± 0.009, wt: 0.105 ± 0.009 mm^2, p = 0.017) and 21% larger at 16 months (Fbn1 Tsk/+: 0.147 ± 0.017, wt: 0.122 ± 0.015 mm^2, p < 10^-6). These findings show that microfibril deficiency leads to accelerated enlargement of the optic nerve.
Figure 4 (caption, beginning truncated in source): [...] are shown for wt (blue symbols) and Fbn1 Tsk/+ mice (red symbols) at six and 16 months of age, with group means indicated by horizontal black lines. At 16 months, statistically significant reductions in amplitude of Fbn1 Tsk/+ compared to wt (black brackets) were found for pSTR (A) and nSTR (B), but not for a-wave or b-wave responses (C, D). Significant age-related decline was found for the pSTR of Fbn1 Tsk/+ mice (A, red brackets) and the a- and b-waves of both genotypes (C, D, red and blue brackets). For Fbn1 Tsk/+ compared to wt, significantly increased latencies were found at 16 months (black brackets) for pSTR (E), a-wave (G), and b-wave (H). Comparing six- to 16-month-old mice, significant age-related increased latency was found for Fbn1 Tsk/+ mice (red brackets) for the pSTR (E), a-wave (G), and b-wave (H) responses, while wt had decreased a-wave latency (G, blue bracket). Amplitude of the pSTR normalized to the b-wave is significantly lower for Fbn1 Tsk/+ mice compared to wt at 16 months of age (I, black bracket), but not for nSTR (J). Stimulus intensities (log cd·s/m^2) and F statistics are indicated above each panel.
Despite lower axon density, the total number of axons in Fbn1 Tsk/+ nerves was not different from wt, either at six or 16 months of age (Fig. 6D). The similar numbers of axons in Fbn1 Tsk/+ and wt nerves indicate that the reduced axon density is not due to loss of RGC axons, but instead is related to increased expansion of Fbn1 Tsk/+ optic nerves. Consistent with this, axon density was inversely correlated with optic nerve area (Fig. 6E). The slopes of the density versus area regression lines for wt and Fbn1 Tsk/+ optic nerves (Fig. 6E) were not significantly different (p = 0.41), indicating that the relationship between axon density and nerve area was similar for wt and Fbn1 Tsk/+ optic nerves.
Expansion appeared to occur for small, medium, and large axons, as indicated by a rightward shift of the cumulative plot of axon sizes, which becomes more pronounced with age (Fig. 7C). Although the shift toward larger axons could result from loss of small axons, this is not the case, since there were similar numbers of axons in wt and Fbn1 Tsk/+ nerves (Fig. 6D). These findings show that optic nerve axons are substantially enlarged at advanced age in Fbn1 Tsk/+ mice. Axon diameter was significantly correlated with nerve area (Fig. 7D). The slopes of the axon diameter versus nerve area regression lines for wt and Fbn1 Tsk/+ optic nerves (Fig. 7D, blue and red lines) were not significantly different (p = 0.26), indicating that the dependence of axon diameter on nerve area was similar for wt and Fbn1 Tsk/+ optic nerves.
Thickness of the myelin sheath was investigated by measuring the inner diameter (axon only) and outer diameter (axon plus myelin sheath) for 852 axons from four wt nerves and 1679 axons from seven Fbn1 Tsk/+ nerves at 16 months of age. The median myelin thickness was ~5% larger for Fbn1 Tsk/+ axons as compared to wt (Fbn1 Tsk/+: 0.273; wt: 0.260 µm; p < 0.0001, Mann-Whitney test, U = 641,868; data not shown). The g-ratio, which is the inner axon diameter divided by the outer axon diameter, is an established metric for assessing axonal caliber and myelination (Chomiak and Hu, 2009). The g-ratio of Fbn1 Tsk/+ mice was ~1% lower than wt (Fbn1 Tsk/+: 0.613; wt: 0.620; p = 0.0023, Mann-Whitney test, U = 662,255; data not shown).
Absence of optic nerve gliosis in Fbn1 Tsk/+ mice
Glial activation or redistribution was not evident by inspection of PPD-stained optic nerve cross sections (Fig. 6A). The percentage area of the optic nerve occupied by glia was similar in wt and Fbn1 Tsk/+ mice at 16 months of age (Fbn1 Tsk/+: 9.6 ± 2.5%, n = 11; wt: 10.7 ± 3.1%, n = 13; p = 0.31; data not shown).
Figure 6. Enlargement of optic nerves in Fbn1 Tsk/+ mice. In optic nerve cross sections (A), Fbn1 Tsk/+ mice appeared to have larger nerves and axons compared to wt. Optic nerve area significantly increased with age for both wt and Fbn1 Tsk/+ mice and was significantly larger at both ages for Fbn1 Tsk/+ mice as compared to wt (B). Average axon density, determined manually by counting axons within boxes (A), was significantly lower in Fbn1 Tsk/+ mice (C). However, the total number of axons was not different in wt and Fbn1 Tsk/+ nerves (D). Axon density was significantly correlated with nerve area (E; triangles, six months; circles, 16 months). Blue symbols, wt; red symbols, Fbn1 Tsk/+. Closed symbols, mean ± SD. Numbers of mice indicated in italics below each group (B-D). The best-fit line, R^2, p value, and number of data points are shown for wt and Fbn1 Tsk/+ mice separately and for all mice combined (E, red, blue and black lines and text, respectively).
Figure 7 (caption, beginning truncated in source): [...] Comparison of median axon diameter for individual mice shows a similar age-dependent increase in axon size in Fbn1 Tsk/+ (red) as compared to wt (blue) optic nerves (B; open symbols: individual mice, closed symbols: mean ± SD, numbers of mice indicated in italics below each group). Cumulative plots of the axon diameters shown in panel A demonstrate a shift to larger axons for all size categories defined by the quartiles of wt axons (small, medium, and large; orange dotted lines) for Fbn1 Tsk/+ (red) as compared to wt mice (blue) at 16 months of age (C). Axon diameter was significantly correlated with optic nerve area (D; blue symbols, wt; red symbols, Fbn1 Tsk/+; triangles, six months; circles, 16 months). The best-fit line, R^2, p value, and number of data points are shown for wt and Fbn1 Tsk/+ mice separately and for all mice combined (D, red, blue and black lines and text, respectively).
Thinner elastic fiber layer in pia mater of Fbn1 Tsk/+ mice
The pia mater ensheathes the optic nerve and contributes significantly to its biomechanical properties (Feola et al., 2016; Hua et al., 2017). Luna staining of the glial lamina region revealed a regular network of longitudinal and radial elastic fibers (Fig. 8A,B) that occupies nearly the full thickness of the pia mater. Although structural abnormalities of elastic fibers in the skin and aorta of Fbn1 Tsk/+ mice have been reported, we could not discern consistent differences in elastic fiber morphology in the pia mater. However, the pia mater of Fbn1 Tsk/+ nerves was 22% thinner than wt at six months (Fbn1 Tsk/+: 11.2 ± 1.7 vs 14.5 ± 1.5 µm, p = 0.0057) and 25% thinner at 16 months (Fbn1 Tsk/+: 10.2 ± 1.7 vs 13.7 ± 2.1 µm, p = 0.0047; Fig. 8C,D). Thinning of the pia mater in Fbn1 Tsk/+ mice suggests that microfibril deficiency could result in substantial alteration of the mechanical properties of the optic nerve. This is relevant to glaucoma, since the stresses and strains experienced by RGC axons, which are thought to initiate RGC degeneration, are dependent on the mechanical properties of optic nerve tissues.
Discussion
Above-normal IOP is an important risk factor, but not a requirement, for developing glaucoma (Davis et al., 2016; Jonas et al., 2017). Since dogs with the glaucoma-causing ADAMTS10 mutations have elevated IOP (Gelatt et al., 1977; Ahonen et al., 2014), we anticipated that mice with microfibril deficiencies would as well. However, the IOP of Fbn1 Tsk/+ mice tended to be lower than, but not statistically different from, controls (Fig. 1). Therefore, the reduced retinal function and optic nerve enlargement in Fbn1 Tsk/+ mice occurred in the absence of elevated IOP.
Many human patients develop glaucomatous optic nerve damage with IOP in the normal range, which can be referred to as normal tension glaucoma (Davis et al., 2016; Jonas et al., 2017), and mouse models of glaucoma without elevated IOP have been described (Harada et al., 2007; Mi et al., 2012). Normal tension glaucoma may reflect increased sensitivity to pressure-induced mechanical stress that is thought to contribute to reduced axonal transport and subsequent Wallerian degeneration of RGC axons (Calkins, 2012; Nickells et al., 2012; Davis et al., 2016). Glaucoma-related phenotypes in Fbn1 Tsk/+ mice may be specific to normal tension glaucoma and may reflect differences in biomechanical properties of relevant tissues, such as the optic nerve, that could lead to increased susceptibility to glaucomatous damage, even at normal IOP. The Tsk mutation of Fbn1 is known to alter the biomechanical properties of skin (Menton et al., 1978), and our finding of a thin pia mater suggests abnormal biomechanics of the optic nerve.
Fbn1 Tsk/+ mice have an accelerated age-dependent decrease of visual acuity, as measured by the optomotor response, that becomes quite pronounced at 16 months of age (Fig. 2). The optomotor response is a reflex initiated by neuronal input to the brainstem accessory optic system from direction-selective RGCs (Dhande et al., 2013; Leinonen and Tanila, 2018). In mouse models of glaucoma, degeneration of RGCs has been shown to coincide with reduced visual acuity measured by the optomotor response (Burroughs et al., 2011; Wong and Brown, 2012; Grillo and Koulen, 2015). The accelerated reduction in the optomotor response seen in our experiments with Fbn1 Tsk/+ mice is consistent with early-stage glaucomatous degeneration of RGCs.
Coincident with decreased visual acuity, RGC-specific pathology in Fbn1 Tsk/+ mice is indicated by the substantial reductions in the pSTR and nSTR amplitudes (Bui and Fortune, 2004; Alarcón-Martínez et al., 2010) that develop with advanced age (Fig. 4). Although glaucoma is primarily RGC specific, outer retinal decline has been described in human patients (Nork et al., 2000; Velten et al., 2001) and in experimental models of glaucoma (Bayer et al., 2001; Fernández-Sánchez et al., 2014). We also detected outer retinal dysfunction in Fbn1 Tsk/+ mice, as suggested by a trend toward decreased a- and b-wave amplitudes, although retina thickness from the outer plexiform layer to the photoreceptor end tips was not significantly different in a subset of 16-month-old mice as measured by SD-OCT (wt: 81.2 ± 11.8, n = 9; het: 71.7 ± 21.0 µm, n = 8; p = 0.28; data not shown). Because it is derived from bipolar cells, which supply the primary inputs to RGCs, reduction of the b-wave would reduce the pSTR and nSTR. However, our data demonstrate an RGC-specific deficit, since the pSTR normalized to the b-wave is significantly reduced (Fig. 4I).
Decreased visual function of Fbn1 Tsk/+ mice occurs without loss of RGC cell bodies in the retinas (Fig. 5). RGC soma density was determined by immunostaining whole-mount retinas for Brn3a, a specific marker for RGCs that has been used to quantify RGC loss due to elevated IOP (Salinas-Navarro et al., 2010). In the rat retina, Brn3a immunostaining labels >90% of RGCs that project to the superior colliculus (Nadal-Nicolás et al., 2009), and ~85% of RGCs are labeled in the mouse retina (Galindo-Romero et al., 2011; Schlamp et al., 2013). Although Brn3a is not expressed by all morphologically identifiable RGC subtypes (Badea and Nathans, 2011), it is unlikely that a reduction of RGCs was missed due to lack of labeling of a specific sub-population of RGCs, since the number of RGC axons in the optic nerve was also not reduced in Fbn1 Tsk/+ mice, even at 16 months of age (Fig. 6D).
Reduced RGC-specific ERG responses that precede RGC loss have been demonstrated in mouse and rat models of IOP-induced glaucoma (Fortune et al., 2004; Holcombe et al., 2008; Frankfort et al., 2013; Khan et al., 2015). Due to the hierarchical organization of the retina, the STR of the RGCs is dependent on the progressive convergence of retinal signals (Saszik et al., 2002; Pang et al., 2003). Shrinkage of RGC dendritic arbors, which has been reported to occur early in experimental glaucoma, long before cell death (Shou et al., 2003; El-Danaf and Huberman, 2015), may affect signal transduction from bipolar cells to RGCs, which could contribute to reduction of the STRs. The accelerated decline of the pSTR amplitude found in Fbn1 Tsk/+ mice would be consistent with an early phase of glaucoma preceding structural degeneration of RGCs (Saleh et al., 2007; Holcombe et al., 2008; Fry et al., 2018). The attenuated RGC responses could contribute to the observed decline in visual acuity. Previous studies with a mouse glaucoma model with RGC-specific retinal pathology have shown decreased visual acuity (Wong and Brown, 2012), similar to our findings with the microfibril deficient mice.
A striking feature of the microfibril deficient mice is the progressive enlargement of the optic nerve and optic nerve axons (Figs. 6, 7). Four recent studies reported similar expansion of optic nerve axons. Stahon et al. (2016) found an ~12% increase in median axon diameter in normal C57BL/6 mice at 12 versus one month of age, which coincided with decreased axon density. Zhu et al.
(2018) reported age-dependent axon expansion in the optic nerves of C57BL/6 mice, with an ~6.5% increase in diameter from three to 30 months of age, also coinciding with decreased axon density. In our experiments, axon diameter of Fbn1 Tsk/+ mice increased 16% from six to 16 months of age, while enlargement of axons in wt controls was not observed. Working with the DBA/2J mouse model of glaucoma, Cooper et al. (2016) reported age-dependent enlargement of the optic nerve that correlated with enlargement of the optic nerve axons and decreased axon density. In normal C57BL/6, optic nerve size did not significantly increase with age in that study, suggesting that the enlargement seen in DBA/2J mice is specific to glaucoma. Importantly, expansion of the optic nerve and optic nerve axons was also shown in DBA/2J in comparison to control DBA/2J wt-Gpnmb+ mice. Axon expansion was significantly greater than control in eight- to nine-month-old DBA/2J, an age that corresponds to early glaucoma, in which deficits in RGC function and axonal transport are observed without overt loss of axons, further supporting axon expansion as a component of early glaucoma. Our data, together with the recent reports discussed above (Cooper et al., 2016; Stahon et al., 2016; Zhu et al., 2018), suggest that accelerated age-dependent expansion of optic nerve axons may be an under-appreciated component of aging and glaucoma pathogenesis. In microfibril deficient mice, the accelerated expansion of the optic nerve and optic nerve axons may represent an early stage of glaucoma occurring in the absence of stress induced by high IOP.
The shift in the distribution of axon diameters toward larger axons in Fbn1 Tsk/+ mice (Fig. 7A) must be due to enlargement of existing axons rather than selective loss of smaller axons, because the number of axons does not decrease (Fig. 6D). General enlargement rather than selective loss is also suggested by the cumulative plots of axon diameter at 16 months of age (Fig. 7C), in which a shift toward larger axons can be seen for all size categories (small, medium, and large). In mouse glaucoma models, investigations into the differential vulnerability of RGC subtypes have suggested that α-like RGCs, which have large somas and axons, may have greater susceptibility to pressure-induced damage than do other RGC types (El-Danaf and Huberman, 2015; Ou et al., 2016). Studies in humans and non-human primates have also suggested that larger optic nerve axons are more susceptible to degeneration in glaucoma (Quigley et al., 1988; Quigley, 1999). Mitochondrial pathology such as reduced density, larger size, and abnormal morphology has been shown to occur in enlarged axons (Cooper et al., 2016; Stahon et al., 2016; Zhu et al., 2018), accompanied by lower axonal ATP and increased oxidative stress (Stahon et al., 2016). Although we did not investigate mitochondria, it is plausible that similar changes occur in the enlarged axons of aged Fbn1 Tsk/+ mice. Mitochondrial pathology and oxidative stress are likely important contributors to RGC degeneration in glaucoma (Williams et al., 2017; Harun-Or-Rashid et al., 2018). With microfibril deficiency, axon enlargement may result in increased susceptibility to pressure-induced damage.
Action potential propagation velocity is affected by axon caliber and thickness of the myelin sheath, with larger axons and thicker sheaths driving faster propagation (Rushton, 1951; Smith and Koles, 1970; Chomiak and Hu, 2009). In our study, the median axon diameter of microfibril deficient mice was 16% larger and the median myelin sheath was 5% thicker than wt, which could potentially affect axon function. However, the g-ratio (inner diameter/outer diameter) was essentially unchanged, with a slight, although statistically significant, 1% reduction. In myelinated nerves, the g-ratio is optimized for required action potential transmission rates and for energy and space constraints (Rushton, 1951; Smith and Koles, 1970; Chomiak and Hu, 2009). The conserved g-ratio in Fbn1 Tsk/+ mice suggests a compensatory change in myelin sheath thickness as axons expanded to maintain an optimal g-ratio, with the likely result that action potential propagation speed would be essentially unaffected. Consistent with this, Stahon et al., who found similar age-dependent reductions in g-ratio of 1.3-3.2%, investigated action potential propagation speed by measuring compound action potentials and found similar responses in optic nerves from young and old mice, although response amplitudes were larger in older mice with enlarged axons (Stahon et al., 2016). Although we did not measure compound action potentials, the nearly unchanged g-ratio suggests that action potential propagation speeds in Fbn1 Tsk/+ mice would be similar to wt.
Thinning of the elastic fiber-rich pia mater in the microfibril deficient mice suggests a possible mechanism for enlargement of their optic nerves. The subarachnoid space between the pia and dura mater is filled with cerebrospinal fluid, the pressure of which, together with IOP, contributes to the biomechanical forces experienced by optic nerve axons (Sigal et al., 2007; Feola et al., 2017). The relatively stiff pia mater plays a major role in the mechanical stability of the optic nerve and makes significant contributions to the forces experienced by axons, particularly in the post-laminar region (Feola et al., 2016; Hua et al., 2017).
One function of the pia mater may be to absorb mechanical stress due to an outward pressure gradient between the inner optic nerve and the subarachnoid space. A positive interstitial fluid pressure within the retrolaminar optic nerve was directly measured by Morgan et al. (1998). The resulting pressure gradient spans the pia mater, indicating that it is completely responsible for bearing the resulting stress (Morgan et al., 1998; Balaratnasingam et al., 2009). Using Laplace's law for thin-walled cylinders, S = PR/T, where R is the radius of the optic nerve and P the internal pressure, the circumferential stress (S) experienced by the pia mater is inversely proportional to its thickness (T). Thinning of the pia mater would result in increased circumferential stress, which would tend to force expansion of the optic nerve. Expansion of the optic nerve in the microfibril deficient mice may result both from increased circumferential stress and from altered biomechanical properties of the pia mater.
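The proportionality is easy to see numerically. In the sketch below, the 2 mmHg trans-pial gradient and the 175 µm nerve radius are hypothetical values chosen for illustration; only the two thicknesses echo the measured wt and Fbn1 Tsk/+ pia mater values:

```python
def pia_circumferential_stress(p: float, radius: float, thickness: float) -> float:
    """Laplace's law for a thin-walled cylinder: S = P * R / T (units of P)."""
    return p * radius / thickness

# Same assumed gradient and radius; only pia thickness differs:
print(pia_circumferential_stress(2.0, 175.0, 14.0))   # ~25, wt-like thickness
print(pia_circumferential_stress(2.0, 175.0, 10.5))   # ~33, Tsk-like: ~33% more stress
```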
The extensive elastic fiber network observed in the pia mater by Luna staining (Fig. 8) is likely to be a major determinant of the biomechanical properties of the pia mater. Although we did not observe consistent alterations of elastic fiber structure in the pia mater, elastic fibers are abnormal in the lungs and skin of Fbn1 Tsk/+ mice (Akita et al., 1992; Lemaire et al., 2004). Increased pliability of meningeal tissues in the context of microfibril deficiency is strongly suggested by the common finding of expansion of the spinal dura (dural ectasia) in patients with FBN1 mutations (Attanasio et al., 2013; Sheikhzadeh et al., 2014). It is interesting to note that a "soft" pia mater has been identified as an important risk factor for optic nerve strain (Feola et al., 2016; Hua et al., 2017), suggesting that an analogous increased pliability of the pia mater would be highly relevant to glaucoma susceptibility.
A significant correlation was found between the size of the optic nerve and the size of the optic nerve axons (Fig. 7D), indicating that axon caliber may be dependent on the size of the optic nerve. This suggests a possible mechanism of axon expansion, whereby reduced interstitial pressure due to relaxation of the pia mater alters mechanical forces experienced by axons. Axons are highly responsive to mechanical stimulation, and it has been suggested that mechanical forces may contribute to determination of axon diameter (Fan et al., 2017). Our results suggest that in the optic nerve, mechanical forces may contribute to regulation of axon caliber.
In summary, we tested the hypothesis that microfibril deficiency causes glaucoma. While the microfibril deficient Fbn1 Tsk/+ mice did not develop progressive RGC degeneration, which is a hallmark of glaucoma, they did have phenotypes similar to early glaucoma such as progressive loss of RGC function. Expansion of the optic nerve axons in microfibril deficient mice, possibly related to thinning of the pia mater, could result in increased susceptibility to damage due to stresses such as elevated IOP.
"year": 2018,
"sha1": "8f5eca5b227770ade057347c862956f3a55b2507",
"oa_license": "CCBY",
"oa_url": "https://www.eneuro.org/content/eneuro/5/5/ENEURO.0260-18.2018.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9cfdd96c7f9afbbc2bc15b977513305531bc37ad",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
An Efficient Forwarding Capability Evaluation Method for Opportunistic Offloading in Mobile Edge Computing
Opportunistic offloading can be utilized to offload computing tasks and traffic data in Mobile Edge Computing (MEC). To improve the ratio of successful data offloading and reduce unnecessary data redundancy in the opportunistic forwarding process, several methods of evaluating a device's forwarding capability have been proposed. However, most of these methods consider neither the temporal impact of device mobility nor the efficiency cost of the capability computation process. To address these problems, we propose a Transient-cluster-based Capability Evaluation Method (TCEM) to evaluate a device's data forwarding capability. TCEM can be divided into two steps. The first step aims to reduce computational complexity by evaluating a device's possibility of contacting the destination within a time constraint, based on the transient cluster generated by our proposed Transient Cluster Detection Method (TCDM). The second step is to calculate a device's probability of directly and indirectly forwarding data to the destination. This probability, as a metric of a device's forwarding capability, can be used in different data forwarding strategies. Simulation results demonstrate that the TCEM-based data forwarding strategy outperforms other data forwarding strategies in terms of the ratio of data delivery ratio to data redundancy.
Introduction
Opportunistic offloading, as an emerging communication paradigm, can be used in MEC to offload computation or traffic data by leveraging the opportunistic network formed by mobile devices [1-3]. For example, in computation offloading, an edge mobile device with limited computing resources can offload computing tasks through opportunistic communication to other nearby devices with idle computing resources; in traffic offloading, edge devices that cache repeatedly requested or popular contents transfer the contents to subscribers by opportunistic communication to reduce network traffic and devices' energy consumption. Opportunistic offloading is based on opportunistic connections between mobile devices, which introduce random delays; thus it is suitable for non-real-time contents such as e-mail, podcasts, and weather forecasts, which can tolerate some delay in delivery.
To maximally offload computing tasks and traffic data within a time constraint, the ratio of successful data delivery over opportunistic data forwarding needs to be maximized. In opportunistic offloading, the offloading device can forward the data to opportunistically encountered devices via the store-carry-forward mechanism to increase the ratio of successful delivery. Forwarding data to every opportunistically encountered device until the deadline of the data or the destination is reached maximizes the data delivery ratio, but this process produces large amounts of redundant data carried by devices that cannot contact the destination within the valid time of the data. These redundant data consume large amounts of devices' limited resources, such as storage capacity and battery life. Therefore, in this paper, we study how to reduce the redundant data in the opportunistic forwarding process while maximizing the ratio of successful data delivery, when a device sends its cached data with a time constraint to another requesting device. The key challenge is how to make strategies for replicating data to opportunistically encountered devices.
Recent studies make data forwarding decisions based on comparing devices' data forwarding capabilities. Devices carrying data only replicate the data to devices with higher capability, or to the device with the highest capability. Although data redundancy is reduced through this approach, there exist some problems in evaluating a device's data forwarding capability. For example, the evaluation methods of [4, 5] are derived from communities formed by aggregating contact information. However, contact patterns are time-varying. By aggregating contact information into an aggregated contact graph, some important contact information, e.g., burst behaviour, may be lost, weakening the accuracy of evaluating forwarding capability. Additionally, some evaluation methods, such as EER [6], need to obtain global contact information in advance, which is unrealistic in large-scale mobility scenarios.
In this paper, to optimize the data delivery ratio and to reduce unnecessary redundancy, we propose a distributed method, the Transient-cluster-based Capability Evaluation Method (TCEM). The first step of TCEM is to judge the possibility of a device contacting the destination within the time constraint of the data, using our proposed Transient Cluster Detection Method (TCDM). The second step is to calculate the device's probability of encountering the destination within the time constraint of the data as the evaluation metric. The contributions of this method can be summarized as follows: (i) Our proposed TCEM is distributed. Every device uses this method to evaluate its forwarding capability based on the transient contact information it obtains.
Compared with centralized methods, which need to obtain global contact information in advance, our proposed method is more feasible under the mobility circumstances of MEC. (ii) The proposed cluster detection method, TCDM, is built from the time-varying contact patterns of devices.
Compared with two other transient cluster detection methods, DRAFT [7] and CCM [8], TCDM is simpler and more efficient. (iii) Compared with methods that directly calculate every encountered device's probability as the forwarding capability evaluation metric, our proposed TCEM effectively reduces the computational complexity, because we first evaluate an encountered device's possibility of forwarding data to the destination, and only devices with high possibility calculate the probability.
(iv) TCEM accurately evaluates a device's capability of forwarding data to a specific device within a time constraint, since it is based on individual pairs' Inter Contact Time (ICT) distributions and considers the influence of the transient cluster's duration in the probability calculation process.
The rest of the paper is organized as follows. Section 2 gives a brief overview of the related work. In Section 3, we give an overview of a TCEM-based data forwarding strategy. Section 4 presents our proposed TCEM in detail; it contains the TCDM and the probability calculation. Section 5 evaluates the performance of our approach by simulation, and Section 6 concludes the paper.
Related Work
Opportunistic data forwarding protocols originate from Epidemic routing [9], which floods the network. Although this flooding-based algorithm can achieve the highest packet delivery ratio, it causes large amounts of redundant data copies in the network. Later studies have been devoted to developing forwarding protocols that approach the performance of Epidemic routing at lower cost, measured by the number of data copies in the network. Currently, the most successful approaches for opportunistic forwarding are social-aware strategies [10]. The community structure formed by exploiting the social contact patterns of nodes has been widely utilized in methods of evaluating a node's forwarding capability, since it is more reliable and less susceptible to the randomness of human mobility. In the Bubble Rap [4], SimBet [11], MDDPC-based [12], and RPC-based [13] data forwarding strategies, the data forwarding capability of a node is represented by its social importance, that is, the degree to which it facilitates communication among other nodes; these metrics are evaluated based on communities. To increase the efficiency of the routing decision process, some schemes combining multiple features have been proposed. For example, SimbetTS [14] adds tie strength to the utility calculation; Oi [15] and SCORP [16] combine users' social ties with their interests for social-aware opportunistic routing; and GROUPS-Net [17] combines social awareness with a probabilistic approach, using group meetings as a measure of social context, to improve the data delivery ratio. SAMPLER [18] uses the social community and social popularity metrics introduced in the original Bubble Rap [4] scheme and adds individual mobility and points of interest within a region, helping to achieve a high delivery ratio and reduced network overhead. LASS [19] is proposed taking into account the differences in members' internal activity within each community; it utilizes different levels of local activity within communities to realize efficient data forwarding. However, in these strategies the community is formed based on previous cumulative contact knowledge, and the transient characteristics of node contact patterns, which influence data forwarding performance, are ignored.
References [6, 7, 20-24] consider time-varying contact patterns between nodes. Wei et al. [20] develop a novel evaluation method to analytically predict the forwarding capability of a mobile node based on a proposed transient community structure. An efficient temporal closeness and centrality-based data forwarding strategy, TCCB [23], is proposed by predicting nodes' future temporal social contact patterns. However, these two schemes are unsuitable for our situation: since [20] evaluates a node's data forwarding capability based on its centrality within its transient community and [23] based on its centrality among all nodes in the network, they do not reflect a node's capability of forwarding data to a specific destination. The method DRAFT [7] measures the forwarding capability of a node by judging whether the node's 2-hop cluster contains the destination, where the cluster is formed based on participants' cumulative or decayed contact durations. SimBetAge [24] improves upon SimBet by adopting an aged graph to calculate the social metrics dynamically. These approaches are not efficient enough, since they improve the performance of data forwarding by replicating data to all high-possibility nodes, which still produces a lot of redundant data. TSM [21] evaluates a node's forwarding capability based on three types of time-varying social metrics, betweenness centrality, similarity, and tie strength, which are derived from analysing two sets of social data. Reference [22] proposes the CAOF scheme, which includes intercommunity and intracommunity phases: in the intercommunity phase, the node with higher global activeness and source-to-destination probability is selected to serve as the relay, while in the intracommunity phase the forwarding decisions are determined by local metrics. EER [6] proposes to evaluate a node's forwarding capability based on its encounter value (EV), which is the number of participants directly encountered within the data's valid time. These four methods are impractical in some cases, since they need to obtain global contact information within the data's valid time in advance. Comparatively, our proposed TCEM is distributed and effectively reduces the computational complexity, because we first evaluate an encountered device's possibility of forwarding data to the destination, and only devices with high possibility calculate the probability.
The second step of our proposed TCEM needs to calculate the probability that a node transfers data to a specific node within the data's valid time. The ICT is defined as the time elapsing between two consecutive encounters of two devices; thus it can be used to calculate this probability. Most probability calculation methods use the aggregate ICT distribution (a power law with an exponential tail, or an exponential), which is obtained by considering the samples from all pairs together. However, Hernández-Orallo et al. [25] show that using the aggregate ICT distribution to represent individual pairs' ICT distributions is not correct in general if the network is heterogeneous. Hence, in this paper we consider the differences among individual pairs' ICT distributions in the probability calculation process, to evaluate the forwarding capability of a node more accurately.
TCEM-Based Data Forwarding Strategy
In the process of opportunistic data forwarding, a device carrying data makes its data forwarding decisions based on encountered devices' data forwarding capabilities. In fact, the method of evaluating a device's data forwarding capability within a time constraint is independent of the data forwarding strategy, and different forwarding capability evaluation methods can be applied in the same data forwarding strategy. For example, [8, 20] both forward data to devices with higher capability, whereas they use different forwarding capability evaluation methods. In this section, we apply our proposed capability evaluation method, TCEM, in a data forwarding strategy in which the participant carrying data forwards the data to participants with higher capability.
The data forwarding strategy is composed of two parts. First, it uses TCEM to evaluate the connected device's capability of forwarding the data to the destination; then it decides whether to replicate the data to this device based on the evaluation result. The detailed process is illustrated as follows.
When a device u1 carrying data connects to a device u2, u1 first checks whether u2 already carries this data. If it does, u1 simply skips u2; otherwise, u1 executes the data forwarding strategy illustrated in Figure 1.
In part A, the strategy uses TCEM to evaluate u2's data forwarding capability. TCEM consists of two steps. In the first step, it judges whether u2 has the possibility of transferring the data to the destination within the time constraint, based on transient clusters. Only when u2's transient cluster or one of its adjacent transient clusters contains the destination does the second step proceed; otherwise, u1 does not replicate the data to u2 and the strategy ends. The detailed Transient Cluster Detection Method (TCDM) is described in Section 4.1. The second step uses the ICT distributions between participants to calculate the probability P2 that u2 forwards the data to the destination within the data's time constraint; the calculation process is shown in Section 4.2. In our proposed method, this probability serves as the metric of a participant's forwarding capability.
In part B, the strategy determines whether to replicate the data to u2 according to u2's forwarding probability. If u2's probability of contacting the destination within the data's valid time is higher than u1's, u1 replicates the data to u2; otherwise, it does not. Devices carrying data execute this forwarding strategy, sketched below, whenever they opportunistically connect to devices not carrying the data. The process continues until the destination receives the data within the data's valid time.
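To make the decision concrete, here is a minimal Python sketch of the per-contact logic described above. It is not code from the paper: the `Device` structure, its field names, and the `p_of` callback (standing in for the second-step TCEM probability) are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    # Illustrative per-device state; these field names are not from the paper.
    tc: set = field(default_factory=set)       # transient cluster (high-contact-rate peers)
    acs: list = field(default_factory=list)    # adjacent transient clusters (sets of ids)
    carried: set = field(default_factory=set)  # ids of data items this device carries

def forward_on_contact(u1: Device, u2: Device, data_id: str, dest: str,
                       t_left: float, p_of) -> None:
    """u1 carries data_id and has just met u2; p_of(u, dest, t) returns the
    TCEM probability (second step) that device u contacts dest within t."""
    if data_id in u2.carried:
        return                                 # u2 already holds a copy: skip it
    # Part A, step 1: 2-hop possibility check via transient clusters (TCDM)
    if dest not in u2.tc and all(dest not in ac for ac in u2.acs):
        return                                 # no 2-hop path to dest: do not replicate
    # Part A, step 2 and Part B: replicate only to a strictly better relay
    if p_of(u2, dest, t_left) > p_of(u1, dest, t_left):
        u2.carried.add(data_id)
```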
Data Forwarding Capability Evaluation Method of the Device (TCEM)
We set that a device has the possibility to transfer data to a destination only when its transient cluster or its adjacent transient clusters contain the destination. This evaluation standard is set for two reasons. First, efficient content dissemination is mostly due to high-contact-rate nodes [26]. Second, extending a device's neighbourhood view beyond 2 hops does not improve forwarding efficiency and even dramatically increases data redundancy [27]. Therefore, to evaluate a device u's possibility of contacting a destination, the device stores its adjacent clusters in addition to its transient cluster. The contact rate used to construct transient clusters can be represented by the ICT, the time interval between two consecutive encounters. We define the contact rate between two devices as high if and only if the ICT between them is shorter than a predefined threshold x; a device deletes from its transient cluster any device whose ICT is bigger than x. The value of x is set based on the trace. Since pairwise ICTs are time-varying [21,28], to accurately maintain the transient cluster and adjacent clusters over time, every x hours a device updates its transient cluster, which contains only the devices it encountered during that period, and it updates its adjacent clusters whenever it encounters a device. The process of building the information stored by a device is described in Algorithm 1 and sketched below; it needs no centralized control and is performed independently by every device.
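The following Python sketch distills this bookkeeping (our reading of Algorithm 1; the class and attribute names are ours, and treating a first-ever encounter as merely recording the time is an assumption):

```python
X = 1.0  # contact-rate threshold x in hours; set per trace (1 h for the traces used here)

class TransientClusterState:
    def __init__(self):
        self.last_seen = {}  # peer id -> time of the last encounter
        self.tc = set()      # this device's transient cluster C_i
        self.acs = {}        # peer id -> that peer's transient cluster (adjacent clusters A_i)

    def on_encounter(self, peer, peer_tc, t_now):
        prev = self.last_seen.get(peer)
        if prev is not None and t_now - prev < X:  # ICT below the threshold x
            self.tc.add(peer)                      # peer has a high contact rate
            self.acs[peer] = set(peer_tc)          # add or refresh its adjacent cluster
        self.last_seen[peer] = t_now               # always record the encounter time

    def periodic_update(self, t_now):
        """Run every x hours: keep only peers encountered within the last x hours."""
        for peer in list(self.tc):
            if t_now - self.last_seen[peer] > X:
                self.tc.discard(peer)              # delete the peer from C_i
                self.acs.pop(peer, None)           # and drop its cluster from A_i
```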
Calculating the Probability.
After the first step of TCEM, a device's possibility of transferring data to the destination has been determined. However, there is no guarantee that the data can be delivered to the destination within the time constraint, even though the path is at most 2 hops. To minimize data redundancy, in the second step we calculate the device's probability of transferring the data to the destination within the time constraint, which represents the device's forwarding capability. A device with data replicates it only to encountered devices whose probabilities are higher than its own. The calculation process is divided into two situations: (1) the destination is in the device's transient cluster, and (2) the destination is in the device's adjacent transient clusters. Table 1 gives the notations used in this section.
(1) The Destination in u's Transient Cluster. We have defined that a device has the possibility to transfer data to another device only if the path between them is no more than 2 hops. Hence, u can transmit data to the destination d via two routes, as illustrated in Figure 3: either u directly transmits the data to d, or u indirectly transmits the data to d through a device r that is a member of u's transient cluster. In this paper, data forwarding depends on opportunistic connections between devices. We assume the sensed data are small, can be completely transferred during one contact, and have negligible transmission time; hence, a device's probability of transferring data to the destination within the time constraint equals its probability of contacting the destination within the time constraint. The ICT distributions between devices in real-world mobility traces follow known probability distribution models. We use the individual pairwise ICT distribution to calculate the probability that a device directly or indirectly contacts the destination within the time constraint.
The probability P(t) that u contacts d within time constraint t contains two parts: the probability that u directly contacts d and the probability that u indirectly contacts d. P(t) is calculated by equation (1), where P_D(t) is the direct-contact probability and equation (2) gives P_R(t), the probability that u contacts d through a relay device r, with D the set of devices in u's transient cluster whose own transient clusters contain d; one plausible combination is sketched below. Since our proposed calculation method is based on the transient cluster, the relationship between the duration of the transient cluster and the valid time of the data influences the value of the time constraint in the probability calculation process. As illustrated in Figure 4, t_s is the start time of a transient cluster and t_e is its end time, so (t_e - t_s) is the cluster's duration; t_c is the time at which the device carrying the data encounters another device, and t_v is the deadline of the data's valid time. The first relationship (case 1) is that the duration of the cluster is longer than the valid time of the data. The second relationship (case 2) is that the duration of the cluster is shorter than the valid time of the data, in which case the time constraint of the data becomes (t_e - t_c).
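Because the bodies of equations (1) and (2) did not survive extraction, the sketch below shows only one plausible way to combine the direct and relay routes; the independence assumption, the exponential ICT model, and the two-hop approximation are ours, not statements from the paper.

```python
import math

def p_meet(rate: float, t: float) -> float:
    """P(two devices meet within t) when their pairwise ICT is exponential
    with the given rate -- one commonly fitted ICT model."""
    return 1.0 - math.exp(-rate * t)

def p_reach_dest(rate_ud: float, relay_rates: list, t: float) -> float:
    """Probability that u reaches d within t, directly or via some relay r in
    u's transient cluster. Treats the direct route and each two-hop route as
    independent, and approximates a two-hop route by both legs occurring
    within t (optimistic, since the legs must happen in order)."""
    p_fail = 1.0 - p_meet(rate_ud, t)              # the direct route fails
    for rate_ur, rate_rd in relay_rates:           # (u-r rate, r-d rate) per relay
        p_fail *= 1.0 - p_meet(rate_ur, t) * p_meet(rate_rd, t)
    return 1.0 - p_fail

# Example: direct rate 0.05/h, one relay with rates (0.2/h, 0.1/h), 10 h remaining
print(round(p_reach_dest(0.05, [(0.2, 0.1)], 10.0), 3))
```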
In the first relationship (t_e > t_v), the duration of the transient cluster is longer than the valid time of the data. Equation (3) gives the probability of this situation, where G is the distribution of the duration of the transient cluster. In this situation, the real time constraint of the data is (t_v - t_c). Equation (4) gives the probability P1(u, t_v) that u contacts d within the data's valid time under this situation, where P(t_v - t_c) is the probability that u contacts d within time constraint (t_v - t_c).
To study G, we run the TCDM on two real datasets, Infocom6 [29] and Cambridge [30]; detailed information about these two datasets is given in Table 2. We observe the duration of transient clusters on a daily basis and find that its distribution can be approximated by an exponential distribution, as in the snippet below. We take one transient cluster in the Cambridge trace as an example, shown in Figure 5. The approximation is not perfect, since the samples used to fit the distribution are limited; it should improve as more data are used.
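Such a check is easy to reproduce, assuming SciPy is available; the duration sample below is invented for illustration, whereas in practice it would come from running TCDM on a trace.

```python
import numpy as np
from scipy import stats

# Hypothetical transient-cluster lifetimes in hours (placeholder data)
durations = np.array([0.4, 1.1, 0.7, 2.3, 0.9, 1.6, 0.5, 1.2])
loc, scale = stats.expon.fit(durations, floc=0)           # fix the location at 0
print(f"fitted exponential rate: {1.0 / scale:.2f} per hour")
ks = stats.kstest(durations, "expon", args=(loc, scale))  # goodness-of-fit check
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3f}")
```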
The second relationship (t_e <= t_v) is that the duration of the transient cluster is shorter than the valid time of the data. Equation (5) gives the probability of this situation. In this situation, the real time constraint of the data is (t_e - t_c). Equation (6) gives the probability P2(u, t_v) that u contacts d within the data's valid time under this situation, where P(t_e - t_c) is the probability that u contacts d within time constraint (t_e - t_c).
In summary, equation (7) gives the total probability that u contacts the destination before the expiration of the data's valid time, combining the two cases; the sketch below illustrates this structure.
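Equations (3)-(7) themselves were lost in extraction, so the sketch below only illustrates the structure the text describes: weight each case by its probability under a fitted exponential duration distribution and apply the corresponding effective time constraint. The numerical integration over possible cluster end times, and the omission of conditioning on the cluster still being alive at t_c, are simplifications of ours.

```python
import math

def p_before_deadline(p_contact, mean_dur, t_s, t_c, t_v, steps=1000):
    """Structure of Eq. (7): probability that u contacts d before the data
    expires at t_v, given cluster start t_s, encounter time t_c, and an
    exponentially distributed cluster duration with mean mean_dur.
    p_contact(t) is a within-t contact probability, e.g. p_reach_dest above."""
    lam = 1.0 / mean_dur
    # Case 1 (t_e > t_v): the effective constraint is (t_v - t_c)
    p_case1 = math.exp(-lam * (t_v - t_s))        # P(duration > t_v - t_s)
    total = p_case1 * p_contact(t_v - t_c)
    # Case 2 (t_e <= t_v): the constraint becomes (t_e - t_c); integrate over t_e
    width = (t_v - t_c) / steps
    for k in range(steps):
        t_e = t_c + (k + 0.5) * width             # midpoint rule over t_e in (t_c, t_v)
        dens = lam * math.exp(-lam * (t_e - t_s)) # density of the duration (t_e - t_s)
        total += dens * width * p_contact(t_e - t_c)
    return total
```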
(2) The Destination in u's Adjacent Transient Clusters. We now discuss the situation in which the destination is not in u's transient cluster but is in one of u's adjacent transient clusters. Figure 6 shows an example in which d is in u's adjacent cluster; u can transmit the data to d through a relay r. The probability Q(t) that u encounters the destination within the time constraint only by indirect contact is calculated in equation (8), where T is the set of devices in u's transient cluster whose own transient clusters contain d. When (t_e > t_v), equation (9) gives the probability Q1(u, t_v) that u encounters d under the situation where the duration of the transient cluster is longer than the data's valid time. When (t_e <= t_v), equation (10) gives the probability Q2(u, t_v) that u encounters d under the situation where the duration of the transient cluster is shorter than the data's valid time, where Q(t_e - t_c) is calculated based on equation (8). In summary, equation (11) gives the probability that u contacts the destination before the expiration of the data's valid time.
Performance Evaluation
Our proposed TCEM is a method of measuring a device's data forwarding capability. Since it is based on transient clusters, the performance of forming transient clusters influences the performance of the method; we therefore evaluate the Transient Cluster Detection Method (TCDM) of TCEM. Meanwhile, we evaluate the performance of a TCEM-based data forwarding strategy. In the experiments, the data delivery ratio is the proportion of data items successfully delivered to the destination through opportunistic forwarding before the data expires. It improves as the number of data copies, which represents the network overhead, increases. Hence, to reflect efficiency, the performance of a data forwarding strategy is evaluated by the ratio of data delivery ratio to network overhead, as in the helper below. Our experiments are based on two real datasets, Infocom6 [29] and Cambridge [30]. These datasets were collected by devices periodically detecting their peers via Bluetooth interfaces; a contact is recorded when two devices move into each other's communication range. The details of the two datasets are shown in Table 2.
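For reference, the evaluation metric reduces to the following one-liner; the counter names are hypothetical simulation tallies, not the paper's.

```python
def forwarding_efficiency(n_delivered: int, n_generated: int, n_copies: int) -> float:
    """Ratio of the data delivery ratio to the network overhead (total copies)."""
    return (n_delivered / n_generated) / n_copies

print(forwarding_efficiency(n_delivered=60, n_generated=100, n_copies=400))
```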
Transient Cluster Evaluation.
We compare the performance of TCDM with two transient cluster detection methods: the Contact-burst-based Clustering Method (CCM) [8] and the Distributed Rise and Fall spatio-Temporal (DRAFT) clustering method [7]. Besides a method's distributed nature and feasibility, the size of the clusters it detects also influences the efficiency of the data forwarding strategy, since the strategy decides whether to forward data to an encountered participant by searching all members of that participant's cluster. Hence, in this section we compare the methods on three metrics: mean cluster size, maximum cluster size, and the cluster size used in evaluating forwarding capability.
CCM. It detects transient clusters by clustering together pairs of nodes with similar contact bursts. A contact burst between two nodes refers to a period during which contacts between them appear frequently. The transient clusters detected by CCM are time-varying, and the cluster-forming process requires central control.
DRAFT. It is distributed: every node forms a cluster based on cumulative or decayed contact duration. It uses three parameters to govern the rates at which clusters grow and decay. If the cumulative contact duration with an encountered device exceeds the growth threshold, that device is added to the cluster; otherwise, it is deleted from the cluster. A time frame of length t seconds governs the interval at which the cumulative durations for each device are decayed: at the end of each frame, the cumulative duration is multiplied by the decay parameter. A sketch of this bookkeeping follows.
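In this small sketch of the grow/decay rule, the paper's three parameter symbols were lost in extraction, so they are named generically, and the single shared add/drop threshold mirrors the description above.

```python
def draft_step(cum: float, contact_secs: float, threshold: float,
               decay: float, frame_ended: bool):
    """One DRAFT bookkeeping step for a single peer: accumulate contact time,
    decay it at each t-second frame boundary, and test cluster membership
    against the threshold (e.g., 144 s >= 120 s in the Infocom6 example)."""
    cum += contact_secs
    if frame_ended:
        cum *= decay                  # multiplicative decay at the frame end
    return cum, cum >= threshold      # in the cluster iff the total clears the threshold
```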
Our proposed TCDM is distributed. Every device forms a time-varying transient cluster containing the set of devices that have a high contact rate with it. In this paper, a device adds an encountered device to its transient cluster only if the intercontact time between the two devices is lower than the predetermined parameter x, whose value is set empirically from the traces.
In these two datasets, we found that each pairwise contact history is a series of contact bursts during which the two devices' contact rate is high and their intercontact time is below 1 hour. For example, we choose two devices from the Cambridge dataset; their contacts are illustrated in Figure 7, where an arrow represents a contact between the two devices. Thus we set x = 1 hour: if two devices do not contact within one hour, their contact rate is not high, and each is deleted from the other's transient cluster.
Mean Cluster Size. Figure 8 shows that the mean cluster size of TCDM is the smallest of the three methods, while CCM's is the largest, almost three times that of TCDM. The reasons are as follows.
In the DRAFT method, a cluster is a device-centric one-hop cluster, but a device is added to or deleted from a cluster based on cumulative or decayed encounter duration. In these two datasets, a DRAFT cluster will contain high-contact-rate devices. For example, in the Infocom6 dataset, two devices with a high contact rate are in a contact-burst period; the mean contact duration between devices is 24 s, and two participants contact 6 times during a contact-burst period. Therefore, in one contact-burst period, two devices' cumulative encounter duration is 144 s, which exceeds the predetermined threshold of 120 s, so each device is added to the other's cluster. However, since a device is deleted from a cluster based on the decayed duration, a cluster may still accumulate devices that no longer have a high contact rate with the central device, whereas the members of a TCDM cluster are exactly the devices with a currently high contact rate. Therefore, the clusters detected by DRAFT are bigger than those of TCDM.
The mean size of DRAFT's clusters is smaller than CCM's, because a DRAFT cluster aims to capture the devices that have a high contact frequency with a central device during a period: it is a one-hop cluster. In CCM, however, any two devices with a high-contact-rate period are added to a cluster, with no limit on hops, so a CCM cluster is necessarily bigger than a DRAFT cluster.
In summary, compared with DRAFT, TCDM has similar cluster-forming complexity but smaller clusters. Compared with CCM, TCDM is also more feasible and efficient, for two reasons: first, CCM needs all devices' contact information in advance; second, while forming clusters, CCM must traverse all clusters to merge similar ones, which wastes large amounts of computing resources and time. Our proposed TCDM is distributed, and every device forms its transient cluster based only on pairwise real-time contact frequency; therefore, compared with CCM, TCDM is more feasible and efficient.
Max Cluster Size. Figure 9 shows the maximum cluster detected by each of the three methods. The maximum clusters detected by TCDM and DRAFT are smaller than the one detected by CCM, whose maximum cluster contains more than half of all devices and would greatly increase computation complexity.
The Cluster Size Used in Evaluating Forwarding Capability. DRAFT-based and our proposed TCEM-based data forwarding strategies determine whether to forward data to an encountered device based on its 2-hop cluster, so a device's 2-hop cluster size strongly affects the performance of these two methods, whereas the CCM-based strategy is based on a (hop-unlimited) cluster. Hence, we compare the mean 2-hop cluster size of TCDM and DRAFT with the mean cluster size of CCM. Figure 10 shows that TCDM's clusters are the smallest of the three: they include roughly 30% of all devices, while DRAFT's 2-hop clusters contain almost half of all devices and CCM's clusters contain more than half. The reasons are as follows. First, since TCDM's 1-hop clusters are smaller than DRAFT's, its 2-hop clusters are smaller as well. Second, a DRAFT cluster contains 2-hop participants, but a CCM cluster does not limit hops, so CCM's clusters are bigger than DRAFT's.
In conclusion, the cluster size of our proposed TCDM is the smallest, which makes judging a device's forwarding capability by searching all cluster members the most efficient. In addition, compared with CCM, TCDM is more feasible and efficient because it is distributed and its cluster-formation method is simple. Compared with DRAFT's clusters, which cannot accurately express changes in contact rate between participants, TCDM reflects changes in devices' contacts in real time; since evaluating a participant's data forwarding capability is based on the contact rate between devices, TCDM is more accurate for this evaluation.
Data Forwarding Performance Evaluation.
In the experiments, for fairness, sources and destinations are picked randomly, and each data item's generation time is chosen randomly within the daytime, because nodes' activity at night is low and would distort the comparison. Performance is measured by the ratio of data delivery ratio to network overhead. Our proposed TCEM-based data forwarding strategy is compared with Epidemic [9], Bubble Rap [4], the DRAFT-based [7], and the CCM-based [8] data forwarding strategies.
Epidemic. The data item is forwarded to every encountered device that does not already carry it. It serves as an upper bound.
Bubble Rap. This strategy utilizes both centrality and community; CPM (k-clique) is used to detect communities. A data item is always forwarded to a device with higher centrality until it reaches a device belonging to the same community as the destination. Once the data item reaches the destination's community, it is forwarded to devices with higher centrality within that community until the destination is reached.
DRAFT-Based Data Forwarding Strategy. The data is replicated to encountered devices whose 2-hop clusters contain the destination, until a device carrying the data encounters the destination.
CCM-Based Data Forwarding Strategy. It uses the transient cluster (TC) as the forwarding unit, and data is always forwarded to the TC with better relaying capability toward the destination within the data's time constraint. Once the data reaches a new TC with larger relaying capability, it is distributed to all nodes met in that TC. The relaying capability of the current TC is computed by summing the probability that each participant of the TC appears in the destination's transient cluster within the data's valid time. A device carrying the data deletes it when it has neither encountered a device in a TC with larger relaying capability nor moved to such a TC within an appointed period. The process ends when the data reaches the destination or the time exceeds the data's valid time.
The results are shown in Figure 11. Considering both delivery ratio and overhead, Bubble Rap performs the worst compared with the TCEM-based, DRAFT-based, and CCM-based data forwarding strategies. The reason is that these three strategies treat a device's contact rate as time-varying, while Bubble Rap uses aggregate contact information, which cannot reflect a device's contact rate within the data's valid time. Compared with the other three strategies, Bubble Rap therefore cannot accurately evaluate a device's forwarding capability within the time constraint, and its data forwarding performance is the worst.
Compared with the DRAFT-based and CCM-based data forwarding strategies, our proposed TCEM-based strategy has the lowest delivery ratio, but its overhead is also the lowest, as illustrated in Figures 12 and 13; combining the two measures, its ratio of data delivery ratio to network overhead is the highest. First, data forwarding of DRAFT and TCEM is based on the
Figure 3: The destination is in a participant's transient cluster.
Figure 5: The PDF of a transient cluster's duration time.
Figure 6: The destination is in a device's adjacent cluster.
Figure 7: Contact rate of two nodes.
In this paper, every device u_i (i = 1, 2, ..., n) has a time-varying transient cluster C_i, which contains the set of devices that currently have a high contact rate with it, and device u_i's adjacent clusters are the transient clusters sharing a device with C_i. As illustrated in Figure 2, a dashed circle represents a transient cluster: C_1 is device u_1's transient cluster at time t, containing u_2, u_3, u_4, which have a high contact rate with u_1 at this moment, and C_2, C_3, C_4 are u_1's adjacent clusters. Our proposed TCEM consists of two steps, possibility evaluation and probability calculation, both based on transient clusters. In this section, we first describe our proposed Transient Cluster Detection Method (TCDM) and how to evaluate a device's possibility of contacting the destination (Section 4.1), and then elaborate on how to calculate the device's probability of successful data delivery within the time constraint t (Section 4.2). We set that a device has the possibility of transferring data to a destination only when its transient cluster or its adjacent transient clusters contain the destination; for example, in Figure 2, if the destination of the data is u_3 or u_5, which are in u_1's transient cluster and in u_1's adjacent cluster C_2, respectively, device u_1 has the possibility of transferring the data.
Algorithm 1: Building the information stored by device u_i.
Input: start time t_0. Output: u_i's transient cluster C_i and adjacent clusters A_i. Initialize: C_i = {0}, A_i = {0}, t_j = 0.
For every encountered participant u_j do
  ICT = t_current - t_j // the ICT at the current time
  If (ICT < x) // u_j has a high contact rate with u_i
    C_i = C_i + u_j // add the encountered u_j to C_i
    t_j = t_current // record the encounter time
    If (A_i contains C_j) // whether u_j was encountered before
      Update C_j in A_i
    Else A_i = A_i + C_j
  Else t_j = t_current // record the time that u_i encounters u_j
End
For ((t_current - t_0) % x == 0) do // update every x hours
  For every u_j in C_i do
    If ((t_current - t_j) > x) // the ICT is larger than x hours
      C_i = C_i - u_j // delete u_j from C_i
      A_i = A_i - C_j // delete u_j's transient cluster from A_i
  End
End
Table 1: Symbols and their definitions.
P(t): the probability that u contacts d within time constraint t.
P_R(t): the probability that u contacts d through a relay device within t; in equation (2), D is the set of devices in u's transient cluster whose own transient clusters contain d.
G(t1, t2): the probability that a transient cluster's duration lies in (t1, t2).
P(u <= t_v): u's probability of contacting d before t_v.
P1(u, t_v): the probability of u contacting d when (t_e > t_v).
P2(u, t_v): the probability of u contacting d when (t_e <= t_v).
Q(t): the probability that u only indirectly contacts d within t.
Q1(u, t_v): the probability of u indirectly contacting d when (t_e > t_v).
Q2(u, t_v): the probability of u indirectly contacting d when (t_e <= t_v).
Q(u <= t_v): u's probability of only indirectly contacting d before t_v.
P_D(t): the probability that u directly contacts d within t. | 2018-10-21T21:47:41.133Z | 2018-09-25T00:00:00.000 | {
"year": 2018,
"sha1": "dbfcb6a03944bdaa2cd41c2e9afcb142b50b4d35",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/wcmc/2018/4801465.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61ce1cb12990da26d31c6386909024f212a33f77",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258007098 | pes2o/s2orc | v3-fos-license | Application of lean principles in a medicare insurance counseling service learning course
Background Lean principles are increasingly applied in healthcare to improve quality and cost. A service-learning course providing Medicare insurance counseling requiring rapid transformation due to the COVID-19 pandemic provided an opportunity for pharmacy students to apply lean skills. Educational activity Students, already introduced to lean skills earlier in their curriculum, enrolled in the insurance education certificate during their third year in Fall 2020. Students were oriented to the mandated service delivery restrictions. After a review of lean principles, students analyzed existing process for in-person counseling using a value-stream map. Students worked in teams to complete a cause analysis and develop solutions. Collaboratively, students clarified the value of the Medicare insurance counseling services to the community, adapted these components to accommodate environmental risk, and developed standard work for client acquisition, communication procedures, and service delivery to optimize client satisfaction and safety. Outcomes compared before and after application of lean skills included number of pharmacy students completing insurance counselor training, number of clients counseled, and the mean out-of-pocket savings identified for Medicare beneficiaries. Findings Students applied lean skills to transform an insurance counseling service by developing and implementing a future state value-stream map and new standard work. Overall Medicare insurance counseling service metrics decreased compared to previous years, but the service was sustained despite pandemic restrictions. Application of lean skills and service redesign provided a method for students to provide services via telepharmacy. Application of lean principles increased student engagement with the course and provided an opportunity to practice quality improvement skills. Lean provides a flexible set of skills that can be introduced and applied in different pharmacy instructional settings.
Background
The publication of To Err is Human ushered in a transformation in the patient safety movement that focused on effectiveness and efficiency. These concepts have remained at the forefront of healthcare services given the ongoing pressure in healthcare to constrain costs and improve quality. 1 Process improvement approaches developed for the manufacturing sector, the Toyota Production System and lean, have been utilized in healthcare to achieve significant quality improvements and cost savings. 2 Lean is a people-focused philosophy that uses a systems approach to increase value to the customer through the application of continuous quality improvement to eliminate waste and improve operational effectiveness. Lean utilizes skills including the value stream map, continuous quality improvement models like plan-do-study-act (PDSA), and cause analysis and visual problem-solving frameworks like A3. Lean skills provide structure for problem solving. For example, the A3 framework structures communication into a visual outline of the problem that includes background, current conditions, cause analysis, target condition, implementation plan, and follow up. Fig. 1 provides a list of lean skills. Consistent application of lean promotes a culture of problem solving. 3 Lean principles have been applied in pharmacy practice to improve sterile product preparation efficiencies, improve medication management, and reduce insulin-related adverse events. [4-6] Quality and safety are core components of a pharmacist's role on the healthcare team, and the Accreditation Council for Pharmacy Education (ACPE) 2016 standards name quality improvement skills as an essential component of pharmacy education. 7 Recent studies highlight the limited extent to which quality improvement and safety instruction is delivered in US Schools of Pharmacy. 8,9 While limited in pharmacy education, lean has been used in curricula and courses in both engineering and health care administration programs. 10,11 Quality improvement is a disruptive process that requires organizational readiness for change, which may limit opportunities to apply lean skills in controlled didactic learning environments. 12 Service-learning and experiential courses where students participate in service delivery may provide an opportunity for students to apply lean skills, but to our knowledge this has not been reported in the pharmacy literature.
The purpose of this report is to describe the application of lean skills to a Medicare insurance counseling, or Seniors Health Insurance Information Program (SHIP), certificate elective course. The opportunity to apply lean skills in this service-learning course was prompted by barriers imposed on the program by the COVID-19 pandemic. Engaging students in the application of lean skills is a framework that can be applied in other circumstances where process changes are required.
Educational activity
The application of lean skills occurred in the context of a service-learning course offered at High Point University School of Pharmacy through a partnership with the State Department of Insurance. The educational activity described in this paper, application of lean skills, involves three components: (1) a pharmacy school course, SHIP Certificate, (2) the national and state training programs for Medicare insurance counselors, SHIP TA program, and (3) student participation in a pharmacist-run program serving older adults' medication access needs, named StewardSHIP. This project was approved by the University's Institutional Review Board.
SHIP certificate course
At High Point University School of Pharmacy, third-year PharmD candidates complete required elective courses, including one certificate course. Certificate courses are a bridge between the didactic and experiential curriculum: students learn the processes to develop and demonstrate competence in patient management through self-study, patient cases, clinical scenarios, and participation in a clinical service. The SHIP certificate is one such course and was first delivered in the Fall of 2018. The elective course is taught by a SHIP-certified pharmacist who provides Medicare insurance counseling in the local community through a community-based pharmacy program called StewardSHIP. The Accreditation Council for Pharmacy Education (ACPE) 2016 standards include the ability to utilize insurance knowledge to assist patients and caregivers in accessing medications to meet healthcare needs as an entry-level competency. 7 Navigating insurance is an essential pharmacist skill, given that over 90% of prescriptions filled at independent pharmacies are billed through a third-party payor. 13 Medicare Part D pays for nearly one-third of prescription drug expenditures in the US, making Medicare a key to medication access for older adults and those with disabilities. 14 Given that navigating insurance and Medicare are essential skills for pharmacists entering the workforce, the overall course objectives are four-fold: 1) describe common biologic changes that occur during aging; 2) perform selected aspects of geriatric patient assessment, including medications and social determinants of health; 3) effectively communicate with older adults; and 4) evaluate insurance options and health care needs for individual patients. To achieve these objectives, the SHIP Certificate course is delivered in three phases: 1) the self-study online Medicare insurance counselor SHIP TA course; 2) direct instruction of Medicare insurance and geriatric prescribing principles; and 3) delivery of Medicare insurance counseling services to the community during the Medicare open enrollment period (OEP). The course is offered as pass-fail, and the three phases of the course are assessed separately. The SHIP TA course is assessed by earning a passing score (>80%) on the online course final exam. The direct instruction of Medicare and geriatric principles is assessed through formative feedback on counseling simulations and a passing score (>85%) on a geriatric prescribing knowledge exam. The Medicare insurance counseling phase is assessed through the completion of a minimum of 10 hours of counseling in the community. Formative feedback on counseling simulations and live counseling is provided by faculty pharmacists who are certified Medicare insurance counselors, based on a counseling worksheet outlining the steps of a Medicare counseling appointment. Overall, the SHIP Certificate course is assessed by enrollment, the number of pharmacy students completing insurance counselor training, and the number passing the course. One distinctive component of this certificate course is that pharmacy students complete national Medicare insurance counselor training and, at the State level, training equivalent to that of other Medicare insurance counselors.
Medicare insurance counselor training
In all fifty states, Medicare information is distributed to beneficiaries through the federally funded State Health Insurance Assistance Program. Medicare information is provided directly to beneficiaries by trained volunteer Medicare insurance counselors referred to as SHIP counselors. Volunteers serving their communities as Medicare insurance counselors receive training through the national State Health Insurance Program Technical Assistance (SHIP TA) program. 15 The SHIP TA program offers a self-study online Medicare course. The SHIP TA online course is the first training step that new SHIP counselors in North Carolina complete. Prior to the semester's start, students complete a Medicare insurance counselor volunteer application with the State Department of Insurance and are enrolled by a State coordinator in the SHIP TA online course. Students complete 24 instructional hours of self-study about Medicare and assistance programs. After passing the SHIP TA course, students participate in a 4-hour Medicare overview that emphasizes communication of concepts across different levels of health literacy. Students then complete a counseling simulation and receive formative feedback. Table 1 describes the overall Medicare insurance counselor training process and assessment. Prior to entering the final phase of the course, students complete the counseling simulation. Students then move on to participate in the provision of Medicare insurance counseling during Medicare OEP through the StewardSHIP program supervised by course coordinator, certified counselor, or SHIP staff until they demonstrate independent counseling skills. Overall Medicare insurance counselor training is assessed as a part of the SHIP Certificate course and aligns with course assessment of number of pharmacy students completing insurance counselor training and passing the course.
Delivery of Medicare insurance counseling services
Students participate in the delivery of Medicare insurance counseling primarily within a pharmacist-run community-based program called StewardSHIP. StewardSHIP is run by the faculty pharmacist who coordinates the SHIP certificate and is focused on helping older adults in High Point, North Carolina access medications and use them for improved health. StewardSHIP began providing Medicare insurance counseling to older adults during OEP in 2018 and in 2021 the program expanded to provide this service and additional supports for medication access year-round. Prior to the implementation of the StewardSHIP program, only two SHIP certified counselors provided Medicare education in High Point, North Carolina. Course partnerships allow students to contribute to local Medicare insurance counseling efforts in Guilford County but also provide counseling in seven neighboring counties. Local Medicare insurance counselors, including those who work with StewardSHIP, serve a county population of 83,442 individuals over the age of 65, where only 0.06% of the population is not covered by some insurance and 77.7% are covered by insurance that included Medicare based on data from 2016. 16 In 2018 and 2019, StewardSHIP organized counseling clinics at the local senior center and other community locations. Medicare beneficiaries were scheduled for face-to-face 1-hour appointments. Clients were recruited through outreach and marketing, which was done via calls to prospective and previous clients and via flyer distribution in pharmacies, doctors' offices, and public service locations. Clients were also recruited through already established partnerships within the community including senior centers, assisted living facilities, and a local health system. Appointments were initiated with clients by completing a Medicare plan finder form, providing general demographic information, Medicare information, current insurance information, and current medications. Counselors then conducted a patient-specific plan comparison using Medicare.gov. 17 Plans were reviewed with the beneficiary, including cost savings if any, and any client-specific questions were answered. If a beneficiary decided to change their plan based on the review, enrollment assistance was provided. Clients were provided details of their appointment in writing and StewardSHIP program contact information for follow up. Typically, students observe a live counseling appointment, deliver one counseling under direct supervision, and are then able to deliver Medicare counseling with coaching support as described in Table 1.
Operationally, StewardSHIP was designed to incorporate two value-generating processes: 1) helping older adults access the medications they need, and 2) creating a space for pharmacy students to provide insurance counseling. The program is assessed using outcome, process, and balancing measures chosen for the purposes of evaluating operational effectiveness and quality improvement. Outcome measures included the percent of students passing the SHIP TA final exam, the number of clients counseled, and the mean out-of-pocket savings identified for Medicare beneficiaries through the counseling service. Process measures included the number of phone calls received by the program, the number of counseling experiences per student, the number of enrollments in cost-effective Medicare plans, and applications submitted for assistance programs. Balancing measures included student course evaluations, patient satisfaction, and instructor assessment of overall course workload and effectiveness. A family of measures is useful for determining whether process changes are in fact improvements, measuring the connection between process and outcome, and detecting unintended consequences. 18 Microsoft Excel was used to document and analyze outcome measures.
Application of lean skills
In April of 2020, five months before the start of the SHIP Certificate course, our state partner ceased all in-person operations due to the COVID-19 pandemic, and all area senior centers were closed. These changes meant that in-person counseling would not be feasible during Medicare OEP because of the increased risk of transmitting COVID-19 to older adults. The Fall 2020 SHIP certificate was fully enrolled and would require rapid change to accomplish the course learning objectives. As the Medicare insurance self-study component of the course was already online, efforts were focused on transforming the later components of insurance counselor training for pharmacy students. The need for rapid change presented an opportunity to engage students in applying lean principles to create solutions to a service delivery challenge. Lean is a business optimization process whereby organizations clarify customer value and use quality improvement frameworks to solve problems of operational effectiveness. The events of the pandemic presented two operational challenges: training pharmacy students as Medicare insurance counselors and providing Medicare insurance counseling without face-to-face visits. Lean is a journey; with practice and time, lean becomes the way we work. Because lean skills structure thinking for problem solving, lean is learned through their application. At our School of Pharmacy, students are introduced to lean skills in required first- and second-year courses, described in Fig. 1. Opportunities to practice lean skills are limited to simulations created in the skills laboratory, such as teamwork and root cause analysis exercises. The need for rapid change in the service-learning component of a certificate course presented an opportunity for students to gain additional experience applying lean skills. The objective of incorporating lean skills into this course was to provide students an opportunity to apply lean skills and involve them directly in the redesign of Medicare insurance counseling service delivery. Lean skills were assessed through verbal formative feedback to the project teams from the course coordinator. Outcomes measured included achievement of course and StewardSHIP program outcomes, as well as students' experiences with lean skills.
Between August and October 2020, students applied lean skills to the operational problem of delivering Medicare counseling services without face-to-face appointments. In preparation for the course change, the course coordinator created a value stream map of the pre-COVID SHIP course OEP service process. During Week 3 of the SHIP Certificate, students were reoriented to the fundamentals of lean thinking and introduced to the pre-COVID value stream map and the A3 visual problem-solving framework. Students were provided with the current operational conditions and a vision for a target state in which Medicare insurance counseling could be provided despite the barriers. Students were assigned to teams to conduct a cause analysis and develop an operational hypothesis addressing the service delivery barriers. Collaboratively, students clarified the value of the Medicare insurance counseling services to the community, adapted these foundational components to accommodate environmental risk, and developed standard work for client acquisition, communication procedures, and plans for service delivery. Students participated in generating ideas for the future state by developing an updated value stream map for services. Specific plans for implementation were organized using an A3 visual problem communication format in which students identified the problem, background, current condition, problem analysis, target condition, and necessary standard work. The key element of the problem was the limitation that pandemic closures placed on patient access to face-to-face services. Background included the lack of local experience with telephone counseling and potentially limited client access to technology. Students used the pre-pandemic value stream map to organize their assessment of the current condition and create a new value stream map. Key improvements are depicted in the starbursts in Fig. 2 and include telephone connections and telehealth appointments. In analyzing the problem, students recognized the need for flexibility in providing clients access to services, given the multiple uncertainties. In identifying a target condition, or a goal to reach, students proposed developing both telephone and remote video procedures for service delivery, both of which meet the definition of telepharmacy. 19 Plans also included onboarding clients and setting up appointments via telephone using Google Voice. Students then organized two teams that formulated telephone and remote video implementation plans for pandemic SHIP services. Students completed the A3 by organizing a standard procedure for service marketing, program phone services including phone call scripts, and a communication and follow-up procedure. All counseling was supervised by the course coordinator. Students first observed a telephone counseling session, then provided counseling to their friends and family members, and finally provided supervised counseling to a program client.
Assessment
The objective of incorporating lean skills into this course was to provide students an opportunity to apply lean principles and involve them directly in the redesign of delivering Medicare insurance counseling services. SHIP Certificate course, Medicare insurance counselor training, and a subset of StewardSHIP program outcomes were compared prior to the application of lean skills (2018 and 2019) and after (2020). Due to the developmental stage of the StewardSHIP program, comparison data was selected for assessment plan elements with complete data. The outcome measures compared were course enrollment, number of pharmacy students completing insurance counselor training and passing the course, number of clients counseled, and the mean out-of-pocket savings identified for Medicare beneficiaries. Student experiences with lean skills were described based on formative feedback interactions.
Findings
Student-led application of lean principles in a SHIP certificate course engaged students in solving operational problems and provided experience applying lean skills to practice-based problems. Students employed lean skills, creating a revised value stream map and new remote counseling procedures. In collaboration with the course coordinator, the students then implemented operational changes and worked within the StewardSHIP program to deliver Medicare counseling during OEP. Application of lean skills provided a means for continued Medicare education during OEP despite pandemic closures that prevented face-to-face appointments. Changes implemented were effective in that pharmacy students were able to experience a Medicare education interaction with an older adult despite constraints placed on the course by the pandemic, and in some cases, students gained new communication skills in engaging with patients via telephone.
Prior to the COVID-19 pandemic, in 2018 (29 students) and 2019 (20 students), an average of 25 students were certified as counselors through the SHIP Certificate course each year. In those two pre-pandemic years, students participated in an average of 6 outreach events annually, counseled an average of 97 clients annually, and identified an average of $69,916 in estimated cost savings annually through in-person counseling ($2796 per student certified). In the Fall of 2020, seventeen (17) pharmacy students were enrolled in the course after the Fall 2020 drop/add date and completed training as SHIP counselors, passing the SHIP TA final exam. As in previous years, all students successfully completed SHIP TA certification and the required minimum of counseling hours. However, compared to the previous two years of course delivery, there were differences. Despite the application of lean, during the 2020 OEP our program was able to serve only 29 clients and identify $3631 of potential savings ($214 per student certified). The local senior center hosted a drive-thru health fair in which we participated, but it did not result in any Medicare education opportunities. Additional attempts to identify clients included outreach telephone calls to recently hospitalized Medicare beneficiaries in collaboration with a local health system (n = 90 patients called), telephone calls to previous local clients (n = 40 patients called), and partnership with a neighboring county area agency on aging (n = 3 clients referred). From these phone calls, 8 of the 29 total clients were identified. The remaining clients served were referrals from our local area agency on aging (n = 4) and the friends and family members of students enrolled in the course. The lack of in-person outreach events and face-to-face marketing, together with the increase in phone calls to older adults from multiple sources during the pandemic, significantly reduced our participant volume. Despite the differences in patient volume, students were still able to experience Medicare counseling and receive formative feedback on their counseling skills.
Students were able to apply lean skills and develop implementation plans for service delivery. Application of lean principles and tools enhanced the course experience for students by providing them the opportunity to meet a community need in the face of significant barriers, which they are likely to encounter in their future pharmacy practice. At the beginning of this course, the level of uncertainty about how both the course and the community service would transition was challenging; however, lean principles gave both course instructors and students an avenue to channel this energy productively into creating new policies and procedures. Incorporation of process change into the course prior to OEP increased student awareness of how their Medicare knowledge would be applied and increased their enthusiasm for OEP counseling when that part of the semester arrived. Students signed up to complete counseling hours early in the semester rather than waiting until later, as in previous semesters. Student ownership of counseling clinic processes, including communication procedures, marketing, and documentation, also increased. Students were pleased with the one-on-one or two-on-one interaction with faculty and the time that a limited counseling schedule provided for exploring detailed considerations about Medicare insurance plans. The lack of community outreach events required students to complete all ten counseling hours with our local program, supported by course faculty, which increased the total faculty hours spent during Medicare OEP from 6 hours to 15 hours per week. Overall, the application of lean to the course changes maintained the dual value of the course: training students in insurance counseling and supporting local Medicare beneficiaries.
The selection and implementation of telepharmacy as a solution to pandemic constraints within the lean problem-solving framework resulted in additional unanticipated learning opportunities for students and the course coordinator. Prior to implementation of the telepharmacy processes, students and faculty had no formal training in telehealth outside of their previous experience working with patients over the telephone in other settings, e.g., community pharmacy and discharge counseling. Telepharmacy appointments were largely conducted by telephone at patients' request. Telephone counseling appointments were, for some students, their first experience with extensive telephone interaction, requiring impromptu telephone skills practice to facilitate counseling. Two appointments were scheduled via video call, a service selected by the client; however, both calls encountered technical difficulties requiring a switch to telephone. The overall effectiveness of telepharmacy was evaluated through post-counseling debriefs between faculty and student(s). During the debrief, students evaluated their overall experience, asked questions, and determined opportunities for improvement and lessons learned. Overall, students struggled to connect with the client's essential need or question over the phone. Identifying clients' primary learning need is difficult for students providing Medicare counseling in person, and this was more challenging when counseling via telephone. Further, explanations of plan benefits and cost comparisons were time consuming without the benefit of the client and the counselor sharing a view of the plan comparison screen in Medicare.gov. Our State Department of Insurance provides Medicare insurance counseling over the phone from a central call center staffed by experienced counselors; our experience in this course suggests that new counselors may benefit from video or face-to-face interactions that allow visual supports to improve counseling. No telephone appointment took less than one hour, and many exceeded this standard Medicare counseling appointment time. Despite the remote client engagement, the counseling experiences inspired in the students an appreciation for the need to provide clients ongoing education about Medicare, and provided a space for students to practice applying Medicare knowledge to real-life scenarios, as in previous face-to-face experiences. Telepharmacy has received increased attention as pandemic precautions have necessitated increased utilization. Several recently published studies have demonstrated the impact of telehealth training. A reading- and video-based asynchronous telepharmacy training module increased student knowledge, but after the module students had a lower intent to provide telepharmacy in the future. 20 A telehealth course in a school of medicine that incorporated faculty-supervised mock patient telehealth encounters improved knowledge and resulted in positive student views of participating in telehealth services. 21 Implementation experiences in this course support the hypothesis that pharmacy students need additional opportunities to experience telepharmacy as the digital transformation in health care continues. While the utilization of telepharmacy in this course had limitations, the StewardSHIP program will continue to offer this option to patients because of its ability to serve clients with limited transportation and those in counties with few SHIP counselor resources.
Implementation of telepharmacy revealed a need to provide pharmacy students training specific to telephone interactions, which will be incorporated in future semesters. Telepharmacy has expanded the potential reach of StewardSHIP program certified counselors and has facilitated an ongoing partnership with a local health system to provide Medicare education across a multi-county region.
While the COVID-19 pandemic was the impetus for incorporating lean principles into this course, doing so added value and will be maintained to pursue ongoing improvements in the course's provision of Medicare education in the community. However, there are significant limitations to this evaluation of lean skills application. This report provides a retrospective description of a rapidly planned and implemented course change with limited evaluation. The StewardSHIP program is under ongoing development, and data collection for the full panel of program measures is incomplete and was interrupted by COVID-19. Future evaluations of the application of lean skills in pharmacy learning environments would benefit from prospectively planned evaluation. For example, during the Fall 2020 OEP, despite the program engagement gains from the application of lean skills, the course coordinator observed that students felt less confident about counseling independently after a single directly supervised counseling session; however, in this cohort confidence was not systematically measured. Course coordinators plan to apply lean skills, in the form of a cause analysis, to the training cases that directly precede live counseling, to identify elements that foster counseling independence. Hollingsworth and colleagues performed a qualitative study suggesting that live counseling adds little to the Medicare knowledge gained in a classroom training setting but does impact student confidence. 22 Assessment of student confidence will be incorporated in future course offerings. Despite the challenges presented by the COVID-19 outbreak and the overall transition from in-person to telepharmacy appointments, lean skills provided a framework for transformation that may be useful in other course settings.
Summary
Students were able to apply lean principles and develop procedures to deliver insurance counseling services despite significant practical challenges due to the COVID-19 pandemic. Lean is an effective vehicle for clinical transformation, and this experience demonstrates how lean skills can be applied in a service-learning course. Lean provides a flexible set of skills that can be introduced and applied in different instructional settings to equip student pharmacists with essential quality improvement knowledge and skills.
Contribution to literature
Lean skills applied in a service-learning course provide student pharmacists the opportunity to practice quality improvement skills.
Declaration of Competing Interest
None.
"year": 2023,
"sha1": "281e3e45c62a5b0307bb45318b4ec09c0c3b4d25",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076253",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "281e3e45c62a5b0307bb45318b4ec09c0c3b4d25",
"s2fieldsofstudy": [
"Business",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Games and a Typology of Video Game Research Approaches
Although there is a vast and useful body of quantitative social science research dealing with the social role and impact of video games, it is difficult to compare studies dealing with various dimensions of video games because they are informed by different perspectives and assumptions, employ different methodologies, and address different problems. Studies focusing on different social dimensions of video games can produce varied findings about games' social function that are often difficult to reconcile, or even contradictory. Research is also often categorized by topic area, rendering a comprehensive view of video games' social role across topic areas difficult. This interpretive review presents a novel typology of four identified approaches that categorize much of the quantitative social science video game research conducted to date: "video games as stimulus," "video games as avocation," "video games as skill," and "video games as social environment." This typology is useful because it provides an organizational structure within which the large and growing number of studies on video games can be categorized, guiding comparisons between studies on different research topics and aiding a more comprehensive understanding of video games' social role. Categorizing the different approaches to video game research provides a useful heuristic for those critiquing and expanding that research, as well as an understandable entry point for scholars new to video game research. Further, and perhaps more importantly, the typology indicates when topics should be explored using different approaches than usual to shed new light on the topic areas. Lastly, the typology exposes the conceptual disconnects between the different approaches to video game research, allowing researchers to consider new ways to bridge gaps between the different approaches' strengths and limitations with novel methods.
• Much quantitative social science research has explored video games' social impact using widely varied methods and approaches.
• As light is sometimes studied as a wave and sometimes as a particle, video game research has used many perspectives.
• It is difficult to compare some game research because studies often examine one social dimension of games while ignoring others.
• Researchers exploring different video game dimensions are sometimes like the Indian parable of the blind men and the elephant.
• A typology of social science research approaches to video games will aid comparison, synthesis, and expansion of research.
• This review of video game research approaches identifies four distinct perspectives used in much video game research.
• The "video games as stimulus" perspective includes research focused on effects of game content and features on users.
• The "video games as avocation" perspective includes research focused on users of video games and their commitment to the medium.
• The "video games as skill" perspective includes research focused on video games as a tool for developing skills and abilities.
• The "video games as social environment" perspective includes research focused on social interaction between game users.
Introduction
The advent of video games as a commercial phenomenon has been accompanied by a surge of research on video games' social impact. Beginning in the 1980s and continuing ever since, many hundreds of studies by researchers specializing in communication, psychology, medicine, and related fields have explored the role video games have in their users' lives and in society. This vast body of research has been enlightening, but not always conclusive. For example, commonly-researched areas such as the effects of video game violence remain disputed, and bodies of literature exploring potential negative effects and concerns about video games often remain unreconciled with other scholarship exploring opportunities and positive outcomes of game use. Also obfuscating a clear overview of the state of communication research dealing with video games is the fact that video games have many different dimensions and functions, which limits the extent to which video games' impact can be described in simple and uniform terms.
Given that video games serve many functions, researchers studying them have employed a variety of different methods, theoretical perspectives, and measurement instruments in research on video games. This is appropriate, as different dimensions of video games call for different research approaches. Much as light is sometimes treated as a particle and sometimes as a wave in research depending on the focus and goals of that research, video games can be treated as a message, as a time commitment, as a simulation activity, or as a community depending on what games are being studied and how. Unfortunately, though, it is difficult to review, compare, synthesize, and build upon research that is so varied in nature, so much of the research on video games remains isolated by topic area in reviews and meta-analyses. As a result, social science research on video games often progresses in increasingly fragmented and insular streams that examine separate video game dimensions without informing a more comprehensive understanding of the overall role of video games in society. For example, while a group of meta-analyses of video game research examines the negative and positive effects of their content on users (e.g., Anderson & Bushman, 2001; Anderson, Shibuya, Ihori, Bushman, Sakamoto, Rothstein, & Saleem, 2010; Ferguson, 2007a; 2007b; Sherry, 2001), other meta-analyses focus on games' function as everything from a learning tool (Vogel, Vogel, Cannon-Bowers, Bowers, Muse, & Wright, 2006), to an exercise enhancing visuospatial cognition (Ferguson, 2007b), to a pastime encouraging a sedentary lifestyle (Marshall, Biddle, Gorely, Cameron, & Murdey, 2004), to an active physical activity (Peng, Lin, & Crouse, 2011).
Such different foci, categorized primarily by research topic, produce seemingly contradictory findings across topics about the social role of video games. From one perspective, a finding may indicate that video games discourage physical activity (e.g., Marshall et al., 2004), while from another perspective, a study may observe that some video games are themselves a healthy physical activity (e.g., Peng et al., 2011). From one perspective, research may indicate that video games are a potentially harmful stimulus promoting antisocial behavior (e.g., Anderson et al., 2010), while from another perspective, a study may find that video games are a normal and healthy part of an individual's social life and development (Ferguson & Garza, 2011).
These contradictions in findings about the social role of video games are sometimes the result of studies employing different methods and measures to produce different findings, but methodological differences are not all that separates discrepant research on video games. Differences between findings in different studies of video games are also the product of different assumptions about the fundamental nature of video games and their social role. Regardless of method employed or topic examined, different studies view video games as a stimulus users are exposed to, an avocation users spend time with, a task for users to practice and accomplish, or a medium for users to interact with each other. These approaches transcend method and topic in research, but selecting one of these approaches to a topic limits the way a topic can be conceptualized and researched.
While it is valuable that all of these bodies of research exist, it is important that we recognize the relative strengths and limitations of these approaches, consider how they contribute together to a comprehensive understanding of the social role of video games, explore ways to address popular research topics with different research approaches than previous research on those topics has employed, and plan new research on video games that will bridge different approaches instead of proceeding in insular "siloes" within one approach or another. In many cases, particular topics in video game research have been primarily addressed from only one approach, perhaps sometimes because the approach has been particularly useful in informing the topic and perhaps sometimes because researchers have simply been accustomed to approaching the topic from that approach. Reliance on one approach in examining a dimension of video games may be effective when the approach is well-suited to the topic (e.g., examining video games as a stimulus in research dealing with the effects of games' violent content), but such steadfast application of the same approach to a topic over time may neglect key relevant elements of the video game experience (e.g., failing to account for the increasing proportion of violent video game play that occurs between players online rather than by a player interacting with game content as a stimulus). Therefore, identifying approaches commonly used in different areas of research on video games can indicate not only what approaches have been consistently used to examine different topics, but also where some topics should be examined from different approaches to generate new insights (e.g., examining violent video game play as an online social behavior rather than only as a player's exposure to a violent media stimulus).
This literature review attempts to summarize much of the quantitative social science research on video games using a novel typology of video game research approaches often used to explore popular research topics. In addition to providing an up-to-date summary of a large number of studies from several areas of video game research, the review's typology of approaches will help organize criticism, comparison, and extension of research on video games. This typology will allow us to identify what video game research topics tend to employ different approaches, which bodies of research can be compared because they share a common approach, and what limitations bodies of research have due to their reliance on a given approach.
Using this typology to categorize existing approaches to video game research, we can also identify when research should further investigate topics using a novel approach, and we can address disconnects between approaches with new research that will bridge those disconnects to add new understanding about video games' social role. Below is an attempt at a comprehensive review of some major areas in quantitative social science research on video games, organized within a novel typology of four research approaches: "video games as stimulus," "video games as avocation," "video games as skill," and "video games as social environment" (see Table 1).
Video Games as Stimulus
Definition and Characteristics
The "video games as stimulus" approach is by far the most prominent approach in social science research dealing with video games over the last three decades. This approach includes research that examines the effects of video game content and format features on users' psychological and behavior responses. In some www.rcommunicationr.org cases, a single, very simple characteristic of video games, such as the presence of violence, is examined as a unidimensional and monolithic stimulus variable. In other cases, the effects of a more nuanced stimulus message with more semantically complex symbols and constructs, such as in-game persuasive advertising appeals, are examined (though these nuanced message characteristics can still be regarded as stimuli; see Simons, Detenber, Roedema, & Reiss, 1999;wright, 1974). whether viewing video games as one-dimensional stimuli or more complicated mass media messages, though, research from the "video games as stimulus" approach focuses on the effects of one or more dimensions of game content or format on users' responses in a manner consistent with the "media effects" tradition (see Eveland, 2003;McLeod, kosicki, & Pan,1991). Characteristics of the "video games as stimulus" approach therefore include (a) isolation of one or more game content or form elements as treatments to examine effects on users, (b) a presumption that treatment variables will have similar effects across games in which they are present, and (c) a presumption that treatment variables' effects are produced more or less independently of any effects that other game dimensions might have. Research using this approach may acknowledge that video games are an interactive medium whose users can inf luence their experiences with the games, but is still concerned primarily with identifying generalizable effects of individual game dimensions in isolation. Content analyses have consistently demonstrated that the majority of popular video games contain at least some form of violence (e.g., Dietz, 1998;Smith, Lachlan, & Tamborini, 2003), though the nature and extent of that violence varies widely across games (e.g., Thompson & haninger, 2001). The most visible research employing the "video games as stimulus" approach over the medium's history has been research examining the effects of violent content in video games. Consistent with the characteristics of the "video games as stimulus" approach, this research has typically involved laboratory experiments comparing the effects of a violent and nonviolent video game on an outcome variable, usually one associated with aggression, while often also attempting to hold as many other game elements as constant as possible across the compared games (but cf. Adachi & willoughby, 2010;2011). Alternatively, research in this area has also employed cross-sectional or longitudinal surveys asking participants to report their levels of exposure to video games to examine correlations with reports of problematic behavior, again with the intent of identifying video game violence as a universal treatment that can be measured as a level of exposure across numerous hours playing many different video games.
This research examining violence in video games as a treatment influencing users' responses dates back to the mid-1980s (e.g., Dominick, 1984; Anderson & Ford, 1986), when empirical research first began to investigate potential relationships between violence in game content and aggression in users. Since then, some prominent and widely-cited studies have pointed toward a relationship between exposure to violence in video game content and aggressive thoughts, feelings, and behavior in users (e.g., Anderson & Dill, 2000; Bushman & Anderson, 2002; Carnagey, Anderson, & Bushman, 2007). The emergence and promotion of these studies initially led to a dominant viewpoint in fields from psychology to medicine that violent video games represented a substantial cause of aggression in users. However, a number of published studies have not observed such a relationship between violence in video games and aggression, and some studies have even found that exposure to video game violence reduced aggressive responses (see Ferguson, 2010; Ferguson & Rueda, 2010) or increased prosocial behaviors (e.g., Ferguson & Garza, 2011). Critics have also argued that measures are used selectively in some cases to increase the likelihood of significant findings (Ferguson & Heene, 2012).
The mixed results across studies dealing with video game violence are exemplified by meta-analyses synthesizing scores of these studies examining the effects of video games on measures of aggression; these meta-analyses have been similarly mixed in their findings regarding a relationship between exposure to violence in video game content and measures of aggressive thoughts, feelings, and behaviors. Some meta-analyses observe such a relationship (e.g., Anderson & Bushman, 2001; Anderson, et al., 2010), though sometimes suggesting effects on aggression may be weaker than violence in other media such as television and film (Sherry, 2001). Other meta-analyses do not indicate such a relationship (Ferguson, 2007a; 2007b; Sherry, 2007). Further, the results of meta-analyses finding a relationship between violent video games and aggression have been challenged on the grounds that the relationship exists because studies finding no such relationship are unlikely to be published (Ferguson, 2007a; 2007b; Ferguson & Kilburn, 2009) and because meta-analysis authors may have been biased in their selection of studies included in analyses (Ferguson & Heene, 2012).
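To see why such syntheses can disagree, it may help to sketch the standard random-effects pooling of effect sizes; this is a generic textbook formulation, not the specific model of any meta-analysis cited here:

\[ \bar{d} = \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i + \tau^2}, \]

where \(d_i\) is study \(i\)'s effect size, \(v_i\) its sampling variance, and \(\tau^2\) the estimated between-study variance. Because \(\bar{d}\) is a weighted average over only the \(k\) studies an author chooses to include, different inclusion criteria, or null results that never reach publication, can shift the pooled estimate, which is one mechanism behind the disagreements described above.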
In addition, recent research has identified another key video game characteristic that may actually be the cause of the aggressive responses to violent games: competition (Adachi & Willoughby, 2010; 2011).
Most violent video games include a competitive element to produce the conflict that leads to their violent content (e.g., antagonists engaging a character in a fight), and a pair of recent studies comparing effects of competition and violence on aggressive responses suggest that this competition is a clearer influence on aggressive outcomes than violence (Adachi & Willoughby, 2011). Such results suggest that effects of violent games on aggression that were previously attributed to violence in the games may actually be an effect of competition that is also typical in violent games.
These and other concerns have provoked calls for researchers investigating effects of video game violence to be more cautious and back away from earlier claims that violent video games present a substantial risk of harming their users while re-examining the research that produced such claims (e.g., Hall, Day, & Hall, 2011a; 2011b). Meanwhile, other researchers hold steadfastly to the claims that a conclusive link between violence in video games and meaningful aggression in users has been sufficiently evidenced (e.g., Murray, Biggins, Donnerstein, Menninger, Rich, & Strasburger, 2011). Therefore, the state of research regarding the negative effects of violent video games on aggression is very much in flux, with the longstanding dominant opinion that violence in video games is harmful under siege from new interpretations and data suggesting that the effects of video game violence may have been overstated and may also result from other game factors. In any case, arguments on all sides have been based largely in the "video games as stimulus" approach, and are likely to continue to be.
In addition to research focusing only on the presence or absence of violence, some research on violence in video games examines specific portrayals of violence and the effects of those messages. For example, a content analysis (Smith et al., 2003) raised concerns about not only the high prevalence of violence in video games, but also the nature of its portrayal, claiming that most aggressive exchanges in the games sampled were portrayed as justified, depicted as rewarded or unpunished, and shown with unrealistically low consequences for victims. Some research on effects of video game violence has also investigated specific elements of the way in which violence is portrayed. For example, Carnagey and Anderson (2005) conducted three studies exploring whether a game that rewarded violence would have greater effects on aggression than a game that punished violence or a nonviolent game. The studies found that playing either type of violent game had similar effects on hostility compared to a nonviolent game, but that playing the game that rewarded violence tended to elicit more aggressive thoughts and behaviors compared to the game where violence was punished or the nonviolent game. That study, however, has been subject to recent criticism that observed effects were actually due to different levels of competitiveness across game conditions (Adachi & Willoughby, 2011).
A group of studies has also explored the effects of bloodshed as a specific element of video games' portrayals of violence, with mixed results (Ballard & Lineberger, 1999; Ballard & Wiest, 1996; Barlett, Harris, & Bruey, 2008). One study (Ballard & Wiest, 1996) found that playing a fighting game portraying bloodshed led to increased blood pressure and hostility compared to playing the same game without blood depicted or playing a nonviolent game. Another study varying levels of blood (Barlett, Harris, & Bruey, 2008) found that playing a fighting game set to display higher levels of blood led to more arousal than playing the game set to display lower levels of blood or none at all. A third study (Ballard & Lineberger, 1999) found that playing a violent game led to less reward behavior and more punishment behavior from players in a subsequent task compared to playing a nonviolent game, but that varying the amount of blood depicted in the violent game did not influence either type of behavior.
In another study, Konijn, Nije Bijvank, and Bushman (2007) found that children assigned to play a violent video game exhibited more aggressive behavior in a laboratory than children assigned to play a nonviolent game, but also found that children who played the violent game exhibited more aggression if they identified more with the characters. They also compared effects of realistic and fantasy games that were violent and nonviolent, finding that games' realism increased identification, but not aggression. As with many studies finding negative effects of video game violence on aggression, though, the aggressive behavior measure used in the study has been criticized as potentially invalid (Ferguson & Rueda, 2009), and the appropriateness of the games used in the study's nonviolent and violent conditions has also been questioned (Ferguson, 2010; Ferguson & Kilburn, 2010).
Lastly, an experiment involving users of an online game (Williams, 2006b) examined effects of game violence on perceptions of social reality rather than on aggression. That study found that participants assigned to play a violent online game for a month were more likely than participants not assigned to play the game to overestimate rates of assault with weapons, a specific type of crime portrayed in the game, but that there were no substantial differences between participants who played and did not play the game in terms of their estimates of types of crime not portrayed in the game.
Portrayals of gender, race, and age.
Another prominent example of research from the "video games as stimulus" perspective is research dealing with demographic portrayals in video games. Content analyses have repeatedly found that female characters are underrepresented in video games, less likely to be characters that the user can take the role of in the game, more likely to be passive characters, and more likely to be portrayed in sexualized ways (e.g., Dietz, 1998; Beasley & Standley, 2002; Dill & Thill, 2007; Downs & Smith, 2010; Ivory, 2006; Williams, Martins, Consalvo, & Ivory, 2009). Perhaps the largest such study, which analyzed 4,966 characters appearing in the 150 top-selling games from one year, found that female characters represented only 14.77% of all characters and only 10.45% of primary characters. Data from that study also indicated that female video game characters tended to be thinner than the average American woman in highly photorealistic games, though female video game characters were larger than the average American woman in less graphically realistic games (Martins, Williams, Harrison, & Ratan, 2009). Another recent study (Downs & Smith, 2010) observed that female characters in video games were proportionately more likely than male characters to be portrayed as partially nude or in revealing clothing and more likely to have an unrealistic body shape typically unattainable "without the aid of augmentation, plastic surgery, or chemical injections" (p. 725).
These portrayals may have a negative influence on both male and female users' perceptions of women in the real world, as is evidenced by an experiment finding that male and female users who played a video game with a sexualized female main character tended to exhibit more unfavorable perceptions of women on some questionnaire measures compared to users who played the same game with a female main character who was not sexualized or did not play a game at all (Behm-Morawitz & Mastro, 2009). These results, which are consistent with research on the effects of gender portrayals in other media (see Bessenoff, 2006; Signorielli, 1989, but cf. Muñoz & Ferguson, 2012), suggest that video games' portrayals of gender roles in society may have negative implications for users' perceptions of gender roles.
There is also some research on other demographic portrayals in video games and the potential effects of those portrayals. A content analysis examining portrayals of race in video games found that White characters were overrepresented in top-selling video games relative to the race's prevalence in the U.S. population, with Black, Hispanic, biracial, and Native American characters underrepresented and Asian/Pacific Islander characters slightly overrepresented (Williams, Martins, et al., 2009). The same content analysis examined age of video game characters as well, finding that adult characters were overrepresented relative to the U.S. population, as were teens to a lesser extent, while children and the elderly were underrepresented (Williams, Martins, et al., 2009). While there has been limited research about the effects of such disproportionate portrayals of race and age in video games on games' users, the messages these portrayals send also exemplify the "video games as stimulus" perspective.
Advertising and product placement.
Another growing body of research within the "video games as stimulus" approach is research dealing with advertising and product placement within video games. This emerging research area has generally tended to find that commercial messages within video games are effective, both in terms of how well they are remembered and the favorable impressions they create. Yang, Roskos-Ewoldsen, Dinu, and Arpan (2006) found that advertising in soccer and racing games elicited better implicit memory for advertised brands than for brands not advertised in the games. Another study by Glass (2007) found that brands advertised within a boxing video game elicited quicker positive responses than negative responses in subsequent implicit association tests, and also elicited quicker positive responses than brands not advertised in the game.
An experiment by Lee and Faber (2007) found that users remembered product placement advertising in a driving video game better when ads were centrally-placed in the game than when ads were placed in peripheral locations, and also that users remembered ads for products that were not closely related to the game's topic better than ads for products that were congruent with the game's topic. These effects, though, varied considerably depending on users' experience with video games and their level of involvement with the game.
Although research on advertising in video games has a shorter history than research on some other game messages such as portrayals of character demographics, it is likely that this line of research will continue to be active and vital given that advertising is not only present in some video games, but often the entire purpose of the common and increasingly prevalent "advergames" made available online by companies selling their products.
Game controls.
Another notable example of research examining effects of video game elements from the "video games as stimulus" perspective is the majority of the growing body of research investigating responses to game control formats. While a control interface might in principle be considered a way to interact with game content and features rather than a game stimulus dimension, much research dealing with effects of game controls has examined uniform effects of control interfaces rather than the way an interface is used or an interface's efficacy as a mode of interaction. Therefore, such research can also be categorized as falling within the "video games as stimulus" perspective because of its focus on uniform effects of game controls on user responses. Studies on effects of control schemes, though, are very mixed. For example, a study by Barlett, Harris, and Baldassaro (2007) found that playing a shooting video game with a "light gun" controller elicited higher levels of heart rate, aggression, and hostility than playing the same game with a traditional controller. Conversely, though, a study by Markey and Scherer (2009) found that using motion-based controllers did not enhance effects of a violent video game on hostility or aggressive thoughts (though the violent games were found to generate more hostility and aggressive thoughts than the nonviolent games regardless of control format). A pair of studies by Skalski, Tamborini, Shelton, Buncher, and Lindmark (2011) comparing effects of controls varying in "naturalness" on feelings of presence and enjoyment generally found that more natural controllers increased both responses. More recently, Schmierbach, Limperos, and Woolley (2012) found that a steering wheel controller increased enjoyment of a driving video game compared to a traditional game controller. On the other hand, a study by Limperos, Schmierbach, Kegerise, and Dardis (2011) found that playing a game with a traditional controller elicited more feelings of control and enjoyment than a more advanced and "natural" controller.
Trends in Popularity over Time
In the early years of video game research, studies using the "video games as stimulus" perspective may have been common because the content and features of video games were relatively simple due to limitations in graphical processing, computer memory, interface hardware, and game development budgets. In many cases, the content of these early games might have been easy to break down into a few defining characteristics to explore their effects. More recently, technological advances in game hardware and software, along with growth in game development budgets, have engendered massive increases in the complexity and depth of video games' content and made games' messages much more nuanced than a few identifiable stimulus characteristics. Even as video games become more complex and multidimensional, though, the "games as stimulus" approach remains popular, most likely because the approach is very conducive to experimental designs and variable-focused theoretical models of media effects (see Eveland, 2003;McLeod et al., 1991).
Therefore, the "games as stimulus" approach has not only been popular across the history of social science research involving video games' effects, but will also likely be a dominant approach through which video games will be investigated by researchers for the foreseeable future.
Advantages and Limitations
The popularity of the "video games as stimulus" approach has advantages in its suitability to studies conducted in a controlled environment aiming to isolate specific variable relationships in video games' effects, but widespread adherence to the approach may delay a fuller understanding of how many contexts and characteristics of video game play beyond the games themselves may impact game users. For example, while a great deal of video game play now takes place between friends and strangers in an online setting (see Williams, 2006c), the bulk of research from the "games as message stimulus" perspective continues to employ experiments where a single research participant plays a video game in a controlled setting so that effects of the games' characteristics alone can be isolated and analyzed, or surveys isolating relationships between game exposure and dimensions of users' perceptions and behavior that are expected to be influenced by games.
The "video games as stimulus" approach also treats the game experience largely as a one-way communication process wherein users absorb and interpret game messages. while this approach may be wellsuited to video games that provide similar content to all users, viewing video games as a one-way message may be poorly suited to the study of increasingly common video games that provide users with an active role in determining their content and games that allow users to engage one another online and therefore create novel content and experiences for each other. One might consider, for example, whether traditional "video games as stimulus" research on responses to video game violence are relevant to an understanding of whether or how users of popular "massively multiplayer online role-playing games" such as world of warcraft or Lord of the Rings Online are inf luenced by the violent content that is present in these games, Video Games as a Multifaceted Medium: Review and Typology 45 2013 , 1 (1), 31-68 but often almost tangential to the completion of "quests" and social interaction that takes place in their online virtual environments. The one-way focus of the "video games as stimulus" approach also provides little understanding about why people use video games and what makes them choose one game over another.
Video Games as Avocation Definition and Characteristics
The "video games as stimulus" perspective may have tended to dominate research dealing with various video game content, forms, and effects, but a second perspective has also been highly prominent in research on video games. The "video games as avocation" approach has been concerned not with the nature or effects of video games, but with those who use video games. In some ways, the "video games as avocation" perspective's focus on video game users and use behaviors complements the popular "video games as stimulus" perspective in a manner analogous to the way that the uses and gratifications perspective of communication theory (see Blumler, 1979;Ruggiero, 2000) complements other media effects perspectives.
Research based in this approach is characterized by a focus on the characteristics of video game users, why they play video games, the amount of time they spend with video games, and potential problems associated with their commitment to playing video games.
Examples
Video game use.
Some of the most long-standing research on video game users has been surveys measuring video game use, both in general and across gender and age groups. Numerous studies of youth, adolescents, and adults have consistently observed that males are more likely to play video games than females and play them more frequently. Surveys of 900 fourth- through eighth-grade students in the United States conducted in the early 1990s documented these patterns (e.g., Buchman & Funk, 1996; Funk, 1993). One more recent study (Williams, Yee, & Caplan, 2008) has been able to more precisely identify trends in online game user characteristics by examining game server data in conjunction with surveys. This study's findings were generally consistent with data from other studies of online game players, observing that players were 31.16 years old on average and that 80.8% of players were male. However, the user log data also indicated that female players spent more time on average playing EverQuest II than males, and that female players tended to underestimate how much time they spent playing per week more than males did.
In addition to tracking video game play and comparing play tendencies across genders, studies of video game use have also observed gender differences in game type preferences and motivations for play. These studies have tended to find that as with total amount of play, game preferences and motivations for play have also varied across genders (e.g., Griffiths & Hunt, 1995; Lucas & Sherry, 2004; Yee, 2006a).
Problematic use and "addiction."
Just as research has tracked video game use for decades, concern about the harmful overuse of video games has explored the potentially dangerous side of the "video games as avocation" perspective for just as long. Almost as soon as video games became a popular commercial pastime, arguments sprouted about their risk for addiction and overuse (e.g., Anderson & Ford, 1986; Klein, 1984). Numerous studies have shown evidence that some users are at risk for problematic use and overuse of video games (e.g., Griffiths, 1991; 1997; Griffiths & Hunt, 1995; Griffiths & Meredith, 2009; Grüsser, Thalemann, & Griffiths, 2007; Lemmens, Valkenburg, & Peter, 2009). A survey of 387 12- to 16-year-olds in the United Kingdom (Griffiths & Hunt, 1998) produced estimates that one in five adolescents was "dependent" on video games, while a more recent estimate from Gentile's (2009) survey of 1,178 8- to 18-year-olds in the United States placed the rate of "pathological" video game use at 8%. Similarly, a two-year survey of 3,034 students in the third, fourth, seventh, and eighth grades in Singapore (Gentile, Choo, Liau, Sim, Li, Fung, & Khoo, 2011) found that about 8% could be classified as "pathological" video game users. Most studies, though, tend to find much lower rates of problematic game use, in part because they use less liberal measures of problematic use; one meta-analysis (Ferguson, Coulson, & Barnett, 2011) found the overall prevalence of pathological gaming across included studies to be 3.1%.
Given that the Internet has been generally identified as a medium that is prone to overuse (e.g., Caplan, 2002; 2003; McKenna & Bargh, 2000; Young, 1998), it is no surprise that online games have been singled out in particular as a threat for problematic use. In Yee's (2001) seminal survey of online game players, a majority of respondents reported being "probably" or "definitely" addicted to EverQuest, while Castronova's (2001) survey of EverQuest users observed that 38.1% of respondents spent more time playing the game than at their jobs. A third survey of EverQuest users (Griffiths et al., 2004a; 2004b) found that some respondents spent as much as 70 hours per week playing the game, and that substantial minorities of respondents reported neglecting other activities such as hobbies, sleep, time with friends and family, work, and school to play EverQuest. A series of surveys of MMORPG users by Yee (2006b; 2006c) found that a majority of players have spent at least 10 continuous hours in one play session and that 18% of players believed that their MMORPG play had negatively affected their schoolwork, health, finances, or personal relationships. When asked if they were "addicted" to an MMORPG, about half of those respondents said yes.
Other surveys of MMORPG users have produced similar results with regard to players reporting long play sessions (Ng & Wiemer-Hastings, 2005) and negative effects of MMORPG use on their lives (Charlton & Danforth, 2007; Cole & Griffiths, 2007). A final survey of MMORPG users (Hussain & Griffiths, 2009) estimated that 7% of players may be at risk for problematic game use behaviors.
Complicating research on unhealthy video game use is an absence of a consensus regarding the appropriateness of the term "addiction" to describe video game overuse (Griffiths, 2008; Griffiths & Meredith, 2009; Wood, 2008); the term "problematic use" is often substituted to sidestep the difficult questions surrounding whether compulsive use of media constitutes an addiction per se (Caplan, 2002). Also, some have claimed that problematic use of video games has been overestimated because some measures of problematic use actually only assess high engagement with games, which is not necessarily problematic (Charlton, 2002; Charlton & Danforth, 2007). Still others argue that video game "addiction" is often used inaccurately to describe cases where people are simply poor time managers or using video games excessively due to other underlying problems (Wood, 2008).
Trends in Popularity over Time
As with research from the "video games as stimulus" perspective, research in the "video games as avocation" tradition has been consistently prominent for decades. The approach's utility in determining the medium's prominence in our society, as well as in identifying potential harms of overuse, ensures that "video games as avocation" research is a robust part of the video game research landscape. If anything, research in this tradition appears to be increasing in prevalence with the growing presence of online video games. This increase is likely in part because of their popularity, in part because of concerns about their unique potential for harm compared to other video games, and in part because of unique opportunities for collection of user data through partnerships between researchers and the video games industry.
Advantages and Limitations
By focusing on who plays video games, as well as patterns of their use and reasons they choose to play games, the "video games as avocation" approach recognizes the active role that video game users have in the medium's social impact. By the same token, though, much research from the "video games as avocation" perspective is burdened by the same weaknesses as research focused on media uses and gratifications in that it is reliant on self-reports that may not be accurate. Therefore, "video games as avocation" research may not always uncover patterns of video game use and user characteristics as accurately as research from the behavioral tradition. That concern is mollified, though, by novel methods of data collection that allow researchers to access video game use data directly from game servers rather than from self-reports alone (e.g., Williams et al., 2008).
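As a minimal sketch of this kind of server-log validation, the Python example below compares self-reported weekly play time against hours recorded by a game server; the player IDs and all figures are invented for illustration and are not data from any cited study.

# Hypothetical records: (player_id, self-reported hours/week, server-logged hours/week)
records = [
    ("p01", 10.0, 14.5),
    ("p02", 25.0, 24.0),
    ("p03", 5.0, 9.0),
]

# Underestimation = logged minus reported; a positive value means a player
# reported less play time than the server actually recorded.
for player_id, reported, logged in records:
    print(f"{player_id}: reported {reported:.1f} h, logged {logged:.1f} h, "
          f"difference {logged - reported:+.1f} h")

mean_gap = sum(logged - reported for _, reported, logged in records) / len(records)
print(f"Mean underestimation: {mean_gap:+.2f} hours/week")

Pairing each survey response with the same respondent's server log in this way is what allows researchers to quantify systematic over- or under-reporting rather than relying on self-reports alone.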
Video Games as Skill
Definition and Characteristics
Although the "video games as stimulus" and "video games as avocation" approaches are arguably dominant in the history of research on video games' social role, there is also a smaller but important body of research that examines a different set of outcomes from video game use. A long tradition of research confirms that casual play of all kinds serves a meaningful role for both humans and animals in the development of important life skills (see Frederickson, 1998). Similarly, the "video games as skill" approach includes studies that explore the practical outcomes of the video game medium by investigating physical and cognitive skills developed through video game play. Generally speaking, then, research in the "video games as skill" approach is characterized by research linking video game play to general development of any of a range of physical and cognitive abilities. Like the "video games as stimulus" perspective, the "video games as skill" perspective deals largely with how playing video games changes aspects of the user, but the direction of the interaction between the game user and game differs between the two perspectives; while the "video games as stimulus" approach views video games as something that users are exposed to and influenced by, the "video games as skill" perspective views video games as a tool that users employ to develop abilities and practice and perform skills.
Examples
Perception, cognition, and motor skills.
In the very early years of video game research, several studies observed that video game users performed better than non-users on some tasks measuring performance related to perception and coordination. Such tasks included performance on a pursuit rotor exercise (tracking a rotating dot on a turntable with a metal wand) (Griffith, Voloschin, Gibb, & Bailey, 1983), a Bassin timer exercise (pressing a button in time with the arrival of a moving light on a runway) (Kuhlman & Beitel, 1991), and a spatial representation task (a mental paper-folding exercise) (Greenfield, Brannon, & Lohr, 1994), among others. While these studies simply correlated video game experience with perceptual, spatial, and motor performance, other studies went further by isolating causal effects of a video game session on performance in similar tasks (e.g., Dorval & Pépin, 1986; Okagaki & Frensch, 1994; Subrahmanyam & Greenfield, 1994).
A particularly widely-cited series of studies (Green & Bavelier, 2003) identified a positive effect of action game play on multiple measures of visual selective attention. Likewise, a meta-analysis summarizing studies of effects of action games on visuospatial cognition (Ferguson, 2007b) found evidence that the body of research in the area indicated a positive relationship. Interestingly, the effects of video games on visuospatial cognition may be stronger for violent video games than nonviolent video games, largely because of the nature of the action in violent games (see Spence & Feng, 2010). Some research has also shown connections between video game experience and performance in specific vocational skills, such as some types of surgery (Lynch, Aughwane, & Hammond, 2010; Rosser, Lynch, Cuddihy, Gentile, Klonsky, & Merrell, 2007). While recent research has suggested that the bounds of positive effects of video games on visuospatial cognition may be limited (Valadez & Ferguson, 2012), and that methodological flaws may cause some research to exaggerate effects of video games on perception and cognition (Boot, Blakely, & Simons, 2011), there appears to be a general consensus that video games can have positive effects on some skills related to perception and cognition. It should be noted, though, that not all research based in the "video games as skill" perspective involves users developing skills that are necessarily healthy or prosocial; one experiment, for example (Whitaker & Bushman, in press), found that playing a shooting game with a pistol-like controller instead of a typical game controller led users to successfully make almost twice as many "headshots" and 33% more other shots on a mannequin in a subsequent target-shooting task.
Physical activity.
Video games have also been explored as a potential positive influence on general physical health and fitness, though primarily only in recent years. An early study (Segal & Dietz, 1991) found that playing a standing arcade game resulted in more energy expenditure than standing without playing a game, and that energy expenditure from playing the arcade game was comparable to a slow walk. Video game play was not associated with enough energy expenditure to be recommended as an acceptable form of cardiovascular exercise. Considering that many video games are played in a sedentary position rather than the standing position used for many arcade games, video game use has historically been associated with inadequate physical activity levels and a risk of unhealthy weight (Vandewater, Shim, & Caplovitz, 2004). However, the potential for positive physical health outcomes from video game use has been revived by the recent development of several popular active video game control interfaces that require users to control games with movement, such as using a motion-sensing controller, standing and moving on a motion- and weight-sensing device, or moving their bodies in front of a camera interface. A number of "exergames" or "active video games" (AVGs) developed using these interfaces show promise for encouraging physical activity during game play rather than the sedentary states traditionally associated with video games (e.g., Graf, Pratt, Hester, & Short, 2009; Graves, Stratton, Ridgers, & Cable, 2007). Both an interpretive literature review (Peng, Crouse, & Lin, in press) and a meta-analysis (Peng et al., 2011) support AVGs' potential to increase players' physical activity.
Trends in Popularity over Time
As the brief review of research from the "video games as skill" perspective indicates, the perspective was manifested in a number of studies during some of the early years of video game research (roughly the 1980s and early 1990s), but was somewhat quieter for more than a decade while research on effects of game content proliferated. The combination of a resurgence of research on games and visuospatial cognition and new developments in active video game technology, though, has spawned a resurgence in "video games as skill" research in recent years to complement the ongoing video game research dealing with use and social responses.
Given the exciting potential implications of some research from this approach, though, it is likely that research based in the "video games as skill" approach will continue to be prevalent in the future.
Advantages and Limitations
While many of the other video game research approaches described here regard video games in much the same way as other communication media in their exploration of content, use, and effects, the "video games as skill" perspective is sensitive to the fact that video games are indeed games with unique characteristics and functions compared to other media. By keeping the game component of video games in focus, the "video games as skill" perspective is best suited to address the unique contributions of video games compared to other media. As the brief review above indicates, some of these unique contributions may be very promising. At the same time, though, video games do contain powerful stimuli and messages, so stripping their function down to only a task or exercise fails to take into account the amount of social information that video games convey as a rich and dynamic medium. Despite the approach's focus on games as a tool for developing skills of one type or another, research applying the approach is also often limited in the extent to which it can truly show evidence for long-term causal effects of game play on skills because much of the research consists of either short-term experiments or correlational studies rather than prospective or longitudinal experiments.
Video Games as Social Environment
Definition and Characteristics
While all three approaches of video game research described so far deal with the way people use video games and respond to them, the increasing presence of video games that allow users to interact with each other online ushers in the final video game research perspective in this typology: "video games as social environment." Research from the "video games as social environment" perspective focuses not on how much people interact with video games or how they respond to video game content and technology, but rather on how people use video games to interact socially with other people online. Therefore, this research perspective addresses video games as an interpersonal and group social medium rather than as a one-way mass medium or interactive simulation.
Examples
Social interaction and relationships.
Although even the first prototypes of video games were designed for more than one person to play together (Consalvo, 2006; Kirriemuir, 2006; Lowood, 2006; Rockwell, 2002; Williams, 2006a), social interaction between video game players has tended to be understudied over the history of video games research in favor of the research more in line with the "video games as stimulus" tradition. There are some exceptions, of course, involving early video game research dealing with social interaction between game users. For example, Fisher's (1995) survey of young arcade game players found that socializing with others was a primary motivation for their arcade visits. For the most part, though, social interaction between game players was neglected in early research, a decision perhaps justified by surveys finding that most video game players used the games alone even when playing at public arcades (Selnow, 1984). More recent surveys of online game users have given social interaction closer attention, many of them the same surveys of online game use described above in the review of research from the "video games as avocation" perspective. The surveys revealed not only that online video game users commit a lot of time to their games, but also that they enjoyed a rich virtual social landscape. For example, Yee's (2006b; 2006c) surveys of MMORPG players found that 39.4% of male respondents and 53.3% of female respondents considered their friendships with people in online games to be as good as or better than their friendships based outside of the games. Of the respondents, 32.0% of females and 22.9% of males also claimed they had told a personal secret to a friend in a MMORPG that they had not told a friend outside of an online game setting, and 15.7% of males and 5.1% of females in the survey had been involved in a physical dating relationship with someone they met in an MMORPG. Many MMORPG users in the surveys also claimed to have learned interpersonal, leadership, and social skills from playing the games.
A second survey of MMORPG users (Cole & Griffiths, 2007) found that about three-quarters of respondents reported making good friends in the games, with more than a third having discussed sensitive topics with their friends in an MMORPG. Other surveys of MMORPG users indicate that online games may offer a valuable alternative to other social opportunities (Ng & Wiemer-Hastings, 2005), that many MMORPG users' primary motivations for playing are social (Griffiths et al., 2004a; 2004b; Williams et al., 2008), and that most MMORPG users prefer to take part in groups that are primarily social in nature within the games (Williams, Ducheneaut, Xiong, Zhang, Lee, & Nickell, 2006). Such research indicates that for online game users, much of the play experience is not about the content and tasks of the game as much as it is about the social interactions that the games provide.
Online behavioral observation.
Given that online games provide users with a dynamic social environment, online games also provide researchers with an opportunity to observe some social behavior at a level of detail that is not possible in everyday life. Studies have observed that many of the social behaviors online game users exhibit in game environments mirror patterns and tendencies observed in studies of real-life social behavior, including subtle behaviors such as nonverbal communication and gestures (Williams, 2010; Yee, Bailenson, Urbanek, Chang, & Merget, 2007). This correspondence between behavior in games and in real life makes some researchers optimistic that social behavior in online games can be studied not only to understand games' social dynamics, but to understand how people may interact in the real world. Suggestions for topics that can be studied using online games to better inform an understanding of real-life phenomena have ranged from economic trends and behaviors (Castronova, Williams, Shen, Ratan, Xiong, Huang, & Keegan, 2009) to disease outbreaks and epidemics (Balicer, 2007; Lofgren & Fefferman, 2007). Such efforts are a testament to the richness of online games as a social environment.
Trends in Popularity over Time
As has been mentioned above, the study of social dynamics of video games has been very limited until relatively recently, even though video games have allowed users to play together since their inception and online game environments have existed for more than three decades. Therefore, the "video games as social environment" approach is the most recent of the four perspectives described here to see a high level of representation in research activity. As the populations of online games continue to grow, though, and their research potential becomes clearer, research on social dimensions of online games has flourished in recent years and can be expected to continue to do so. In fact, it is very possible that the "video games as social environment" approach may dominate the future research on video games, eclipsing previously common perspectives that have focused more on users' interactions with games than with each other in games.
Advantages and Limitations
As video games become increasingly likely to include online components, either as a game feature or a central aspect, the value of the "video games as social environment" approach is clear. While other perspectives like the "video games as stimulus" approach have treated video games as a one-way influence, the "video games as social environment" approach focuses on the dynamic interpersonal interactions that are a key component of most online game users' experience. Considering the many millions of video game users who play games online, as well as the time many of them commit to the games, it is a grave error for researchers to continue to examine the social impact of video games based purely on games' content and potential effects of that content on users. An awareness of video games' social dimensions is critical to an understanding of today's video game landscape. On the other hand, though, research focused on video games' content, uses, and effects remains valuable. Online video games represent only part of the broad range of video games available, so individual uses and responses still require investigation. Further, researchers must take care to note that even though some online game behaviors will correspond closely with real-life behaviors, this will not always be the case. Therefore, research employing online games to examine social phenomena must proceed cautiously to ensure that online games are used as a model for social behavior only when appropriate (Williams, 2010).
Applying the Typology when Critiquing and Conducting Research
While some research topics and methods are better suited to one approach than another, the approaches to video game research outlined here transcend topic and method. Therefore, the four categories of this typology have some utility simply as organizational heuristics in critiques of existing research.
Given that research from each of the four perspectives tends to share common advantages and limitations, identifying the approach that informs a study provides insight about a study's strengths and weaknesses.
Just as several advantages and limitations of a study can be known as soon as its method is revealed (e.g., a laboratory experiment is potentially useful in isolating causal relationships but limited by artificiality; a cross-sectional survey can identify correlations in a large group but frustrates attempts to eliminate alternative explanations for a relationship), several advantages and limitations of a study about video games can be known once its approach is recognized (e.g., a study from the "games as stimulus" perspective may inform potential psychological effects but neglect the role of social interactions with other players in game experiences; a study from the "games as social environment" perspective may address interpersonal and group dimensions of game use but may neglect how the game itself may influence users' perceptions and behavior).
In this manner, the limitations of assumptions behind a study or group of studies in a topic might be summarized by describing their adherence to one approach to the exclusion of other approaches, and the approach typology can be used to inform new directions in research on a topic. This typology may also provide a comprehensible entry point for scholars who are less familiar with video game research, demonstrating in quick and simple terms what approaches to video game research guide the broad range of research studies exploring the medium and where there may be opportunities to employ new approaches in exploring a video game research topic. As this review indicates, it is often the case that a body of research on a given topic related to video games is based largely or wholly in one research tradition; that does not mean that this should be the case, though.
For example, much of the research on popular "first-person shooter" games tends to examine their effects from the "video games as stimulus" perspective, which is valid for answering some questions but completely neglects the fact that much "first-person shooter" play now takes place in online multiplayer environments. Therefore, research examining first-person shooter games from the "video games as social environment" perspective may be needed to supplement the body of research on the topic from the "video games as stimulus" perspective. Similarly, this review has noted that much of the literature on problematic video game use and "addiction" is based in the "video games as avocation" approach. This perspective has been useful in informing prevalence of video game use and overuse, and some relationships between individual difference variables and problematic game use. However, problematic video game use could also be studied effectively from other perspectives to better inform the broad picture of unhealthy game use, such as with research from the "video games as stimulus" perspective investigating video game features that produce effects conducive to problematic use, or research from the "video games as social environment" perspective exploring social dynamics of online game relationships that are associated with problematic use. Any number of video game research topics can be examined similarly, using this typology to determine where the existing research has not been explored with multiple approaches.
Finally, research designs can work to address topics and questions comprehensively by employing designs based in multiple approaches from the typology. For example, we have extensive research on the effects of video game violence from the "video games as stimulus" perspective, but more research examining the effects of such game features for users with different play habits and motivations would hint at whether some players are more at risk for potential negative effects than others, synthesizing the "video games as stimulus" and "video games as avocation" perspectives to better inform the media effects picture. In this way, mindfulness of the typology of approaches to video games might serve as a valuable structure for research designs seeking to take into account the full range of roles that video games serve rather than only addressing one facet of video games at a time.
Conclusions
In an attempt to synthesize the widely varying foci of the vast and growing corpus of literature dealing with video games, this article has presented a thorough review of quantitative social science research on video games and their social impact over the past few decades, as well as a novel typology of four different approaches within which much of that research can be categorized. While new research topics and findings will continue to emerge, this categorization of video game research perspectives will hopefully allow us to determine how new studies can be compared to existing work and how they can be placed in the vast context of the video game research landscape. Further, this categorization is also meant to guide development of new studies by delineating the characteristics, strengths, and limitations of each approach to help researchers make clear determinations as to how best to investigate new problems.
This typology is not without its limitations, most notable among them being that it categorizes only quantitative social science research on video games. This means that the four game research approaches described here are positioned within the broader paradigm of empirical social science research. The existence of valuable research from other qualitative and critical research perspectives should also be acknowledged, and future scholarship may be able to position the approaches described here within a larger typology of video game research approaches that spans more methodological and conceptual approaches to video games.
Finally, and most ambitiously, it is hoped that by describing and explicating each of these existing approaches in video game research, this typology can help scholars carefully consider what all of the approaches are missing and develop novel approaches that will guide future research probing new questions about video games. Every video game study has its strengths, weaknesses, limitations, and assumptions. Using the broad typology described here, perhaps we can better understand gaps and opportunities in the existing research, reconcile discrepancies in findings from different perspectives, and design new and better studies and approaches to draw ever closer to a comprehensive understanding of the social impact of video games.
| 2014-10-01T00:00:00.000Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "a94ba8133b568751c6fabe373ee548fb41232186",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.12840/issn.2255-4165_2013_01.01_002",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "efc9511e15b1878c80221ec12e4ef7a131596f2e",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
208021650 | pes2o/s2orc | v3-fos-license | Object Detection and Classification for Self-Driving Cars
With the advancement of image processing, object detection has become one of the most interesting topics due to its spectrum of real-time applications. For the past 10 years, Advanced Driving Assistance Systems (ADAS) have grown rapidly. Recently, not only luxury cars but also some entry-level cars have been equipped with ADAS applications, such as the Automated Emergency Braking System (AEBS). ADAS systems are used for assisting drivers by providing advice and warnings when necessary. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. This application performs multiple object detection and classification in a given video based on Open Computer Vision (OpenCV) libraries. The application also uses the MobileNet architecture, the SSD (Single Shot Detector) framework and a Caffe (Convolutional Architecture for Fast Feature Embedding) model to get the predictions. The system is used as one of the features in an ADAS system for collision avoidance by detecting and classifying objects such as vehicles and pedestrians.
I. INTRODUCTION
Driverless cars were once just the stuff of science fiction. But in recent years, they've become a reality and they're now hitting the streets in a number of U.S. cities. Companies like Uber, Google, and Ford recently started testing hundreds of self-driving vehicles on public roads. Supporters of driverless cars say the vehicles will make roads safer by cutting down on the number of crashes caused by distracted driving or other human errors.
In recent years, deep Convolutional Networks (ConvNets) have become the most popular architecture for large-scale image recognition tasks. The field of computer vision has been pushed to a fast, scalable and end-to-end learning framework, which can provide outstanding performance on object recognition, object detection, scene recognition, semantic segmentation, action recognition, object tracking and many other tasks. With the explosion of computer vision research, the Advanced Driver Assistance System (ADAS) has also become a mainstream technology in the automotive industry. Autonomous vehicles, such as Google's self-driving cars, are evolving and becoming reality. A key component is vision-based machine intelligence that can provide information to the control system or the driver to manoeuvre a vehicle properly based on the surrounding and road conditions. There have been many research works reported in traffic sign recognition, lane departure warning, pedestrian detection, etc. This application is confined to detecting objects on the road using the MobileNet architecture and the Single Shot Detector (SSD) framework. Compared with other object detection techniques like R-CNNs or YOLO, SSDs offer a strong balance of accuracy and speed. The application uses a Caffe model to get the predictions.
II. RELATED WORK
An integrated real-time approach was developed for detecting objects in the captured images of self-driving vehicles. Object detection is modelled as a regression problem on the predicted bounding boxes and their class probabilities. A unified neural network is run on the whole image, predicting the bounding boxes and class probabilities at the same time. [1] An improved frame-difference method was introduced, which can shorten the running time and improve the accuracy of object detection. The results of the experiment show that after adding the improved frame-difference method, the detection speed is increased by 21.06 times and the image detection accuracy is improved by about 8%. The algorithm is robust and can be adapted to different scenes, both indoor and outdoor. [2] Another method uses haar-like features of the images and an AdaBoost classifier for detection, which provides a very fast detection rate. In order to predict the class of a vehicle, a feature-based method is proposed; HOG, SIFT and SURF are all well-represented features for image classification. [3] Works have been carried out using a Region Proposal Network (RPN), a fully convolutional network that simultaneously predicts object bounds and scores at each position. The RPN was trained end-to-end to generate high-quality region proposals, which were used by Fast R-CNN for detection. Furthermore, RPN and Fast R-CNN were merged into a single network by sharing their convolutional features. Their accuracy is high but their computational rate is slow. [4]

III. PROPOSED SYSTEM

In this system an input video is taken, objects are detected, and the data of the detected objects is sent to a text file. To achieve a balance between accuracy and speed, our system uses Single Shot Detectors (SSD) along with the MobileNet architecture and a Caffe model. Figure 1 shows the system design; its implementation is explained in the subsequent sections.

A) Camera Module:

OpenCV is a software toolkit for processing real-time image and video, as well as providing analytics and machine learning capabilities. The camera module consists of an input video which is captured using the OpenCV function VideoCapture(). The read() function of OpenCV is then used on the input video object to divide it into frames.
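As a rough illustration of the camera module, the following Python sketch opens a video with OpenCV's VideoCapture() and splits it into frames with read(); the input filename is a hypothetical placeholder, not one taken from the paper.

import cv2  # OpenCV Python bindings

cap = cv2.VideoCapture("road_scene.mp4")  # hypothetical input video
frame_count = 0
while True:
    ok, frame = cap.read()  # read() returns (success flag, BGR frame)
    if not ok:
        break  # end of stream or read error
    frame_count += 1
    # each frame would be handed to the processing module here
cap.release()
print("frames read:", frame_count)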
B) Processing Module:
The system uses the MobileNet architecture, a class of Convolutional Neural Networks (CNNs). MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions and point-wise convolutions to build lightweight deep neural networks. This reduces the burden on the first few layers of the CNN, hence making the network fast.
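To make the depth-wise/point-wise factorization concrete, here is a minimal sketch of one depthwise-separable block in PyTorch. This is purely illustrative (the system described here uses a pre-trained Caffe model, not this code), and the batch normalization that MobileNet places after each convolution is omitted for brevity.

import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depth-wise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch)
        # point-wise: 1x1 convolution that mixes channel information
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.pointwise(self.relu(self.depthwise(x))))

Replacing a dense 3x3 convolution with this two-step factorization cuts the multiply-add count roughly eightfold for typical channel counts, which is what keeps the network light.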
The Single Shot Detector (SSD) is used for object detection and classification together with the MobileNet architecture. It runs a convolutional network on the input image only once and calculates a feature map. It then runs a small 3×3 convolutional kernel on this feature map to predict the bounding boxes and classification probabilities. Each convolutional layer operates at a different scale, hence it is able to detect objects of various scales. SSD achieves a good balance between speed and accuracy. CAFFE (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework, which is used to train our model. Caffe supports many different types of deep learning architectures geared towards image classification and image segmentation. It supports CNN, R-CNN, and fully connected neural network designs.
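A hedged sketch of this processing module follows: it loads a MobileNet-SSD Caffe model through OpenCV's dnn module and runs a single forward pass on one frame. The prototxt/caffemodel filenames are placeholders, and the 300x300 input size, 0.007843 scale factor, and 127.5 mean are the values commonly used with the publicly available MobileNet-SSD Caffe weights, assumed here rather than taken from the paper.

import cv2

# placeholder model files for the publicly available MobileNet-SSD weights
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
frame = cv2.imread("frame_0001.jpg")  # one extracted frame (placeholder)
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)  # scale, size, mean
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): [_, class_id, conf, x1, y1, x2, y2]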
C) Data Module:
In this module the system sends the data of the detected objects to a text file. The data includes the class of the object, the probability of detection, and the coordinates of its bounding box. This data can be used by the ADAS system to make further decisions.
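The data module could then be a short routine like the sketch below, which filters detections by confidence and appends class, probability, and pixel bounding-box coordinates to a text file. The abridged class list, the 0.5 threshold, and the demo detection array are illustrative assumptions, not values from the paper.

import numpy as np

CLASSES = ["background", "bicycle", "bus", "car", "motorbike", "person"]  # abridged, illustrative

def log_detections(detections, width, height, path="detections.txt", thresh=0.5):
    with open(path, "a") as f:
        for det in detections[0, 0]:  # rows: [_, class_id, conf, x1, y1, x2, y2]
            conf = float(det[2])
            if conf < thresh:
                continue
            cls = CLASSES[int(det[1])]
            # box coordinates are normalized; scale them to pixel values
            box = (det[3:7] * np.array([width, height, width, height])).astype(int)
            f.write(f"{cls} {conf:.2f} {box.tolist()}\n")

demo = np.array([[[[0, 3, 0.91, 0.10, 0.40, 0.45, 0.90]]]])  # one fabricated 'car' detection
log_detections(demo, 640, 480)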
V. RESULT AND DISCUSSION
The working of the project is depicted through the following snapshots. Fig. 3 shows a snapshot of the text file to which the data of the detected objects is written.
CONCLUSION
Trust in autonomous technology is the key to a driverless future. A vision-based object detection system for on-road obstacles is realized using Single Shot Detectors (SSD) and the MobileNet architecture. This proposed work categorizes moving objects like pedestrians, cars and motorbikes into their respective classes and locates them by drawing bounding boxes around them. The resulting bounding boxes of detected objects and their classes are useful for the subsequent motion planning and control subsystems of self-driving cars.
ACKNOWLEDGMENT
We owe heartfelt thanks to Visvesvaraya Technological University for supporting and providing us needful things for the project. | 2019-11-15T22:30:55.322Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "58760371e0b277444b20d5dd120821c9d363aab7",
"oa_license": null,
"oa_url": "https://doi.org/10.29126/23951503/ijet-v4i3p31",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "58760371e0b277444b20d5dd120821c9d363aab7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
2803572 | pes2o/s2orc | v3-fos-license | AP5Z1/SPG48 frequency in autosomal recessive and sporadic spastic paraplegia
Hereditary spastic paraplegias (HSP) constitute a rare and highly heterogeneous group of neurodegenerative disorders, defined clinically by progressive lower limb spasticity and pyramidal weakness. Autosomal recessive HSP as well as sporadic cases present a significant diagnostic challenge. Mutations in AP5Z1, a gene playing a role in intracellular membrane trafficking, have been recently reported to be associated with spastic paraplegia type 48 (SPG48). Our objective was to determine the relative frequency and clinical relevance of AP5Z1 mutations in a large cohort of 127 HSP patients. We applied a targeted next-generation sequencing approach to analyze all coding exons of the AP5Z1 gene. With the output of high-quality reads and a median coverage of 51-fold, we demonstrated robust detection of variants. One 43-year-old female with sporadic complicated paraplegia showed two heterozygous nonsynonymous variants of unknown significance (VUS3; p.[R292W];[(T756I)]). Thus, AP5Z1 gene mutations are rare, at least in Europeans. Due to its low frequency, systematic genetic testing for AP5Z1 mutations is not recommended until larger studies are performed to add further evidence. Our findings demonstrate that amplicon-based deep sequencing is technically feasible and allows a compact molecular characterization of multiple HSP patients with high accuracy.
Introduction
Autosomal recessive hereditary spastic paraplegia (ARHSP) is a clinically and genetically heterogeneous neurodegenerative disorder characterized by progressive lower limb spasticity and pyramidal weakness because of axonal degeneration of the corticospinal tracts and dorsal columns. According to the presence of additional neurological signs, including cerebellar ataxia, peripheral neuropathy, epilepsy, optic neuropathy, and intellectual disability, ARHSPs are distinguished into pure and complex forms. Genetically, several ARHSP loci and at least 40 disease-associated genes have been identified (Fink 2013; Novarino et al. 2014). It is well known that the most frequent causes of ARHSP are mutations in the gene SPG11 (Online Mendelian Inheritance in Man [OMIM] no. 610844) (Stevanin et al. 2007). ARHSP families negative for such mutations present a significant diagnostic challenge. For a cost- and time-efficient diagnostic routine, information about the frequency of newly identified hereditary spastic paraplegia (HSP) genes is necessary. Techniques like next-generation sequencing (NGS), with their massively parallel, high-throughput nature, increase the potential to analyze clinically relevant genes in order to identify mutations. Slabicki et al. (2010) reported in two French siblings a homozygous indel mutation in exon 2 (p.R27Lfs*3) of the AP5Z1 gene (OMIM 613653), encoding adaptor protein complex 5 zeta 1 (AP5Z1), as the underlying genetic cause of autosomal recessive SPG48 (OMIM 613647). Both siblings have pure adult-onset spastic paraplegia with hyperintensity of the cervical spinal cord in one sibling as the only distinguishing magnetic resonance imaging (MRI) feature (Slabicki et al. 2010). Recently, Novarino et al. (2014) identified another homozygous AP5Z1 mutation (p.L701P) in a single family displaying pure ARHSP. AP5Z1 forms a subunit of the adaptor protein complex 5 (AP-5), which is associated with the known ARHSP-associated proteins spatacsin (SPG11) and spastizin (SPG15). AP5Z1 is involved in membrane trafficking and appears to be the best candidate for endosomal sorting (Hirst et al. 2011). The frequency of SPG48 among apparently sporadic or ARHSP cases as well as its associated phenotype is unknown, as no further families with AP5Z1 mutations have been described so far. To study the frequency and the phenotype of SPG48, we performed a molecular screening investigating AP5Z1 in a cohort of 127 HSP patients of Caucasian origin. Furthermore, we demonstrate an amplicon-based NGS strategy that is feasible and allows a compact molecular characterization of multiple HSP patients in a massively parallel fashion with high accuracy.
Materials and Methods
We set out to investigate the frequency of AP5Z1 (RefSeq accession number: NM_014855.2) mutations as a cause of ARHSP. To this end, a consecutive series of 127 index patients (39 pure form, 88 complex form), including 96 sporadic and 31 HSP cases compatible with autosomal recessive inheritance, was recruited through the German Network for Hereditary Movement Disorders and the Tübingen HSP outpatient clinic. All patients were of European descent. In all patients with either cognitive deficits (n = 9) or corpus callosum dysgenesis on MRI (n = 6) or both, mutations in the SPG11 (OMIM 610844) gene as well as the ZFYVE26 gene (OMIM 612012) were excluded. Mutations in the CYP7B1 (OMIM 603711) and SPG7 (OMIM 602783) genes had been excluded in all cases. We used an amplicon-based NGS strategy for barcoding and multiplexing thousands of PCR amplicons for deep sequencing on the Roche 454 NGS platform (454 Life Sciences, Branford, CT). All patients were screened for gene dosage in the AP5Z1 gene using a multiplex ligation-dependent probe amplification assay. For amplicon-library generation conditions, primer sequences (Tables S1 and S2), copy number variation analysis, and the data analysis procedure including variant interpretation, see the supporting information.
Results and Discussion
An array-based amplification strategy followed by NGS was used to detect AP5Z1 mutations in a cohort of 127 patients representing sporadic or recessive HSP. We performed the amplification of target regions on a microfluidic system (Fluidigm 48.48 AccessArray™ System, Fluidigm Corporation, San Francisco, CA) and processed the emulsion-based clonal amplification and sequencing protocol using the medium-volume GS FLX Titanium amplicon workflow (454 Life Sciences). Overall, a median of 74,289 high-quality sequencing reads (passed filter wells) was generated per patient pool (48 PCR amplicons and 48 study samples). The median coverage per amplicon was 51-fold, ranging from 12- to 164-fold (mean coverage 107-fold). Both the forward and reverse strands were successfully and homogeneously sequenced, as demonstrated in Figure S1. The median length of reads per patient pool was 334 bp. Per patient, the median number of base pairs sequenced ranged from 413 to 1040 kbp. Dropouts of single amplicons with no coverage were observed in 91 (4.2%) of 2159 PCR products. Furthermore, 7.8% of the amplicons (169 of 2159) were insufficiently covered, with fewer than 10 reads (Figure S2). All amplicons without any coverage or covered less than 10-fold were additionally analyzed by conventional Sanger sequencing.
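The coverage triage described here reduces to a simple thresholding rule (zero reads = dropout; fewer than 10 reads = insufficient; both groups go to Sanger follow-up). A minimal Python sketch with invented per-amplicon read counts might look like this; the amplicon names and counts are illustrative only.

coverage = {"AP5Z1_ex02_amp1": 74, "AP5Z1_ex07_amp2": 0, "AP5Z1_ex12_amp1": 7}  # illustrative

dropouts = [a for a, reads in coverage.items() if reads == 0]
insufficient = [a for a, reads in coverage.items() if 0 < reads < 10]
to_sanger = dropouts + insufficient  # both groups get conventional Sanger sequencing

print("Sanger follow-up needed for:", to_sanger)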
Co-occurrence of two mutations as expected in the recessive AP5Z1 gene was identified in only one patient. The 43-year-old female with sporadic complicated paraplegia showed two heterozygous nonsynonymous variants of unknown significance (VUS3; c.874C>T [p.R292W] and c.2267C>T [p.T756I]). Interpretation of these variants is summarized in Table 1. Unfortunately, no further family members were available in order to establish the chromosomal status of both variants. The patient presented a sporadic complicated HSP and showed cerebellar affection manifesting as myokymia and congenital bilateral nystagmus. Brain MRI was normal. The reported SPG48 phenotype represents a complicated adult-onset SPG with urinary incontinence, normal brain MRI, and hyperintensities in the spinal cord in one patient (Slabicki et al. 2010). As only two families have been described so far, it is difficult to draw any genotype/phenotype correlations. Additionally, 17 known single-nucleotide polymorphisms (http://www.ncbi.nlm.nih.gov/SNP), 2 variants which had already been reported by Slabicki et al. (2010), 8 synonymous and 1 nonsynonymous single-nucleotide (p.S164G, heterozygous) variants, which were not considered as causative, were detected in our cohort (Table S3). We could not identify disease-causing mutations by gene dosage analysis.
These findings indicate a very low frequency of SPG48 in Europeans. With the output of high-quality reads and a median coverage of 51-fold, we demonstrated robust detection of variants. All sequence variants found in the patient cohort could be confirmed by Sanger sequencing. This indicates the high quality of our approach; furthermore, it demonstrates that our strategy is technically feasible and allows a compact molecular characterization of multiple HSP patients in a massively parallel fashion with high accuracy. The diagnostic yield in our study cohort of ARHSP is still unclear; variants were identified, but their pathogenicity is still elusive. Due to the low frequency of SPG48, we suggest that SPG48 should not be given a high priority when considering genetic screening for ARHSP mutations. Further studies are needed to fully understand the clinical relevance of AP5Z1 and the frequency and relevance of mutations in Caucasian and non-Caucasian populations, and to clarify the variants of unknown significance. Therefore, to address these issues, we suggest in any case including AP5Z1 in NGS gene panel diagnostics for ARHSPs.
Supporting Information
Additional Supporting Information may be found in the online version of this article: Figure S1. Coverage distribution across amplicons. (A) For each of the amplicons (x-axis), the distribution of generated reads is represented (y-axis). Box-and-whiskers plots summarize the corresponding overall coverage and (B) according to forward (A reads) and reverse (B reads). (AP5Z1 RefSeq: NM_014855.2). Figure S2. Performance of the study. In total, 88% of the amplicons (green, 1899 of 2159) were covered successfully (>10-fold). Dropouts of single amplicons with no coverage were obtained in 91 of 2159 amplicons (red, 4%) and 8% of the amplicons (blue, 169 of 2159) were insufficiently covered less than 10-fold and were completed by conventional Sanger sequencing (AP5Z1 RefSeq: NM_014855.2). Table S1. Primers for all coding exons and intron boundaries of the AP5Z1 gene (RefSeq: NM_014855.2). Table S2. Lib-A adaptor barcode primer. | 2016-05-12T22:15:10.714Z | 2014-05-25T00:00:00.000 | {
"year": 2014,
"sha1": "4b74315f3c831ae323bde3cb6c26dae13b09917c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mgg3.87",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b74315f3c831ae323bde3cb6c26dae13b09917c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8727021 | pes2o/s2orc | v3-fos-license | Amlexanox Suppresses Osteoclastogenesis and Prevents Ovariectomy-Induced Bone Loss
The activity of protein kinases IKK-ε and TANK-binding kinase 1 (TBK1) has been shown to be associated with inflammatory diseases. As an inhibitor of IKK-ε and TBK1, amlexanox is an anti-inflammatory, anti-allergic, immunomodulator and used for treatment of ulcer, allergic rhinitis and asthma in clinic. We hypothesized that amlexanox may be used for treatment of osteoclast-related diseases which frequently associated with a low grade of systemic inflammation. In this study, we investigated the effects of amlexanox on RANKL-induced osteoclastogenesis in vitro and ovariectomy-mediated bone loss in vivo. In primary bone marrow derived macrophages (BMMs), amlexanox inhibited osteoclast formation and bone resorption. At the molecular level, amlexanox suppressed RANKL-induced activation of nuclear factor-κB (NF-κB), mitogen-activated protein kinase (MAPKs), c-Fos and NFATc1. Amlexanox decreased the expression of osteoclast-specific genes, including TRAP, MMP9, Cathepsin K and NFATc1. Moreover, amlexanox enhanced osteoblast differentiation of BMSCs. In ovariectomized (OVX) mouse model, amlexanox prevented OVX-induced bone loss by suppressing osteoclast activity. Taken together, our results demonstrate that amlexanox suppresses osteoclastogenesis and prevents OVX-induced bone loss. Therefore, amlexanox may be considered as a new therapeutic candidate for osteoclast-related diseases, such as osteoporosis and rheumatoid arthritis.
Amlexanox inhibits RANKL-induced osteoclastogenesis in vitro.
We examined expression of TBK1 and IKK-ε during RANKL-induced osteoclastogenesis. Upon RANKL treatment, the protein expression of TBK1 and IKK-ε increased significantly and peaked at the last (7th) day (Fig. 1a). We then asked whether inhibition of TBK1 and IKK-ε by amlexanox had an impact on osteoclastogenesis. We first evaluated the potential toxicity of amlexanox by measuring the proliferation of BMMs treated with different concentrations of amlexanox by Cell Counting Kit-8. As shown in Fig. 1b, amlexanox did not significantly affect the proliferation of BMMs. Next, BMMs were treated with different concentrations of amlexanox every 2 days in the presence of RANKL (50 ng/mL) and M-CSF (30 ng/ml) for 7 days. As shown in Fig. 1c,d, amlexanox inhibited osteoclast formation in a dose-dependent manner, with a half maximal inhibitory concentration (IC50) of 3 to 6 μM. Osteoclastogenesis is a multistep process consisting of proliferation, differentiation, cell fusion, and multinucleation 28. To determine at which stage amlexanox blocked osteoclastogenesis, we added amlexanox at different time points during RANKL-induced osteoclastogenesis (Fig. 1e,f). Our results demonstrated that administration of amlexanox on the first day strongly inhibited RANKL-induced osteoclastogenesis. Amlexanox added at later stages also inhibited osteoclast formation, though to a lesser degree. These findings show that the inhibition of RANKL-induced osteoclastogenesis by amlexanox is not stage-specific; amlexanox might have influenced pathways important for both early and late stages of osteoclast formation.
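For readers who want to reproduce the kind of IC50 estimate quoted above, a four-parameter logistic (Hill) fit over the paper's concentration series is one standard route. The sketch below is a hedged illustration: the osteoclast counts are invented, and the use of SciPy's curve_fit is our choice, not a detail given in the text.

import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ic50, slope):
    # four-parameter logistic: response falls from 'top' to 'bottom'
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

doses = np.array([1.5, 3.0, 6.0, 12.0, 25.0])     # µM, the series used in the paper
counts = np.array([90.0, 60.0, 35.0, 12.0, 4.0])  # illustrative osteoclast counts
params, _ = curve_fit(hill, doses, counts, p0=[0.0, 100.0, 5.0, 1.0])
print(f"estimated IC50 ~ {params[2]:.1f} µM")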
Amlexanox inhibits osteoclast function. To further examine whether amlexanox inhibited osteoclast function, we performed pit formation and actin ring formation assays using mature osteoclasts recovered from collagen-gel culture. Mature osteoclasts were seeded on bone slices in the presence of M-CSF and RANKL, and treated with or without 25 μM amlexanox for 3 days. Amlexanox significantly inhibited the pit formation activity of osteoclasts (Fig. 2a). We next examined the actin ring formation, which is essential for osteoclast attachment and bone resorption. Amlexanox markedly disrupted actin ring formation of osteoclasts (Fig. 2b). These results indicate that amlexanox inhibits the function of mature osteoclasts.
Figure 1. (a) BMMs were seeded on day 0. RANKL was added on day 1 and every 2 days thereafter. Cells were collected for analysis of protein expression of TBK1 and IKK-ε. ACTB was used as a loading control. (b) Amlexanox has little effect on proliferation of BMMs. BMMs (5 × 10³ cells/well) were cultured with M-CSF (30 ng/ml); amlexanox was added at different concentrations (0, 1.5, 3, 6, 12, 25 μM) every 2 days for 7 days. Cell proliferation was assessed by Cell Counting Kit-8. Data are presented as mean ± SD of 3 independent experiments. (c,d) Amlexanox inhibits osteoclast formation in a dose-dependent manner. BMMs (1 × 10⁴ cells/well) were treated with different concentrations of amlexanox every 2 days in the presence of RANKL (50 ng/mL) and M-CSF (30 ng/ml) for 7 days. Then cells were fixed and stained for TRAP assay. TRAP-positive multinucleated osteoclasts (≥3 nuclei) were counted. Data are presented as mean ± SD of 3 independent experiments. Scale bar represents 100 μm. (e,f) BMMs (1 × 10⁴ cells/well) were cultured in the presence of RANKL (50 ng/mL) and M-CSF (30 ng/ml) for 7 days and treated with 25 μM amlexanox at the indicated times. Subsequently, cells were photographed and TRAP-positive multinucleated osteoclasts (≥3 nuclei) were counted. Data are presented as mean ± SD of at least 3 independent experiments. *P < 0.05, **P < 0.01 versus vehicle. Scale bar represents 200 μm (e).

Amlexanox inhibits RANKL-induced NF-κB and MAPKs activation. Activation of NF-κB and MAPKs plays pivotal roles in osteoclast differentiation and function 9,29-33. To investigate whether amlexanox suppresses osteoclast differentiation through repression of the NF-κB pathway, we examined NF-κB activation in BMMs using two methods. First, using western blot assays, we demonstrated that amlexanox could suppress RANKL-induced phosphorylation and degradation of IκBα, and the phosphorylation of NF-κB p65 (Fig. 3a). Second, we showed that amlexanox inhibited RANKL-induced NF-κB DNA-binding activity by EMSA assays (Fig. 3b). Together, our results indicate that amlexanox can inhibit RANKL-induced NF-κB activation. We next examined the phosphorylation of MAPKs (ERK, JNK, and p38) in BMMs by Western blot assays. As shown in Fig. 3c, amlexanox inhibited RANKL-induced phosphorylation of ERK, JNK and p38.
Amlexanox inhibits RANKL-induced c-Fos and NFATc1 expression. NFATc1 is a master transcription factor for osteoclast differentiation and function 8,32. The transcription factor c-Fos acts as a key upstream activator of NFATc1 during osteoclastogenesis 33,34. To determine whether amlexanox inhibits the expression of c-Fos and NFATc1 during RANKL-induced osteoclast differentiation, we examined the expression of c-Fos and NFATc1 by Western blot analysis. As shown in Fig. 4a, the protein expression of c-Fos and NFATc1 was increased during RANKL-induced osteoclast differentiation, and amlexanox efficiently prevented the RANKL-induced increase in expression of c-Fos and NFATc1.

Figure 2. (a) The osteoclast preparation recovered from a collagen-gel culture was placed on bone slices, and treated with or without 25 μM amlexanox for 3 days. After removal of cells, bone slices were stained with toluidine blue. The number of resorption pits was counted and the area of resorption pits was measured. Data represent mean ± SD for three bone slices. *P < 0.05; **P < 0.01. (b) Amlexanox disrupts actin ring formation by mature osteoclasts on bone slices. After culturing for 48 h, actin ring staining was performed and examined by fluorescence microscopy. Osteoclasts with actin rings were counted. Data represent mean ± SD. *P < 0.05; **P < 0.01.

Figure 3. (a) Amlexanox inhibits RANKL-induced phosphorylation of NF-κB/p65. We starved BMMs with 0.5% FBS in α-MEM for 12 h before treatment. BMMs were then pretreated with or without amlexanox (25 μM) for 1 h and then stimulated with RANKL (100 ng/mL) for the indicated times. The cell lysates were extracted for immunoblotting with the indicated antibodies. (b) Amlexanox suppresses RANKL-induced NF-κB DNA-binding activity. BMMs were pretreated with or without 25 μM amlexanox for 1 h and then stimulated with or without RANKL (100 ng/mL) for 30 min. Then the nuclear protein was prepared and subjected to EMSA. The arrows indicate the free probe and the probe-NF-κB complex, respectively. (c) Amlexanox inhibits RANKL-induced phosphorylation of ERK, JNK and p38. We starved BMMs with 0.5% FBS in α-MEM for 12 h before treatment. BMMs were then pretreated with or without amlexanox (25 μM) for 1 h and then stimulated with RANKL (100 ng/mL) for the indicated times. The cell lysates were extracted for immunoblotting with the indicated antibodies.
Amlexanox enhances osteoblast differentiation of BMSCs. We also examined the effect of amlexanox on osteoblastogenesis in vitro by alkaline phosphatase (ALP) staining and alizarin red staining. The ALP positive regions were significantly increased when BMSCs were treated with amlexanox, especially at 25 μ M ( Supplementary Fig. 1a). Moreover, amlexanox enhanced bone nodule formation of BMSCs ( Supplementary Fig. 1b). These results show that amlexanox enhances osteoblast differentiation of BMSCs in vitro. NF-κ B activation plays a negative role in osteoblast differentiation 35,36 . To investigate whether amlexanox enhanced osteoblast differentiation through repression of NF-κ B pathway, we examined NF-κ B/p65 activation during osteoblast differentiation of BMSCs. The results demonstrated that amlexanox could moderately suppress the degradation of Iκ Ba and the phosphorylation of NF-κ B /p65 (Supplementary Fig. 2a). MAPKs activation also plays an essential role in osteoblastogenesis 37 . We examined the phosphorylation of MEK and MAPKs (ERK, JNK, and p38) in BMSCs by Immunobloting. As shown in Supplementary Fig. 2b, amlexanox significantly promoted the phosphorylation of MEK , JNK and p38.
Amlexanox prevents OVX -induced bone loss. We next used the ovariectomized (OVX) mouse model to mimic menopause-induced bone loss in women 38 . The OVX mice showed marked atrophy and decreased wet weight of the uterus compared with the sham-operated mice ( Supplementary Fig. 3a,b). Amlexanox (20 mg/kg) showed little effect on body weight over 8 weeks ( Supplementary Fig. 3c), suggesting little toxicity of the small-molecule compound at the tested concentration, which is consistent with a previous study 27 . Micro-computed tomography (μ -CT) was used to analyze femurs from different groups of mice. The analysis of trabecular bone in distal femoral metaphyses demonstrated that BV/TV, Tb.N and Tb.Th in the OVX mice decreased dramatically, whereas Tb.Sp was significantly increased when compared with sham-operated group. Treatment of amlexanox (20 mg/kg) in OVX mice (OVX+ Amlexanox) significantly inhibited the OVX-induced bone loss as measured in these parameters (Fig. 5a,b). To investigate whether amlexanox prevents bone loss through inhibition of osteoclastogenic activity in vivo, we performed TRAP staining on the femoral sections. The activity and size of osteoclasts in OVX mice increased markedly compared with sham-operated group. Treatment of OVX mice by amlexanox dramatically decreased the OVX-induced osteoclast activity (Fig. 5c,d). Histomorphometric analysis confirmed that Oc.S/BS, ES/BS, and N.Oc/BS strikingly increased in OVX mice compared with sham-operated group (Fig. 5d). These osteoclastic parameters were significantly decreased in OVX mice treated with amlexanox, as compared with the OVX mice (Fig. 5d).
Moreover, the serum levels of type 1 collagen cross-linked C-terminal telopeptide (CTX-I), a bone resorption marker, were increased in OVX mice compared with sham-operated control mice, whereas amlexanox treatment significantly decreased the CTX-I levels induced by OVX (Fig. 6a). In addition, the serum levels of osteocalcin, a marker of bone turnover, were significantly increased in amlexanox-treated mice compared to that in OVX mice (Fig. 6b). Since the balance between RANKL and OPG produced by osteoblast lineage cells is critical for osteoclastogenesis and function, we examined the serum levels of RANKL and OPG by ELISA. In the OVX mice, serum levels of RANKL were increased, whereas amlexanox treatment obviously decreased the RANKL levels induced by OVX (Fig. 6c). The serum OPG levels in OVX mice was not significantly different from that of the sham-operated mice. However, amlexanox treatment markedly increased the serum OPG levels. As a result, the RANKL/OPG ratio was significantly decreased in OVX mice treated with amlexanox, as compared with the OVX mice (Fig. 6c).
Discussion
In the present study, we investigated the effects of amlexanox on RANKL-induced osteoclastogenesis in vitro and ovariectomy-induced bone loss in vivo, as well as osteoblast differentiation of BMSCs. We believe that amlexanox attenuated OVX-induced bone loss through multiple mechanisms: first, amlexanox directly inhibited osteoclast formation and activity, which might be the most important mechanism; second, amlexanox promoted osteoblastogenesis from BMSCs in vitro and led to higher serum osteocalcin levels in vivo, suggesting that it might enhance bone formation; third, by regulating production of RANKL and OPG synthesized by osteoblast lineage cells and immune cells, amlexanox might inhibit osteoclastogenesis and bone resorption indirectly; fourth, amlexanox was shown to reduce serum concentrations of multiple osteolytic cytokines including IL-1α and TNF-α 27, which might also lead to suppressed osteoclastogenesis.
At the molecular level, amlexanox inhibited multiple pathways downstream of RANKL, including MAPKs, NF-κB, NFATc1 and c-Fos. The NF-κB signaling pathway is one of the essential pathways for osteoclast formation and activity 1,9,29. We showed that amlexanox inhibited RANKL-induced activation of the NF-κB signaling pathway, as demonstrated by inhibition of phosphorylation of both IκBα and p65, and of the DNA-binding activity of NF-κB. We believe NF-κB might be one of the most important downstream pathways mediating the effects of amlexanox on osteoclastogenesis. The regulation of NF-κB by TBK1 or IKK-ε has long been a controversial topic 27. A previous study showed TBK1 positively modulates RelA/p65 phosphorylation, which is in agreement with our finding that amlexanox represses phosphorylation of p65 through inhibition of TBK1 39. We also demonstrated that amlexanox moderately suppressed the degradation of IκBα and the phosphorylation of NF-κB/p65 during osteoblast differentiation. Therefore, amlexanox might promote osteoblast differentiation partially through inhibition of NF-κB, which plays a negative role in osteoblast differentiation 35,36. MAPKs activation also plays an essential role in both osteoclastogenesis and osteoblastogenesis 37. We found that amlexanox inhibited MAPKs in RANKL-induced osteoclastogenesis, whereas, during osteoblast differentiation, amlexanox promoted activation of MAPKs. We believe that the modulation of MAPKs in cells of the two lineages might not be a primary effect of amlexanox. Further efforts are warranted to shed light on these issues.
We found that amlexanox promoted osteoblastogenesis; however, the mechanisms of action are not completely known, and whether amlexanox influences the activity of key signaling pathways in osteoblast differentiation, such as Wnt/β-catenin and RUNX2, remains unknown. Future work is needed to elucidate the involved mechanisms.
In a previous study, amlexanox was shown to produce weight loss, improve insulin sensitivity and decrease steatosis in obese mice through its anti-inflammatory properties 27. Therefore, it was proposed to be a promising therapeutic agent for diabetes and obesity. Diabetes and obesity share similar inflammatory pathways with osteoclast-related disorders such as postmenopausal osteoporosis, rheumatoid arthritis (RA) and osteoarthritis. Moreover, diabetes and/or obesity frequently co-exist with osteoporosis and osteoarthritis in senior patients. Considering our current results and the proven pharmacologic safety of amlexanox in patients, we believe it might be worthwhile to try to re-purpose amlexanox for these inflammation-related conditions.

Reagents. Amlexanox was purchased from Sigma.

Cell cultures. We cultured primary bone marrow cells isolated from C57/BL6 mice as described 40,41. Briefly, bone marrow cells were isolated from 8-week-old C57/BL6 mice by flushing femurs and tibias with α-MEM and cultured in α-MEM with 10% FBS, 100 U/ml penicillin, 100 μg/ml streptomycin and M-CSF (30 ng/ml) overnight. Non-adherent cells were collected and further cultured in the presence of M-CSF (30 ng/ml) for 3 days. Floating cells were discarded and adherent cells were used as bone marrow-derived macrophages (BMMs). Preparation of mouse osteoclasts was carried out as described previously 42,43. In brief, BMMs (4 × 10⁴ cells/well) were seeded on a 0.2% collagen-gel coated 12-well plate and induced by RANKL (100 ng/mL) and M-CSF (30 ng/mL) for 6 days. Then osteoclasts were recovered by treatment with 0.2% collagenase, suspended in α-MEM containing 10% FBS, and used for osteoclast function assays.
In vitro osteoclastogenesis assay. For induction of osteoclastogenesis, BMMs were seeded at a density of 1 × 10⁴ cells/well in 96-well plates in the presence of RANKL (50 ng/mL) and M-CSF (30 ng/ml) for 7 days. The culture medium was replaced every 2 days. Osteoclasts were identified by tartrate-resistant acid phosphatase (TRAP) staining. TRAP-positive multinucleated cells with ≥3 nuclei were counted as osteoclasts. Three wells were assessed per treatment in three independent experiments.

Cell proliferation assay. To examine cell proliferation, a Cell Counting Kit-8 was used according to the manufacturer's instructions. BMMs were seeded at a density of 5 × 10³ cells/well in 96-well plates. After 24 hours, cells were treated with different concentrations of amlexanox (0, 1.5, 3, 6, 12, 25 μM) every 2 days in the presence of M-CSF (30 ng/ml) for 7 days. After 1, 3, 5 and 7 days, the culture medium was replaced by medium containing 10% CCK-8 and cells were incubated at 37 °C for an additional 2 h. The absorbance was then measured at a wavelength of 450 nm on an ELX800 absorbance microplate reader (Bio-Tek, Vermont, USA).
Pit formation assays and actin ring formation assays.
We performed the pit formation assay as described previously 40. Briefly, osteoclasts recovered from a collagen-gel culture were placed on FBS-coated bovine cortical bone slices adapted for 96-well plates (IDS Nordic, Herlev, Denmark), and treated with or without 25 μM amlexanox in the presence of RANKL (100 ng/mL) and M-CSF (30 ng/mL) for an additional 3 days. Then the bone slices were treated with 1 M NH₄OH with sonication for 5 minutes and stained with 0.5% toluidine blue at room temperature for 1 minute. The images of resorption pits were captured through light microscopy. The area and number of resorption pits were measured and analyzed as previously described 40. The actin ring formation assay was done as described previously 47,48.

Ovariectomized mouse model. Three-month-old female C57/BL6 mice were divided randomly into three groups (n = 12 mice per group): sham-operated mice (SHAM), ovariectomized (OVX) mice treated with vehicle (OVX), and OVX mice treated with amlexanox (OVX + Amlexanox). As described earlier 40, ovariectomy was performed by removing the bilateral ovaries through a dorsal approach, and sham surgery was performed by identifying the bilateral ovaries. One day after surgery, mice were injected intraperitoneally (i.p.) with amlexanox (20 mg/kg) or vehicle every day for 8 weeks. After 8 weeks, all mice were sacrificed for the following analyses.
Measurement of serum levels of CTX-I, osteocalcin, RANKL and OPG. Sera were collected from SHAM, OVX and OVX+ Amlexanox mice before they were sacrificed after 8 -week treatment with amlexanox. Serum CTX-I levels were measured using a RatLaps EIA kit (IDS Nordic, Herlev, Denmark). Serum osteocalcin levels were measured using a Mouse Osteocalcin EIA kit (Biomedical Technologies). Serum RANKL and OPG levels were measured using Mouse RANKL and OPG ELISA kit (BOSTER, Wuhan, China).
Electrophoretic mobility shift assay (EMSA). BMMs were pre-treated with or without 25 μM amlexanox for 1 h and then stimulated with RANKL (100 ng/mL) or vehicle for 30 min, and the extraction of nuclear proteins was performed as described previously 40,52. The DNA-binding activity of NF-κB was detected using a chemiluminescent EMSA kit (Pierce, USA). Briefly, nuclear extracts were incubated with the probe in reaction buffer (1× binding buffer, 2.5% glycerol, 5 mM MgCl₂, 50 ng/μl poly(dI-dC), and 0.05% NP-40) for 30 min. Reactants were loaded onto a 6% native polyacrylamide gel and transferred onto a positively charged nylon membrane (Millipore, Billerica, MA, USA). The DNA was cross-linked by UV cross-linker. The biotin end-labeled DNA was detected using a Streptavidin-HRP conjugate and a chemiluminescent substrate. The membrane was then exposed to the ChemiDoc™ XRS+ System with Image Lab™ Software (Bio-Rad, CA, USA).
Quantitative real-time reverse transcription-PCR. Quantitative real-time reverse transcription-PCR (qRT-PCR) was performed as described previously 52,53. Briefly, total RNA was extracted from osteoclasts using TRIZOL (Invitrogen, Carlsbad, CA, USA). First-strand cDNA was synthesized from 2 μg of total RNA with the EasyScript First-Strand cDNA Synthesis SuperMix kit (TransGen Biotech, Beijing, China). Quantitative real-time RT-PCR was performed on a CFX96 (Bio-Rad, CA, USA) using Power SYBR Green PCR Master Mix (TransGen Biotech, Beijing, China). All reactions were performed in triplicate, and target gene expression was normalized to the reference gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The primers used for quantitative real-time RT-PCR are listed in Table 1.

Western blot analysis. Immunoblot analysis was performed as described previously 52,53. Cells were lysed using the protein extraction reagent RIPA (BOSTER, Wuhan, China) supplemented with 1 mM PMSF. The protein concentration was determined using the BCA assay. Equivalent amounts of protein were resolved on a 10% SDS-PAGE gel and transferred to PVDF membranes (Millipore, Billerica, MA, USA). Subsequently, membranes were blocked and immunoblotted with individual antibodies. The membranes were washed and incubated with horseradish peroxidase-conjugated secondary antibodies (BOSTER, Wuhan, China, dilution 1:5000). The immunoreactive proteins were visualized using enhanced chemiluminescence (BOSTER, Wuhan, China). The protein bands were captured using the ChemiDoc™ XRS+ System with Image Lab™ Software (Bio-Rad).
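On the qRT-PCR normalization described above, the paper states only that target-gene expression was normalized to GAPDH; one common way to compute such relative expression is the 2^-ΔΔCt method, sketched below with invented Ct values. The use of ΔΔCt here is our assumption, not a detail given in the text.

def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    # 2^-ΔΔCt: expression relative to a reference (e.g., vehicle-treated) sample,
    # with each sample first normalized to its own GAPDH Ct
    ddct = (ct_target - ct_gapdh) - (ct_target_ref - ct_gapdh_ref)
    return 2.0 ** (-ddct)

# hypothetical Cts: TRAP in amlexanox-treated vs. vehicle-treated cells
print(relative_expression(26.0, 18.0, 23.5, 18.1))  # value < 1 indicates suppression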
Statistical analysis. All quantitative data were presented as means ± SD from three independent experiments. Student's t-test was used for determining the significance of differences between two groups, whereas one-way ANOVA was used for multiple comparisons. The difference was considered statistically significant at P < 0.05. | 2018-04-03T05:05:21.171Z | 2015-09-04T00:00:00.000 | {
"year": 2015,
"sha1": "8c5df356266b10adff9507f81137c022fdca4d32",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep13575.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47da70a39423f637d9666f1b75b7f84b1e9aa9cc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5009045 | pes2o/s2orc | v3-fos-license | Prenatal diagnosis of congenital heart disease: A review of current knowledge
This article reviews important features to improve the diagnosis of congenital heart disease (CHD) by applying ultrasound in prenatal cardiac screening. As both low- and high-risk pregnancies for CHD are subject to routine obstetric ultrasound, the diagnosis of structural heart defects represents a challenge that involves a team of specialists and subspecialists in fetal ultrasonography. In this review, the images highlight the normal anatomy of the heart as well as pathologic cases consistent with cardiac malposition and isomerism, septal defects, pulmonary stenosis/atresia, aortic malformations, hypoplastic left ventricle, conotruncal anomalies, tricuspid dysplasia, Ebstein's anomaly, and univentricular heart, among other congenital cardiovascular defects. Anatomical details of most CHD in fetuses were provided by two-dimensional (2D) ultrasound with higher quality imaging, enhancing diagnostic accuracy in a variety of CHD. Moreover, accurate detection of cardiac defects on obstetric ultrasound improves the outcome of most CHD, enabling planned delivery, genetic counseling, and perinatal management.
Introduction
Congenital heart disease (CHD) is the most common form of birth defect. The incidence of CHD is about eight to 10 per 1000 (0.8%-1%) live-born, full-term births, and it could be 10 times higher in preterm infants (8.3%). 1,2 Furthermore, in early gestation, this incidence is even higher, as certain CHDs are complex and have been shown to result in fetal demise. In fact, 50%-60% of CHD will require surgical correction and, of these, 25% are critical; CHD is a leading cause of infant mortality. 3,4 In this setting, survival, the extent of medical care, and developmental disabilities depend on the time of diagnosis, on any delay in treatment, and on the severity of the CHD. Therefore, early fetal diagnosis of a treatable CHD has been shown to reduce the risk of perinatal morbidity and mortality. 5 Cardiovascular development is a complex process in which both genetic and environmental factors are involved. Considering that approximately 49% of pregnancies are unplanned, women may not take precautionary actions against environmental factors. 6 The detection rate of CHD by fetal echocardiography, when referral is prompted by a suspicion of cardiac abnormality on routine obstetric ultrasound, is up to 40% in low-risk populations. However, risk factors are identified in only 10% of CHD. In this scenario, the heart should be examined in detail during routine sonographic screening.
In the sonographic prenatal diagnosis of CHD, the fetal heart remains a challenge that involves sonographers, obstetricians, radiologists, and fetal medicine subspecialists. High risk for cardiac defects and the suspicion of a cardiac abnormality on obstetric ultrasound, even in low-risk populations, are indications for referral for a detailed fetal echocardiogram. This manuscript reviews important aspects to improve prenatal screening for CHD, focusing on ultrasound clues that enable the diagnosis of cardiac defects; the management of cardiac defects in utero and delivery planning strategies are also addressed.
How to screen the fetal heart
Fetal cardiac screening by ultrasound can detect a high proportion of cases of CHD. However, when prenatal screening was based on visualization of the four-chamber view alone, it was inadequate to detect many cases of CHD, especially conotruncal and outflow defects (e.g., transposition of the great vessels, tetralogy of Fallot, double-outlet right ventricle, truncus arteriosus, and outlet septal defects). When the evaluation of the outflow tracts was added to the four-chamber view, the sensitivity of ultrasound screening for CHD increased from approximately 30% to 69%-83%. 7 Currently, the three vessels (3V) and 3V with trachea (3VT) views have been added to the standard four-chamber and outflow views in order to improve the detection of CHD. 7,8 The latter enables the detection of lesions such as coarctation of the aorta, right aortic arch, double aortic arch, and vascular rings, achieving a prenatal detection rate of congenital heart disease of up to 90%. The average time to obtain the cardiac views was just over 2 min, but in approximately one third of cases, the cardiac examination was postponed by 15-20 min due to unfavorable fetal lie (anterior spine). 9
Upper abdomen and four-chamber views
Examination of the upper abdomen (cross-sectional plane) of the fetus by echocardiography allows the distinction between the left and right sides of the fetus. When the situs is normal (solitus), the aorta and stomach are located on the left side and the inferior vena cava and liver are placed on the right. Therefore, situs solitus is the normal arrangement of the thoracic and abdominal organs (Fig. 1). In general, more complex CHDs are associated with abnormalities of the situs. Furthermore, the umbilical vein and the hepatic veins can be visualized in the upper abdomen view.
The four-chamber view is the most important plane. This approach enables the evaluation of the main cardiac structures and of the position, size (about 1/3 of the thorax), contractility, and rhythm of the heart. In normal levocardia, 2/3 of the heart is left-sided, with the axis pointing to the left. The cardiac axis is 45° ± 20°, and an abnormal axis is associated with chromosomal anomalies, abnormal displacement of the heart (diaphragmatic hernia or space-occupying lesion), and many CHDs, especially conotruncal anomalies and univentricular hearts. 11 Cardiomegaly can be evaluated by the global size of the heart and, in small fetuses, this should be done by the cardiothoracic ratio (CTr = cardiac area/thoracic area). The sizes of the left and right chambers are similar. However, in the third trimester, mild right-over-left asymmetry can be a normal variant. Besides the size, the morphological and functional characteristics of each chamber (atria, ventricles, and atrioventricular valves) can be analyzed. The left atrium (LA) is normally located most posteriorly (near the descending aorta) and is identified by its finger-like appendage. Furthermore, the LA is characterized by the presence of the foramen ovale flap and its connection with the pulmonary veins. The right atrium (RA) has a pyramidal appendage with a broad base and receives the venae cavae.
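The normal axis range above lends itself to a simple worked check; the sketch below flags a measured cardiac axis outside 45° ± 20° and is purely illustrative (the function name and the decision to treat the boundaries as inclusive are assumptions).

```python
def cardiac_axis_abnormal(axis_deg: float,
                          mean_deg: float = 45.0,
                          tolerance_deg: float = 20.0) -> bool:
    """Return True when the cardiac axis falls outside mean +/- tolerance.

    Defaults encode the 45 deg +/- 20 deg normal range quoted in the text;
    boundary values are treated as normal (an illustrative assumption).
    """
    return abs(axis_deg - mean_deg) > tolerance_deg

# Example: a leftward-deviated axis of 75 deg would be flagged
print(cardiac_axis_abnormal(75.0))  # True
```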
Unlike the atria, which communicate with each other through the foramen ovale, the ventricles are separated by the interventricular septum. The muscular part of the septum comprises the lower two-thirds, and the membranous part is the portion of the interventricular septum adjacent to the aortic, mitral, and tricuspid (septal cusp) valves. On suspicion of a ventricular septal defect (VSD), the heart should be examined using a lateral four-chamber view, avoiding the drop-out artifact caused by the insonation angle in an apical four-chamber view. In a retrosternal location, the right ventricle (RV) is trabeculated, with the presence of the moderator band, and its lumen is shorter than the left ventricle (LV) lumen (Fig. 2). The LV is posterior to the RV, reaches the apex of the heart, and its lumen is longer than that of the RV. Furthermore, the tricuspid valve inserts slightly more apically than the mitral valve.
Qualitative assessment of the chambers and quantitative assessment of the atrioventricular valves should be performed in the four-chamber view. In addition, cardiac function and rhythm should be assessed.
Left and right ventricular outflow tracts
The outflow tracts view (five-chamber view) can be obtained from the four-chamber view by sliding the transducer toward the fetal head, which enables identification of the origin of the great arteries. In a normal RV outflow tract view (RVOT), the pulmonary trunk can be visualized arising from the RV and crossing the ascending aorta.
To optimize the five-chamber view, focusing the analysis on the continuity of the ventricular septum with the aorta and on the ascending aorta itself, the transducer should be rotated toward the fetal right shoulder. The evaluation of the LV outflow tract view (LVOT) and the RVOT helps to identify outlet ventricular septal defects and conotruncal anomalies.
The LVOT view confirms the aorta arising from the morphological LV as a vessel in continuity with the outlet ventricular septum, as well as the aortic-mitral valve continuity. In this view, the aortic valve can be recognized, and a detailed evaluation of its size and mobility can be performed. The thickness of the septum is also measured in the five-chamber (LVOT) view (IVS ranging between 2 and 4 mm during gestation) (Fig. 3).
In the RVOT view, the pulmonary artery crosses over the ascending aorta and becomes the leftward vessel. However, when the great vessels are transposed (TGA), the aorta and pulmonary artery run in parallel and do not cross over each other. The great arteries short-axis view can be obtained by tilting from the four- to the five-chamber view and confirms the pulmonary trunk (PT) arising from the RV (Fig. 4). In this view, the bifurcation of the right and left pulmonary arteries from the PT is best visualized, and the size and mobility of the pulmonary valve can also be assessed.
Three vessels and three vessels and trachea views
The three vessels (3 V) view is obtained from the four-chamber view by moving the transducer in the direction of the upper fetal mediastinum; the pulmonary trunk, the aortic arch with the aortic isthmus, and the superior vena cava (SVC) can be visualized. 7 The caliber of the pulmonary trunk is slightly larger than that of the aorta, whereas the SVC is the smallest and most posterior of the vessels. The PT, the right pulmonary artery, and the ductal arch can be seen. The aortic and ductal arches are located on the left of the trachea in a V-shaped configuration. 8 The trachea is recognized as an echogenic structure on the right side of the arteries and anterior to the spine. The three vessels and trachea (3VT) view enables the diagnosis of coarctation of the aorta, right aortic arch, double aortic arch, and vascular rings. The thymus can be visualized in front of the three vessels as a less echogenic structure, which is important for detecting defects associated with the 22q11.2 deletion syndrome (facial abnormalities, absent or hypoplastic thymus, and hypocalcaemia). 12 The measurement of the thymic-thoracic ratio (TT-ratio) is a feasible and useful tool in fetuses with cardiac defects (Fig. 5). 13
3. Fetal congenital heart diseases
Cardiac malpositions and isomerism
In normal conditions, the heart lies on the left side of the thorax (levocardia), with situs solitus for the visceral and atrial arrangement. The first step in the assessment of the cardiac position and situs is to identify the fetal position and the transducer orientation. 14 In the normal upper abdomen view, the heart points to the left anterior thoracic cavity with a normal base-axis of 45°, a left-sided stomach and descending aorta, and a right-sided liver and inferior vena cava. Cardiac malpositions include mesocardia, dextrocardia, dextroposition, and even ectopia cordis, in which the heart is displaced outside the thorax. 14 Dextrocardia is present when the heart points to the right side of the thorax, and dextroposition occurs when the heart is only displaced to the right chest while preserving the normal cardiac left axis (Fig. 6). Dextrocardia with situs inversus is known as mirror-image dextrocardia, in which the inferior vena cava (IVC) and liver are on the left and the aorta and stomach are on the right. Dextrocardia with situs inversus totalis has a lower incidence of cardiac malformations than dextrocardia with situs solitus. Dextroposition is caused by displacement of the heart due to a congenital diaphragmatic hernia, left-sided fluid, or masses. 14 Conditions with incomplete lateralization of the organs are known as heterotaxy syndromes, isomerisms, or situs ambiguus. Left atrial isomerism is a condition with double left-sided structures (two left-shaped atria and lungs) and an absence of the morphologic RA; the IVC is interrupted with an azygos continuation in almost all cases of left isomerism (Fig. 7). As the right sinus node is absent, the risk of bradycardia due to heart block is increased in left isomerism. Right atrial isomerism is the reverse of left atrial isomerism (two right-shaped atria and lungs). Therefore, the LA is absent and the pulmonary veins are abnormal. Total anomalous pulmonary venous drainage is found in about 80% of right isomerism, and the IVC is usually present. Additionally, asplenia is often associated, resulting in a long-term risk of infections. 15 The upper abdomen view provides the means of diagnosing right or left isomerism in most patients. In left isomerism, the azygos vein can be seen as a venous structure posterior to the aorta in the upper abdomen view, and in right isomerism, the IVC and aorta are located on the same side of the spine (right-sided).
Anomalous pulmonary venous return
During embryonic development, the common pulmonary vein empties into the LA, and then four independent pulmonary veins are incorporated into the atrium. The anomalous return may be partial (three or fewer pulmonary veins) or total (all four veins). Anomalous pulmonary venous return (APVR) is frequently associated with heterotaxy syndrome. In total APVR, there is no connection between the pulmonary veins and the LA. As a first step, the finding of ventricular asymmetry with RV dominance and a small LA, with the presence of an additional vessel between the aorta and the LA in the four-chamber view, should increase the suspicion of total APVR. 16 A dilated coronary sinus or dilation of the IVC can be present, depending on the site of the anomalous pulmonary venous return. Indeed, the absence of pulmonary venous flow into the LA on color Doppler confirms the diagnosis. Total APVR is classified into four groups according to the level of the anomalous connection: 1-supracardiac; 2-cardiac; 3-infracardiac; and 4-mixed levels of connection. Total APVR requires surgical correction after birth and, in most cases, in the first weeks of life. The delivery should be planned in a hospital with pediatric cardiology and cardiac surgery backup. 16 Partial APVR may be overlooked in utero, especially when it is an isolated defect and fewer than three veins are involved. Scimitar syndrome represents an unusual form of partial APVR in which the right-sided pulmonary veins return to the IVC, just above or below the diaphragm. Scimitar syndrome is frequently associated with dextrocardia and right pulmonary hypoplasia caused by a pulmonary sequestration. Prenatal diagnosis is possible, mainly using 3D power Doppler imaging, which enables identification of a collateral vessel arising from the descending aorta and supplying a portion of the right lung. Postnatally, neonatal surgical intervention is rarely required in partial APVR.
Persistent left superior vena cava (SVC) is the most common variation of the systemic venous return. The 3 V view enables the diagnosis of a left SVC by showing a supernumerary vessel to the left of the pulmonary trunk and arterial duct. An enlarged coronary sinus can be present, as in general the left SVC drains into it (Fig. 8). An isolated persistent left superior vena cava has no clinical significance after birth; however, it can be associated with left-heart obstructive diseases, conotruncal anomalies, and AVSD. 17 Therefore, the prenatal diagnosis of a left SVC requires a detailed evaluation of the heart.
Septal defects
Intracardiac shunt malformations are the most common congenital cardiac defects leading to left-right shunt after birth. Atrial, ventricular, and atrioventricular septal defects are included in this group. These defects can be associated with other cardiac malformations and, depending on their magnitude, are responsible for heart failure postnatally; however, they have no hemodynamic significance during fetal life.
Atrial septal defect
Atrial septal defect (ASD) is a common heart malformation, occurring in about 10% to 15% of CHD after birth, and results from abnormal embryologic development of the atrial septum. 18 In general, ASD occurs sporadically with a recurrence risk of 7% to 10%. Less frequently, ASD is associated with bone malformations and syndromes such as Holt-Oram, Noonan, and Treacher Collins. Furthermore, there is a familial form of secundum ASD that has been shown to be caused by gene mutations (GATA4 and NKX2-5 genes). 19 During fetal life the ASD is well tolerated; it leads to left-right shunt postnatally, with symptoms depending on the size and type of the atrial defect. 20 Indeed, it may be isolated or associated with other cardiac defects as part of a complex CHD. The types of ASD are: 1-secundum ASD; 2-primum ASD; 3-sinus venosus ASD; and 4-coronary sinus ASD (Fig. 9).
A secundum ASD is the most common ASD (70%), located in the middle of the atrial septum. During fetal life, this communication is normal, namely the foramen ovale, and its normal size is similar to the aortic diameter. 21 Consequently, the prenatal diagnosis of secundum ASD is rarely possible, and it can be suspected when a large foramen ovale is identified, especially with an absent or deficient flap. After birth, the flap of the foramen ovale produces a functional closure of this communication. The normal fetal foramen ovale presents a right-to-left (R-L) shunt with velocities ranging from 20 to 40 cm/s; when restrictive, the flow velocity across it is increased (>100 cm/s) with an L-R shunt. A restrictive foramen ovale is associated with forms of hypoplastic left heart due to the increased LA pressure. In general, small secundum ASDs (<6 mm) close spontaneously during the first two years after birth. Larger defects will need closure by catheter intervention or, more rarely, by surgery.
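To make the velocity criteria above concrete, the snippet below classifies a peak foramen ovale Doppler velocity; the category labels and the handling of the 40-100 cm/s gap (reported here as indeterminate) are illustrative assumptions rather than established cutpoints.

```python
def classify_foramen_ovale_flow(peak_velocity_cm_s: float) -> str:
    """Classify foramen ovale flow using the velocities quoted in the text.

    20-40 cm/s with R-L shunt -> normal; >100 cm/s with L-R shunt ->
    restrictive. Velocities between 40 and 100 cm/s are labeled
    indeterminate here, which is an illustrative assumption.
    """
    if peak_velocity_cm_s > 100:
        return "restrictive (suspect L-R shunt)"
    if 20 <= peak_velocity_cm_s <= 40:
        return "normal (R-L shunt)"
    return "indeterminate: correlate with shunt direction"

print(classify_foramen_ovale_flow(120))  # restrictive (suspect L-R shunt)
```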
A primum ASD involves the lower part of the atrial septum and is caused by an absent fusion of the lower atrial septum with the underlying atrioventricular valve. Therefore, primum ASD involves an abnormality of the AV valve(s) and should be categorized as a form of atrioventricular septal defect (AVSD), which will be discussed later in the topic on AVSD.
A sinus venosus ASD is a less common septal defect (10% of ASD), located in the posterosuperior or posteroinferior portion of the atrial septum, lying at the junction of the SVC or inferior vena cava, respectively. Both are frequently associated with anomalous pulmonary venous return. Coronary sinus ASD is a rare septal defect in which a communication occurs between an unroofed coronary sinus and the LA. A persistent left SVC draining into the coronary sinus is almost always present. Neither sinus venosus nor coronary sinus ASD has yet been reported in fetal life. Sinus venosus and coronary sinus ASDs do not close spontaneously and need to be repaired by surgery postnatally. 22 In conclusion, the diagnosis of secundum ASD can be suspected when the foramen ovale is larger than the aortic diameter on the four-chamber view, and an echocardiogram should be done after birth to confirm this diagnosis. Conversely, when restrictive, the foramen ovale is narrowed with increased flow velocities (>100 cm/s), or it can even be absent. 23 In such cases, changes in the pulmonary venous flow pattern, such as a reversed a-wave and absence of the D wave, can be useful in detecting a restrictive foramen ovale. 24 Restrictive flow at the atrial septum is associated with poor outcome in cases of hypoplastic left heart syndrome and aortic or mitral atresia.
Ventricular septal defect
Ventricular septal defect (VSD) is the most common CHD, occurring in about 30% of neonates with CHD and 7% to 10% in utero. 25 This defect can occur sporadically or in association with mutations in the TBX5 and GATA4 genes. VSD may be isolated or multiple and is commonly associated with other cardiac defects. In fetal life, the association of VSD with chromosomal anomalies is about 10% to 30%, depending on the type and size of the defect. This rate is significantly higher than after birth, mainly if the defect is large and extends to the inlet septum or when associated with extracardiac defects. 26 Ventricular septal defects are classified according to their location on the ventricular septum as: 1-membranous (a small segment close to the septal cusp of the tricuspid valve and the adjacent left heart valves); 2-muscular (lower 2/3 of the septum); and 3-subarterial doubly committed (supracristal). The third type is the least common VSD; it is adjacent to the aortic and pulmonary valves and is caused by the absence of the outlet septum. Membranous and muscular types can also be subclassified according to their area of extension into the inlet (close to the AV valve implantation), trabecular, and outlet (conal septum) parts of the septum. 23 Indeed, a VSD is termed malaligned when the different parts of the septum adjacent to the defect are malaligned with each other (Fig. 10).
In general, VSD causes no hemodynamic disturbances in utero. Most VSDs evolve to spontaneous closure, either in utero or during the first year of life. The size and location of the defect influence the chance of spontaneous closure. Postnatally, small defects and muscular defects present a greater chance of closing spontaneously. 27 Surgical intervention may be necessary during the first years of life when there is congestive heart failure, or depending on the size and location of the defect.
In conclusion, the diagnosis of a fetal VSD can be made in the four-chamber view, using a lateral approach to detect more accurately the bidirectional shunt across the defect. The evaluation of the LVOT (five-chamber view) helps to identify outlet defects, mainly membranous outlet VSDs. The great arteries short-axis view is useful in detecting subarterial doubly committed VSDs and some types of membranous defects. Large defects are greater than the aortic diameter. Three- and four-dimensional ultrasound with inversion flow can be used to detect small defects. The goals of fetal ultrasound diagnosis of VSD are to define which segment of the ventricular septum is involved and to exclude other cardiac anomalies. A malaligned VSD with left-right shunt across the defect should increase the suspicion of left-heart outflow obstructions, such as coarctation of the aorta or an interrupted aortic arch.
Atrioventricular septal defects
The atrioventricular septal defect (AVSD) refers to a group of cardiac malformations resulting from a defect of the atrioventricular septum that may lead to defects of the interatrial septum (ostium primum ASD) and of the interventricular septum (inlet VSD), and to abnormal division of the atrioventricular valves. AVSD is also known as atrioventricular canal defect or endocardial cushion defect. The risk of recurrence of AVSD in children of mothers or fathers with this defect is 14% and 2%, respectively.
The complete form of AVSD is the most common CHD detectable in utero. 28 It includes the presence of an ostium primum ASD, an inlet VSD, and a common (single) atrioventricular valve. The AVSD is termed partial or incomplete when a tongue of tissue joins the superior and inferior cusps, dividing the common valve into two valves (Fig. 11). The classical partial form of AVSD, also known as ostium primum ASD, combines an atrial defect and a cleft of the left valve orifice. However, subtypes of AVSD may include both atrial and ventricular communications.
Furthermore, the assessment of the insertion of the common AV valve enables Rastelli's AVSD classification (types A, B, and C), which is useful for the postnatal surgical approach (Fig. 12). 29,30
The analysis of the atrioventricular connection and the size of the ventricles are important tools for the identification of balanced and unbalanced forms of AVSD. Unbalanced AVSD results in ventricular disproportion (hypoplasia of one ventricle) and is typically found in association with heterotaxy syndrome (atrial isomerism). 31 Complete AVSD is associated with extracardiac malformations and syndromes such as trisomies 21 (75% of cases), 18, and 13. Therefore, fetal karyotyping should be discussed with the parents whenever this diagnosis is made. Moreover, a sequential detailed cardiac examination is indicated to identify additional cardiac anomalies, such as tetralogy of Fallot and double-outlet RV, mainly in fetuses with trisomy 21. 31 During fetal life, the AVSD is well tolerated and the delivery should follow the obstetric routine. Few fetuses will develop congestive heart failure and non-immune hydrops due to severe AV regurgitation and/or complete heart block (atrial isomerism). After birth, complete AVSD should be surgically repaired between three and six months of age because of the early risk of irreversible pulmonary arterial hypertension. 30 In cases of unbalanced AVSD with hypoplasia of one of the ventricles, the postnatal surgical management will be the same as that used for the univentricular heart.
In conclusion, the fetal diagnosis of AVSD can easily be made in the four-chamber view. The best diagnostic clues are: the absence of the crux of the heart, the presence of the primum ASD, and the absence of the usual offset of the AV valves (Fig. 13). In most cases, an inlet VSD is also present. The four-chamber view can be significantly improved by measuring the ratio of atrial to ventricular length (AVL), with a cutoff value of over 0.6. 31 Furthermore, this plane is ideal to assess the relationship of the AV junction, the size of the ventricles, and AV valve insufficiency. The five-chamber view can identify the characteristic LV outflow appearance in AVSD: a narrowed and elongated outflow (also known as a gooseneck).
Fig. 11. Types of AVSD and normal atrioventricular valves (normal heart). The image shows the partial AVSD with two AV valves and the complete form of AVSD with a single AV valve. AVSD, atrioventricular septal defect; AV, atrioventricular valve.
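As a worked example of the AVL criterion above, the sketch below computes the atrial-to-ventricular length ratio and flags values above the 0.6 cutoff quoted in the text; the function name and the strict-inequality boundary are assumptions for illustration.

```python
def avl_ratio_suspicious(atrial_length_mm: float,
                         ventricular_length_mm: float,
                         cutoff: float = 0.6) -> bool:
    """Flag a possible AVSD when atrial/ventricular length exceeds the cutoff.

    The 0.6 cutoff comes from the text; treating the boundary as strict
    is an illustrative assumption.
    """
    if ventricular_length_mm <= 0:
        raise ValueError("ventricular length must be positive")
    return (atrial_length_mm / ventricular_length_mm) > cutoff

# Example: atrial length 14 mm, ventricular length 20 mm -> ratio 0.7
print(avl_ratio_suspicious(14.0, 20.0))  # True
```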
Pulmonary atresia with intact ventricular septum/tricuspid atresia
Pulmonary atresia (PA) with intact ventricular septum accounts for about 1% to 3% of CHD diagnosed during fetal life and for 2.5% to 4% after birth. The classification includes two types: Type I (75%), with a stenotic and competent tricuspid valve (TV) and a hypoplastic RV; and Type II, with a dysplastic and insufficient TV, a normal-sized RV, and an enlarged RA. 32 The prenatal diagnosis of PA can be made in the five-chamber view, which enables identification of an immobile and thickened pulmonary valve and the presence of reversed flow in the pulmonary artery, from the ductus arteriosus toward the pulmonary valve, on color Doppler (Fig. 14). The classification of PA varies according to the features of the TV. In cases of PA with TV stenosis, the RV is hypoplastic (Type I), whereas when PA is associated with a dysplastic and incompetent TV (Type II), the RV is well developed and the RA enlarged. Ventriculocoronary connections (fistulas) can be observed, more commonly in Type I. In all forms, the PA is smaller than the aorta in the three-vessel view. 32 Tricuspid atresia is defined as an agenesis of the tricuspid valve with no direct communication between the RA and the RV. It is a rare form of CHD (3%-4% in fetal life), with a recurrence risk of 1%. 33 This defect is associated with normally related (type I) or transposed arteries (types II and III: D- and L-transposed arteries), with or, less commonly, without a VSD. 32 The pulmonary flow can be normal, decreased (pulmonary stenosis), or absent (pulmonary atresia). Additional cardiac anomalies, such as coarctation of the aorta, are present in less than 20% of cases. If the VSD is absent or small, the restricted flow to the RV can lead to severe hypoplasia of the RV.
The prenatal diagnosis of tricuspid atresia is easily made by ultrasound. The features of tricuspid atresia in the four-chamber view are: an echogenic and immobile tricuspid valve; absence of flow across the TV on color Doppler during diastole; and a hypoplastic RV with or without a VSD. Transposed arteries are present in about 20% of cases. Sequential follow-up cardiac ultrasound is indicated to assess the LV function, the size of the foramen ovale, and the presence of RV outflow obstruction. Fetal distress and hydrops are related to a very small foramen ovale.
Neonates with PA with intact ventricular septum require administration of prostaglandin E1 at birth to maintain a patent ductus arteriosus until surgery is performed. In cases of tricuspid atresia with a restrictive foramen ovale or secundum ASD, this communication should be enlarged by balloon or surgical atrioseptostomy during the first days of life. In both cases, prenatal diagnosis will improve the perinatal outcome. Subsequently, the type of surgical management will depend on the anatomy.
Pulmonary stenosis/tricuspid valve stenosis
In pulmonary valve stenosis, the thickening and reduced motility of the pulmonary valve are variable. Generally, the severity of pulmonary stenosis is based on the size of the pulmonary valve and the direction of flow in the pulmonary artery. Direct evidence of ductal dependence and tricuspid regurgitation are indicative of severe stenosis (critical pulmonary stenosis). In cases of critical stenosis, anterograde pulmonary flow may still be detectable, instead of the fully retrograde flow that characterizes PA. In mild-to-moderate forms, the four-chamber view can be normal even when an increased pulmonary artery peak velocity is present. Therefore, an abnormal pulmonary valve echogenicity in the five-chamber view, and fetuses at risk (e.g., rubella syndrome), should prompt a Doppler flow investigation. In cases of supravalvar pulmonary stenosis, the RV outflow tract obstruction is located in the pulmonary artery instead of the pulmonary valve and is commonly associated with syndromes such as Noonan, Williams, and Alagille.
In tricuspid stenosis, the tricuspid valve diameter is reduced, with thick cusps and restricted diastolic opening. During gestation, the lack of flow across the valve can impair RV development, leading to RA dilatation. The four-chamber view with color Doppler enables the diagnosis of tricuspid stenosis.
Ebstein's anomaly and tricuspid dysplasia
Ebstein's anomaly is characterized by the lack of mobility and downward displacement of the septal and posterior cusps of the TV (gap between the TV and the mitral valve >8 mm). In cases of ventricular inversion, an Ebstein's anomaly may be observed on the left side. The diagnosis can be suspected on cardiac screening (four-chamber view) when the RA is enlarged and the thickened cusps of the TV are displaced downward and tethered to the septal surface. Ebstein's anomaly is a rare congenital condition (3% to 7% of fetal CHD) and has been associated with maternal ingestion of benzodiazepines. 34 When Ebstein's anomaly is identified prenatally, the fetus should be closely monitored due to an increased risk of progression of tricuspid insufficiency, cardiac dysfunction, and fetal demise. Serial fetal echocardiography is recommended, especially during the third trimester. Depending on the degree of TV displacement, there can be a more or less severe reduction of the functional RV. 35 After birth, the RV functional area is helpful in predicting prognosis. However, predicting outcomes of fetuses with Ebstein's anomaly remains a challenge. Fetuses with pulmonary regurgitation, indicating circular shunt physiology, as well as an early gestational age at diagnosis, have been associated with poor outcome.
Fetal transplacental digoxin therapy may improve cardiac function in cases of heart failure. 36 In fetuses with Ebstein's anomaly and hydrops, the delivery should be performed in a hospital with an intensive and specialized cardiac care team. The relative risks of premature delivery must be carefully discussed, as recent studies suggest it is associated with worse neonatal outcome. 36 However, early delivery may be considered in cases of hydrops and uncontrolled arrhythmia. 37 Neonatal repair will be required in cases of heart failure or profound cyanosis. In children and adults, the presence of symptoms (cyanosis and paradoxical embolization) is an indication for a surgical approach. 38 Tricuspid dysplasia can be distinguished from Ebstein's anomaly on the basis of the normal attachment of the TV. The dysplasia can be isolated or associated with syndromes such as trisomy 21 and CHARGE (coloboma, heart defect, atresia choanae, retarded growth and development, genital hypoplasia, and ear anomalies/deafness) syndrome. Tricuspid dysplasia can be easily diagnosed by the greater echogenicity of the valve (valve deformation) in the four-chamber view of the heart. Furthermore, color Doppler enables one to identify and quantify the tricuspid regurgitation. Both tricuspid dysplasia and Ebstein's anomaly are rare congenital TV malformations that lead to the same physiology in utero through TV regurgitation (Fig. 15). In both tricuspid dysplasia and Ebstein's anomaly, fetuses with pulmonary regurgitation (PR) are at high risk: PR serves as the insult that completes the circular shunt, with a systemic flow steal leading to low organ perfusion and ultimately fetal distress.
Aortic stenosis/mitral stenosis
Valvar aortic stenosis is the most common type of aortic stenosis, occurring in 60%-70% of patients with aortic stenosis. Supravalvar and subaortic stenosis are rare in fetuses, and the latter can be associated with mitral valve disease and coarctation of the aorta, also known as Shone syndrome (left-sided heart obstructive lesions). Aortic stenosis occurs in about 3% to 6% of newborns with CHD and is an associated cardiac malformation in about 30% of cases. 40 The prognosis and management depend on the degree of the obstruction, the gestational age at diagnosis, and the presence of associated anomalies. Many cases of severe aortic stenosis diagnosed at mid-trimester have shown reduced growth of the left heart structures. Therefore, sequential studies should be performed due to the risk of progression to hypoplastic left heart syndrome (HLHS). 41 The aortic valve may appear abnormal (thickened with reduced mobility) in the five-chamber view, with turbulent color Doppler flow at the valve. If aortic stenosis is moderate to severe, the LV may be normal or mildly hypertrophied with an increased aortic peak Doppler velocity (>2 m/s). If the aortic stenosis is critical, the LV is dilated with poor contractility and increased echogenicity (suggesting fibroelastosis), and the aortic valve is small with only slightly increased Doppler velocity (>1 to 2 m/s) 40 (Fig. 16). Mitral stenosis and mitral regurgitation may be associated. If aortic stenosis is critical, there will not be full anterograde flow across the aortic valve. In the 3 V view, the reversed flow and the small size of the transverse aorta are clues to the diagnosis. In cases of aortic atresia, no flow across the valve is detectable by color Doppler. The interatrial shunt across the foramen ovale is left-to-right, i.e., reversed.
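A hedged sketch of how the Doppler velocity bands above might be organized is given below; note that real grading of aortic stenosis also weighs LV size, contractility, and echogenicity, so this single-parameter classifier is purely illustrative and its labels are assumptions.

```python
def grade_aortic_velocity(peak_velocity_m_s: float) -> str:
    """Interpret an aortic peak Doppler velocity per the bands in the text.

    >2 m/s suggests moderate-to-severe stenosis; 1-2 m/s can accompany
    critical stenosis with a failing LV, so velocity alone is insufficient.
    """
    if peak_velocity_m_s > 2.0:
        return "increased velocity: moderate-to-severe stenosis possible"
    if peak_velocity_m_s > 1.0:
        return "slightly increased: consider critical stenosis if LV is dilated"
    return "velocity not increased"

print(grade_aortic_velocity(2.4))  # increased velocity: moderate-to-severe stenosis possible
```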
In cases of critical aortic stenosis, prostaglandin E1 should be given at birth, as the systemic blood perfusion is ductal dependent. Subsequent balloon dilation of the valve or surgery needs to be performed. If the LV is hypoplastic, a biventricular repair may not be feasible. During fetal life, mitral stenosis is associated with a reduced flow to the LV. The LV and aorta are smaller than the RV and pulmonary artery, and the interatrial shunt is reversed (L-R). In the four-chamber view, there is a discrepancy between the ventricles and AV valves. The mitral valve is thickened, with a small size and reduced mobility. Associated cardiac anomalies, especially other left-sided cardiac lesions, must be excluded.
Aortic atresia and hypoplastic left ventricle syndrome
The most common sign of aortic atresia is a hypoplastic LV, detectable as an echogenic and dysfunctional chamber in the four-chamber view. In fetuses with the classic form of HLHS (aortic and mitral atresia), there is a markedly abnormal four-chamber view at mid-gestation, with no inflow into the left ventricle (mitral atresia) and a severely hypoplastic LV (Fig. 17). In cases of aortic atresia with mitral stenosis, the LV is more readily recognized. The degree of hypoplasia of the ascending aorta and the reduced size of the LA can vary according to the form of aortic atresia and HLHS. No anterograde flow across the aortic valve and reversed flow in the aortic arch are detectable in the 3 V view and the long-axis view of the aortic arch.
A restrictive interatrial shunt is associated with poor outcome, and analysis of the pulmonary vein Doppler pattern (reversed a-wave and low or absent D wave) is useful. Fetuses with a restrictive interatrial shunt may be candidates for percutaneous enlargement of the foramen ovale during the second trimester. 42 The right heart should be examined in detail to evaluate whether Norwood palliative surgery will be feasible. HLHS is associated with trisomies (18 and 13) and abnormalities of the central nervous system (CNS).
In fetuses with HLHS, the delivery should be planned at a tertiary care center experienced with complex CHD. All newborns with HLHS are ductal dependent for systemic blood flow, so prostaglandin infusion should be initiated. Neonates with HLHS and a restrictive foramen ovale will require atrial septostomy/septectomy after delivery. In general, the surgical management of HLHS requires three palliative surgical procedures: a Norwood (or Norwood-Sano) operation or a hybrid approach within the first week after delivery, followed by a Glenn procedure at approximately six months and a Fontan procedure at two to three years. Heart transplantation may be considered an alternative in cases of newborns with right-sided anomalies or of long-term survivors with progressive heart failure. 43
6. Conotruncal anomalies
Tetralogy of Fallot
The tetralogy of Fallot (TOF) is the most common cyanotic heart defect, with a frequency of 7% to 10% in children with CHD. The tetralogy of Fallot is characterized by: 1-a VSD; 2-an overriding aorta; 3-infundibular pulmonary stenosis; and 4-right ventricular hypertrophy. However, in utero, the RV hypertrophy is almost always absent and the four-chamber cardiac view is normal. As the defect results from an anterior deviation of the conal septum, the RVOT, the overriding of the aorta, and a malalignment VSD are the clues to the diagnosis of TOF, enabled by the five-chamber view and the basal short-axis view. Furthermore, a smaller pulmonary artery and a larger aorta are observed in the 3VT view (Fig. 18). The cardiac anomalies most frequently associated with TOF and reliably diagnosed in fetuses are: right aortic arch (20%); AVSD (5%); additional muscular VSDs; and an anomalous left subclavian artery. 44 DiGeorge syndrome, trisomies (13, 18, and 21), omphalocele, and pentalogy of Cantrell, among other conditions, may be associated with TOF. 45 The management and prognosis of TOF depend on the details of the anatomy and physiology. TOF with pulmonary stenosis is the classical form. TOF with pulmonary atresia (TOF/PA) is an extreme form of tetralogy in which the patent pulmonary outflow may progress to pulmonary atresia during fetal life. In fetuses with TOF/PA, the pulmonary arteries are severely hypoplastic or absent, and the pulmonary blood flow is dependent on collaterals or on the ductus arteriosus. A longitudinal aortic view with color Doppler facilitates showing the flow from the ductus, or from descending aorta collaterals, to the pulmonary artery. Unusually, TOF is associated with absence of the pulmonary valve (TOF/APV). Interestingly, the ductus arteriosus is typically absent in cases of TOF/APV, and an aneurysmal dilatation of the pulmonary artery may progress with a risk of heart failure in utero. Unlike the other forms of TOF, the four-chamber view in TOF/APV is abnormal, with an enlarged RV and, in cases with tricuspid regurgitation, a dilated RA. The pulmonary valve is usually absent and incompetent, and the pulmonary arteries are enlarged. After birth, in severe cases the patients present pulmonary hypoplasia or severe airway disease due to chronic extrinsic compression of the airways by the aneurysmal pulmonary arteries during fetal life. Fetal magnetic resonance imaging (MRI) of the airways and lungs may be considered to enable risk stratification for liquid ventilation and extracorporeal membrane oxygenation (ECMO) at birth. 44 Classic forms of TOF tend to be well tolerated during the neonatal period, and the patient may be discharged for follow-up, usually after the first week. In general, the surgical correction will be performed during the first year of life, although some cases may require a prior palliative Blalock-Taussig operation. In cases of TOF/PA with ductal-dependent pulmonary flow, prostaglandin infusion should be initiated, with a sequential surgical approach soon after birth. The forms of TOF/APV may evolve with congestive heart failure and higher perinatal mortality. Pregnant women with fetal TOF with pulmonary valve agenesis should be referred for delivery at a tertiary reference center with a pediatric cardiology team, available cardiac surgery, and ECMO. 37
Transposition of the great arteries
Transposition of the great arteries (TGA) is a frequent cyanotic CHD characterized by a discordant ventriculoarterial connection, in which the aorta arises from the RV and the pulmonary artery from the LV. It occurs in 5% to 7% of cases of CHD in childhood; however, it is one of the CHDs most commonly underdiagnosed in utero. 46 TGA is rarely associated with chromosomal or extracardiac anomalies, and in 40% of cases it is associated with a VSD. Serial fetal echocardiography is recommended because of the risk of associated outflow tract defects (pulmonary stenosis and aortic arch obstruction). Two major types of TGA have been described: dextro- or D-TGA and levo- or L-TGA.
In most cases of D-TGA, the atrial situs is solitus. In D-TGA, the ventricular looping is normal (D-looped: the RV is the right-sided and anterior ventricle), and the relationship of the great arteries is abnormal, with mitral-pulmonary fibrous continuity. The four-chamber view is normal in fetuses with isolated D-TGA, but the outflow tracts are markedly abnormal. The arteries do not cross each other (parallel arteries) and can be visualized in a single imaging plane. Instead, the five-chamber view shows the pulmonary artery arising from the LV and the aorta arising from the RV. The arteries are arranged in a different fashion in the 3VT view, with a right anterior aorta and a left posterior pulmonary artery. 46 Consequently, the finding of only two vessels (transverse aorta and SVC) in the 3VT view is frequent in cases of TGA, as the pulmonary trunk is not visible in the great vessels transverse view. 47 In L-TGA, the atrial situs is normal; however, the ventricles are L-looped (the RV is the left-sided and posterior ventricle). L-TGA is characterized by the combination of discordant atrioventricular connection and discordant ventriculoarterial connection. Because of the discordance at two levels, the physiology of this defect is congenitally corrected, which explains its being known as congenitally corrected transposition of the great arteries. Many cases of L-TGA are associated with VSD, Ebstein's anomaly of the left-sided tricuspid valve, complete heart block, and pulmonary stenosis/atresia. Typically, the four-chamber view is abnormal, with a left-sided ventricle containing the moderator band, which characterizes the morphological RV. The aorta is located anterior and to the left of the pulmonary artery. 46 D-TGA is an absolute indication for delivery at a tertiary referral center with a pediatric cardiologist and cardiac surgery available. After birth, newborns with D-TGA should receive prostaglandin infusion to maintain ductal patency and may require balloon atrial septostomy. Subsequently, patients with D-TGA usually undergo the Jatene arterial switch operation within the first one to two weeks after delivery. In cases of D-TGA with VSD and pulmonary stenosis, the Rastelli operation (conduit between the LV and aorta) will be performed or, in selected cases, an atrial switch operation. After delivery, patients with L-TGA may not require any intervention. However, the presence of associated cardiac abnormalities may require a subsequent surgical intervention (double-switch operation), or even a neonatal intervention. 46
Double-outlet right ventricle
Double-outlet right ventricle (DORV) refers to a group of heart defects in which both great arteries arise predominantly (>50%) from the morphological right ventricle. In DORV, there is aortic-mitral discontinuity and, in almost all cases, a VSD is present. The type of DORV depends on the type of the VSD and the features of the outflow tracts: DORV type Fallot (subaortic VSD + pulmonary stenosis), Taussig-Bing anomaly (subpulmonary VSD + transposed great arteries), or DORV with a doubly committed or non-committed VSD. The incidence of this CHD in newborns is 0.03-0.07 per 1000 live births. 48 The prognosis will depend on the type of DORV, the complexity of associated cardiac anomalies, the extracardiac lesions, and the presence of chromosomal abnormalities. Trisomies 21, 18, and 13, extracardiac abnormalities, and DiGeorge syndrome (22q11) are more common in DORV type Fallot. A deletion in chromosome 22q11 is associated with thymic hypoplasia or absence, which can be detected on fetal ultrasound in the 3VT view. As is typical of the conotruncal anomalies, the four-chamber view is normal, and all fetuses with DORV have abnormalities of the outflow tract views. The presence of a malaligned VSD with an overriding vessel (>50%) and aortic-mitral discontinuity may be helpful in making the differential diagnosis in the five-chamber view. 47 Delivery counseling at a tertiary referral center, with a pediatric cardiologist and cardiac surgery available, will depend on the type of DORV and the presence of associated extracardiac and/or chromosomal abnormalities.
Truncus arteriosus
Truncus arteriosus is a rare condition (1.5% of CHD in newborns) in which a single arterial trunk arises from the ventricles and gives flow to the systemic, pulmonary, and coronary circulations. In almost all cases there is a large malaligned VSD, and it is strongly associated with DiGeorge syndrome. 44 In fetuses with truncus arteriosus, the four-chamber view is normal. The five-chamber view is markedly abnormal, with the presence of a thickened truncal valve that overrides a large VSD. During fetal life the differentiation between TOF/PA and truncus can be a challenge, as both the 3 V and 3VT views show only two vessels. In TOF/PA, the aortic valve is large and thin, whereas in truncus, the truncal valve is thickened. Furthermore, in truncus arteriosus, the pulmonary arteries usually have good size and the ductus arteriosus is absent. Depending on the anatomy, four types of truncus are described: type I-the pulmonary trunk arises from the common trunk (Fig. 19); type II-the pulmonary arteries arise directly from the common trunk; type III-one of the pulmonary arteries is absent, with collateral circulation; and type IV-truncus arteriosus with an interrupted aortic arch. 49 Commonly, patients with truncus arteriosus undergo surgical repair within the first few weeks after delivery, with VSD closure and usually placement of a pulmonary conduit into the RVOT.
Coarctation of the aorta/interrupted aortic arch
Coarctation of the aorta represents a narrowing of the aortic arch and, in general, is located between the origin of the left subclavian artery and the ductus arteriosus (aortic isthmus). Prenatally, coarctation may be difficult to detect, mainly in its mild and moderate forms, because the ductus arteriosus is patent. Chromosomal abnormalities are present in about 30% of fetuses with coarctation of the aorta.
Prenatally, ventricular size discrepancy with right dominance in the four-chamber view should raise suspicion for the diagnosis of coarctation of the aorta. In the 3VT view, an abnormal size ratio of the great arteries (PA/AO >1.5) and/or an isthmus-ductus ratio (<0.74) increases the likelihood of coarctation (Fig. 20). 50 Finally, the presence of transverse aortic arch hypoplasia and isthmus hypoplasia in the long-axis view of the aortic arch is the most sensitive feature. 51 Isthmus measurements and Z-scores (<-2.0) for gestational age are very helpful to detect hypoplasia. 52 Coarctation of the aorta may develop in utero, and sequential ultrasound/echocardiographic evaluation is recommended.
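The three numeric clues above can be combined in a simple screening sketch; the function below returns which indices are abnormal (PA/AO >1.5, isthmus-ductus ratio <0.74, isthmus Z-score <-2.0). How these flags should be weighted against one another is a clinical judgment, so the structure here is an illustrative assumption.

```python
def coarctation_flags(pa_diameter_mm: float,
                      ao_diameter_mm: float,
                      isthmus_diameter_mm: float,
                      ductus_diameter_mm: float,
                      isthmus_z_score: float) -> dict:
    """Return which coarctation screening indices from the text are abnormal.

    All diameters are assumed positive and in the same units (mm).
    """
    return {
        "pa_ao_ratio_gt_1_5": (pa_diameter_mm / ao_diameter_mm) > 1.5,
        "isthmus_ductus_lt_0_74": (isthmus_diameter_mm / ductus_diameter_mm) < 0.74,
        "isthmus_z_lt_minus_2": isthmus_z_score < -2.0,
    }

# Example with assumed measurements (mm) and a Z-score of -2.4
print(coarctation_flags(8.0, 5.0, 2.0, 3.2, -2.4))
# all three flags True -> coarctation should be strongly suspected
```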
After birth, depending on the severity of the coarctation, these patients will require catheter or surgical intervention, with an excellent prognosis. However, after repair in the neonatal period, the frequency of reintervention for recoarctation is higher (up to 50% of cases). In cases of critical coarctation, the delivery should be planned (38-39 weeks of gestation) at a hospital where a team of interventional cardiology and/or cardiovascular surgery is prepared.
Interrupted aortic arch is, in almost all cases, associated with a malaligned VSD. In such cases, the flow across the defect is reversed (left-right shunt). A discrepancy in ventricular and arterial sizes is also present. In the most common form of interrupted aortic arch (type B), the ascending aorta appears straighter than normal, and the left subclavian artery arises from the junction of the ductus arteriosus with the descending aorta (Fig. 21). Furthermore, the thymus should be evaluated, as 22q11 microdeletion is frequently associated with interrupted aortic arch. 53 Interrupted aortic arch represents a ductal-dependent systemic circulation requiring initiation of prostaglandin E1 infusion immediately after delivery. These patients require neonatal surgical repair, and the delivery must be planned near term at a hospital with a team of pediatric cardiologists and cardiac surgery available.
Right aortic arch and vascular ring/double aortic arch
A right aortic arch with a left subclavian or innominate artery and a left ductus is not uncommon. The right-sided descending aorta is observed in the 3 V and four-chamber views. Additionally, a gap between the pulmonary trunk and the ascending aorta and a U-shaped vascular loop enable the diagnosis in the 3 V view (Fig. 22).
Double aortic arch is the only form of vascular ring in which the ring consists exclusively of vessels. In double arch, the right arch is usually larger than the left-sided arch. A left ductus and a left descending aorta are present. The two aortic arches, the pulmonary trunk, and the ductus arteriosus form the vascular ring as a figure resembling the numbers "6" or "9" in the 3 V view. 54
9. Complex congenital heart disease
Univentricular heart
The univentricular heart or "single ventricle" is a condition in which both atria are connected to a dominant ventricle. This ventricle maintains both the systemic and pulmonary circulations. The most common form of univentricular atrioventricular connection is the double-inlet. The other two forms of univentricular AV connection are: single inlet (mitral or tricuspid atresia); and common inlet (single AV valve). The dominant ventricular chamber can be of morphological LV or RV type, or even undetermined. 55 The rudimentary chamber may be underdeveloped or absent. It is a rare cardiac anomaly that occurs in 2.5% of live births with congenital heart disease.
The presence of a main ventricular chamber with absence of the ventricular septum in the four-chamber view enables the diagnosis of a univentricular heart. The existence of a "single ventricle" per se usually does not produce a significant hemodynamic change in the fetus.
After birth, the hemodynamics of the univentricular heart depend on the other associated anomalies. Depending on the outflow tracts, newborns undergo palliative surgery with PA banding or a systemic-pulmonary shunt (Blalock-Taussig operation). A surgical cavopulmonary anastomosis between the SVC and the pulmonary artery (Glenn operation) is performed at about six months of age, and the circulation is completed by directing the flow of the inferior vena cava to the pulmonary artery (Fontan operation), generally between two and four years of age.
Cardiac biometry
The initial evaluation of a fetus with heart failure, or at risk of myocardial dysfunction, includes assessment of the presence or absence of cardiomegaly. The cardiothoracic index (CTI) is the ratio between the cardiac and chest circumferences (normal values up to about 0.5) or the ratio between the cardiac and thoracic areas (normal values up to about 0.35). 56 The CTI is altered in conditions that involve global cardiomegaly, such as tricuspid dysplasia and Ebstein's anomaly and cardiomyopathy, or secondary to extracardiac causes, such as anemia, twin-to-twin transfusion, and infection.
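A minimal sketch of the CTI computation follows, assuming circumference- or area-based inputs in consistent units; the normal limits encoded as defaults (0.5 for circumferences, 0.35 for areas) follow the text, with the "up to" reading being an assumption, since the original symbol was garbled.

```python
def cardiothoracic_index(cardiac: float, thoracic: float,
                         limit: float = 0.5):
    """Compute CTI = cardiac / thoracic and flag values above the limit.

    Pass circumferences with limit=0.5 or areas with limit=0.35
    (both in the same units, e.g. mm or mm^2).
    """
    cti = cardiac / thoracic
    return cti, cti > limit

# Example: cardiac area 9.8 cm^2, thoracic area 24.5 cm^2 -> CTI about 0.4
print(cardiothoracic_index(9.8, 24.5, limit=0.35))  # flagged: cardiomegaly
```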
Abnormal size of the cardiac chambers can be an early indicator of CHD. In fetuses with cardiac asymmetry, the left and right ventricular widths and lengths should be evaluated. The RV and LV widths are measured in the four-chamber cardiac view when the atrioventricular valves are closed, before the onset of cardiac systole (at the end of diastole). The maximal RV and LV widths are measured just below the atrioventricular valves, from inner edge to inner edge of each ventricle. 57 The RV to LV width ratio can be calculated and has been used to screen for cardiac anomalies. Before 25 weeks of gestation, RV dominance should raise the suspicion of coarctation of the aorta. 50,51 However, during the third trimester, a slight dominance of the RV may be physiological (normal: RV/LV <1.5). 58 The maximal lengths of the ventricles are measured from the AV valve to the apex of each ventricle. LV hypertrophy and changes in cardiac geometry, characterized by a more globular heart, may occur and can be assessed using the LV sphericity index (the ratio between the longitudinal and transverse diameters of the LV). 59 Indeed, the z-score system expresses the ventricular chamber and vessel measurements as a number of standard deviations for gestational age or relative to fetal anthropometric values (femur length/biparietal diameter). Z-scores are a very helpful tool to screen for RV or LV hypoplasia and outflow tract anomalies. 57,60
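The two screening quantities in this paragraph are straightforward to compute; the sketch below pairs the RV/LV width ratio (third-trimester normal <1.5 per the text) with a generic z-score, z = (measured - mean) / SD, where the gestational-age reference mean and SD must come from published nomograms (the example values are invented for illustration).

```python
def rv_lv_width_ratio(rv_width_mm: float, lv_width_mm: float) -> float:
    """End-diastolic RV/LV width ratio; third-trimester normal is <1.5."""
    return rv_width_mm / lv_width_mm

def z_score(measured_mm: float, ref_mean_mm: float, ref_sd_mm: float) -> float:
    """Standard z-score against a gestational-age reference (from nomograms)."""
    return (measured_mm - ref_mean_mm) / ref_sd_mm

# Example with invented numbers: RV 16 mm, LV 10 mm -> ratio 1.6 (suspicious)
print(rv_lv_width_ratio(16.0, 10.0))   # 1.6
# Aortic isthmus 2.0 mm against an assumed reference of 3.0 +/- 0.4 mm
print(z_score(2.0, 3.0, 0.4))          # about -2.5, below the -2.0 cutoff
```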
Conclusion
Considering that CHD is the most important cause of infant mortality due to birth defects, the fetal diagnosis of cardiac defects is the point of care for improving the outcome in critical CHD, especially where the circulation depends on patency of the ductus. In this scenario, cardiac screening in fetuses remains a challenge that involves a team of professionals. In the modern era, ultrasonography has shown important advances, and CHD is expected to be diagnosed in detail during fetal life, which may improve the prognosis for most cases of CHD. Furthermore, it is important to recognize the importance of the cardiovascular system in fetal well-being. Therefore, in this study, important aspects of cardiac sonography that may be applied in clinical practice have been covered in order to improve the diagnosis of cardiac defects, enabling management in utero, planning of delivery, and identification of the CHD that may progress in utero. In conclusion, many studies and guidelines have been published, and further studies should be done in order to provide the tools that enable the diagnosis of CHD by ultrasound examiners.
"year": 2017,
"sha1": "973096eacf0767618901966ddbec0ce7ca1c6e0d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ihj.2017.12.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4251196422d8cdcb2d986186c6dcff2a83aa9011",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Public mass shootings cause large surges in Americans' engagement with gun policy
Abstract As public mass shootings continue to plague the United States, a growing scholarly literature seeks to understand the political effects of these tragic events. This literature, however, focuses on public opinion or turnout and vote choice, leaving open to question whether or not public mass shootings affect a range of other important actions citizens may take to engage with gun policy. Leveraging the as-good-as random timing of high-publicity public mass shootings over the past decade and an immense array of publicly available and proprietary data, we demonstrate that these events consistently cause surges in public engagement with gun policy—including internet searches, streaming documentaries, discussion on social media, signing petitions, and donating to political action committees. Importantly, we document the behaviors where shootings induce polarizing upswings in engagement and those where upswings skew toward gun control. Finally, we demonstrate that low-publicity shootings largely exert little-to-no effect on our outcomes.
Every year over the past decade, the United States has experienced one or more public mass shootings, where an assailant with firearms entered a public space (e.g., a school, religious institution, workplace, shopping venue, or festival) and opened fire on victims in a haphazard manner. These instances of acute gun violence have occurred in every region of the country and have taken the lives of individuals across the spectrum of age, gender, race and ethnicity, religion, and socioeconomic status. Striking, however, has been the lack of sweeping policy change in the United States to curb gun violence given the recurrence of public mass shootings (1,2). At the federal level, consequential changes to gun law are seldom proposed and rarely make it past the Congressional committee stage (3). What is more, research at the state level finds that, rather than tightening gun restrictions, firearm laws across the 50 states become less restrictive following public mass shootings (4). A common explanation offered by pundits and journalists for the lack of drastic policy change toward greater gun control in response to public mass shootings is that these events, while horrendous, do not propel the American public into action aimed at curbing gun violence (5,6). For example, a recent editorial in the Washington Post asserted, "rarely do Americans who support gun control make it their top priority" (7). Some contend that the persistence of mass shootings has acclimated the American people to gun violence to such an extent that "mass shootings have become white noise" (8), with the result being "Americans get apathetic about gun control" (9). Indeed, journalists have suggested that the recurrence of chaotic events like mass shootings can lead to "crisis fatigue," which causes society to "collectively throw up our hands and give up on civic engagement" (10). Adding to this, many claim there exists an "enthusiasm gap" about gun policy in America, with opponents of gun control holding gun rights as central to their political identity and highly willing to engage in routine political action in support of their views, while public support for gun control ebbs and flows in a fleeting manner around instances of acute gun violence (5,11,12). Topping this off is evidence that many public mass shootings fail to trigger national media attention (9), while the ones that do often succumb to the "issue-attention cycle" (13), whereby spikes in coverage precipitously decline a few weeks after a shooting and the focus of the media, and thus the general public, shifts from gun control to other issues.
Even accounts of gun politics in the United States highlighting the presence of episodes of action toward gun control leave unclear how ordinary citizens engage with gun policy in the wake of gun violence. For example, one long-standing and prominent account depicts gun politics in America as a recurrent cycle of "outrage-action-reaction" resulting in policy gridlock (14). According to this account, instances of horrific gun violence, such as public mass shootings, cause surges in public outrage and political action to achieve gun control, which are promptly countered and stymied by powerful gun rights advocates. This account implies that public outrage ebbs and flows around acute incidents of gun violence. Moreover, this account largely documents the "action" stage as comprised of the activities of activist groups and major gun control advocacy organizations (e.g., the Brady Campaign and Everytown for Gun Safety), with little discussion or empirical assessment of mass political behavior and large-scale actions by ordinary citizens. The components of this account best supported by extant empirical research are "outrage" in response to public mass shootings (15) and the power of pro-gun interest groups in shaping legislator behavior and policy outcomes (16-18). In sum, the standing wisdom is that public mass shootings, at best, result in small and ephemeral shifts in attitudinal support for gun control that are ultimately unsupported by surges in mass political action aimed at curbing gun violence. And at worst, public mass shootings are part of a constant background noise of ongoing crises for Americans that do not provoke engagement with gun policy.
Is the standing wisdom true? Do Americans, in fact, fail to engage in action to promote gun control following public mass shootings? Moreover, is there a stronger response following public mass shootings among opponents of gun control in defense of gun rights? Surprisingly, existing research does not provide clear answers to these questions. On the one hand, in contrast to the claim that mass shootings have become "white noise," existing research demonstrates that public mass shootings take a significant psychological and emotional toll on the American people (15,19). On the other hand, looking beyond Americans' well-being to their political attitudes, past research finds that public mass shootings do not elicit a clear or consistent effect on Americans' opinions on gun control (20-23). Perhaps more importantly, with respect to electoral behavior, recent research demonstrates that public mass shootings do not heighten voter turnout or influence party choice in federal, state, or local elections (24,25). Judging by these findings alone, one may suspect the standing wisdom is true.
However, conspicuously absent from the literature is research analyzing Americans' responses to public mass shootings focusing on the myriad ways people may engage with politics and attempt to influence public policy beyond reporting opinions to pollsters or casting votes in elections. Following public mass shootings, do Americans do things like seek out political information, engage in political discussion, express their opinions through the display of political banners or flags, sign petitions sent to policymakers, or donate money to policy advocacy organizations? Such behaviors are critically important because public engagement and action around an issue in these ways has a powerful effect on policy above and beyond merely holding an opinion (26)(27)(28)(29). Moreover, the opinions individuals profess to hold do not always match the actions that they take, especially when such actions require costs that individuals may not be willing to meet (30,31). And yet, despite the vital importance of such actions for democratic responsiveness, the scholarly literature renders us without an answer to this question.
Prior research on mass shootings (20,23,24) theoretically draws on the literature on "focusing events" (32,33), which argues that sudden and harmful events forcibly direct societal attention to a problem and mobilize the general public into action to remedy aspects of the status-quo policy environment deemed responsible for the event. Applied to mass shootings, the operative hypothesis is that these appalling events swiftly direct Americans' attention to gun violence, highlight the problem of inadequate government regulation of deadly firearms, and mobilize the public around policy change toward greater gun control. While the application of this framework to mass shootings has rendered mixed or unsupportive results with respect to Americans' attitudinal support for gun control or turnout and vote choice in elections, it remains to be seen whether or not this framework finds empirical support when analyzing nonelectoral forms of engagement with gun policy. One recent piece of suggestive evidence in support of this framework comes from Goss and Lacombe (34), whose presentation of trends in the volume of letters sent by citizens to the editors of four newspapers illustrates observable spikes in the number of gun-related letters following several high-profile public mass shootings (e.g., Columbine, Sandy Hook, and Parkland). While important in their own right, the authors admit that their findings are descriptive in nature, focus on a single behavior, and do not involve a systematic analysis of the causal effect of a broad set of public mass shootings on gun-related letters to editors. As such, it remains open to question whether public mass shootings cause consistent surges in various forms of behavioral engagement with gun policy.
In this article, we analyze an array of previously unexplored actions Americans may take in response to public mass shootings. We collected a compilation of large-scale publicly available and proprietary data measuring an array of indicators of engagement with gun policy. These indicators include: measures of political information-seeking, such as internet searches for firearm-related policy positions and advocacy organizations (Google Trends data) and streaming prominent documentaries about gun policy (proprietary data from media companies); measures of political discussion and expression, including online political speech (Twitter data) and purchases of pro-gun political flags for display (Amazon.com sales data); and finally, measures of efforts to directly influence politicians and public policy, such as signing petitions sent to lawmakers (Change.org and Patriot Voices data) and donating money to the political action committees (PACs) of gun policy advocacy organizations (Federal Election Commission contributions data). These measures capture forms of engagement that vary in their costs in time, effort, and money to ordinary citizens, as well as in their potential visibility and significance to key policy actors. Importantly, for most of these indicators of engagement, we measure activity on both sides of the gun policy debate, that is, activity oriented toward gun control and gun rights. This enables our analyses to speak to the issue of countervailing political engagement, with the overarching goal being the detection of a potential tilt in activity toward one side of the gun policy debate vs. the other.
With these data in hand, we leverage the as-good-as-random timing of public mass shootings occurring over the past decade to estimate the causal effect of these events on Americans' engagement with gun policy. Given known variation in media coverage of public mass shootings (35), the importance of mass media in shaping the political effects of local events by elevating their salience (36,37), and the considerable power of the media in general to shape political priorities and discourse (38,39), we collected information about the level of national news media attention given to each public mass shooting. The principal expectation is that receiving extensive publicity elevates a shooting to the status of "focusing event" capable of mobilizing public engagement with gun policy, whereas shootings attracting less national media attention will remain localized events with limited impact on mass political behavior.
In contrast to the standing wisdom that public mass shootings do not propel Americans into action, our findings demonstrate that high-publicity public mass shootings cause drastic increases in internet searches for gun control and gun control advocacy organizations, online political speech mentioning gun control and gun control advocacy organizations, signing of petitions demanding gun control, and donations to the PACs of gun control organizations. These spikes in political activity are typically quite large, constituting a multiple-SD shift from preshooting patterns for many of these outcomes. Interestingly, high-publicity public mass shootings typically prompt countervailing spikes in information-seeking about and online discussion of gun rights and pro-gun political organizations. That said, when analyzing sales on Amazon.com of popular political flags with gun rights slogans, we fail to observe any effect of public mass shootings on purchases. Adding to this, when turning our focus to efforts by Americans to directly influence lawmakers and policy advocacy organizations, namely, petition signing and PAC donations, we find that high-publicity public mass shootings only trigger activity oriented toward gun control. Finally, we find that low-publicity public mass shootings typically fail to instigate significant changes in Americans' level of engagement with gun policy.
Beyond opinions and voting: measuring Americans' engagement with gun policy
Our analysis relies on a unique and immense compilation of publicly available and proprietary data on daily indicators of Americans' engagement with gun policy. The various datasets used in our analysis are summarized in Table 1. The data we compiled far surpass previously published work in terms of breadth and granularity.
We begin with daily activities in pursuit of information about gun policy, namely, internet search behavior and viewership of documentaries. For internet search behavior, we retrieved publicly available data on search histories from Google Trends between 2011 July 6 and 2022 February 26. Because of the private nature of internet searches, such information represents true interests (40), can be used effectively to identify information acquisition among individuals in an area (40)(41)(42)(43), and has been shown to match actual behaviors and outcomes (44)(45)(46)(47). We collected Google Trends data on Americans' internet searches about basic gun policy positions ("Gun Control" and "Gun Rights") and political organizations working to achieve gun control ("Brady Campaign" and "Everytown for Gun Safety") and to preserve gun rights ("the NRA" and "Gun Owners of America"). We combine these with data on daily searches for terms unrelated to gun violence and gun policy (e.g., "Recycling"), which we use to perform placebo tests with the expectation of null effects of public mass shootings on these presumed treatment-irrelevant outcomes.
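To make the retrieval concrete, the snippet below sketches how daily series like these can be pulled with pytrends, a widely used unofficial Python client for Google Trends. The keyword list mirrors the terms above, but the tool, windowing, and file names are illustrative assumptions, not a description of the authors' pipeline.

```python
# Illustrative Google Trends retrieval with the unofficial pytrends client.
# Google returns daily data only for windows shorter than ~9 months, so a
# full 2011-2022 daily series must be stitched from shorter pulls (not shown).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
terms = ["Gun Control", "Gun Rights", "Recycling"]  # placebo term included

for term in terms:
    pytrends.build_payload([term], timeframe="2018-01-01 2018-06-30", geo="US")
    series = pytrends.interest_over_time()[term]  # rescaled 0-100 by Google
    series.to_csv(f"trends_{term.replace(' ', '_').lower()}.csv")
```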
For additional self-educating behavior, we queried Google's videos tab for "gun control documentary" and "gun rights documentary." We contacted the media companies who produced the films in the search returns with requests for daily streaming data on the company's proprietary web platform and/or their YouTube videos page. Most staff within media companies are guarded from the public, making it difficult to contact relevant personnel with data requests, and the majority of our requests did not receive a response or were denied. In total, we received proprietary daily streaming data for four popular documentary films about gun politics in America. First, we received data for one of the most popular American gun politics documentaries released within the past decade: Gun Nation, produced by the British newspaper The Guardian.a This documentary was released in 2016 September 16 and had been streamed over 490,000 times at the time of our data collection. We received streaming data for this documentary from The Guardian's YouTube channel starting in 2016 September 16. Accompanying this, the American Public Broadcasting Service (PBS) provided us with data for two publicly available documentaries produced by their investigative journalism program Frontline. The first documentary, Gunned Down: The Power of the NRA, was released in 2015 January 6 (streamed over 461,000 times at the time of data collection), and the second, NRA Under Fire, was released in 2020 March 24 (and had been streamed over 45,000 times). Finally, we received YouTube streaming data for the Real Stories documentary, The Gun Store, a 2019 gun violence documentary that had been streamed over 13,000 times at the time of data collection. For each documentary, we received daily streaming data from its release date up through between June and September 2021. We make no claim that these four documentaries are representative of the universe of extant documentary films about gun policy in America. Rather, we began with a purposive sample of popular documentary films based on Google search returns, and the resulting four documentaries we analyze represent a convenience sample of retrieved data. Despite this, our sample includes popular documentaries released by prominent media outlets and provides us with a meaningful first step in understanding the impact of public mass shootings on efforts by Americans to educate themselves about gun policy via streaming documentaries.
We complement these indicators of daily information-seeking with data on the daily volume of online discussion of gun policy on the social media platform Twitter. Discussion of political issues on platforms like Twitter is a common form of contemporary political activity (48). Twitter reported having over 69 million monthly active users in 2021,b and roughly 23% of American adults in 2021 reported using Twitter.c Beyond providing a space where citizens discuss political issues, Twitter also serves as an arena where the public interacts with government officials, resulting in the demonstrated capacity of discourse on Twitter to direct the attention of legislators and shape their political agendas (26). Twitter data have been used to understand a variety of social and political phenomena (26,(49)(50)(51), including specifically measuring public discussion of political issues (38). We used Brandwatch's Crimson Hexagon to collect the daily count of tweets from April 2011 to June 2021 that included gun policy position hashtags ("#guncontrol" and "#gunrights"), mentioned gun policy advocacy organizations ("#everytown" and "#NRA"), or an unrelated hashtag used as a placebo ("#recycling"). All in all, our dataset totals nearly 20 million tweets (n = 19,877,924).
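A minimal sketch of how such daily count series can be built from a raw tweet export is given below; the file and column names are hypothetical placeholders, since the Crimson Hexagon export format is not described here.

```python
# Build daily hashtag-count outcomes from a hypothetical tweet export.
import pandas as pd

tweets = pd.read_csv("tweets_export.csv", parse_dates=["timestamp"])

def daily_hashtag_counts(df: pd.DataFrame, tag: str) -> pd.Series:
    """Count tweets per calendar day whose text contains the given hashtag."""
    hits = df[df["text"].str.contains(tag, case=False, na=False)]
    return hits.set_index("timestamp").resample("D").size()

guncontrol = daily_hashtag_counts(tweets, "#guncontrol")
gunrights = daily_hashtag_counts(tweets, "#gunrights")
recycling = daily_hashtag_counts(tweets, "#recycling")  # placebo series
```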
In addition to posting comments on social media, a common way Americans express their views is displaying political signs or flags (52). Prior research finds that displaying political signs can exert modest yet reliable persuasion effects on nearby residents (53) and that an estimated 12 to 19% of Americans have displayed a political sign at their home.d Applied to gun policy, people may display signs or flags that convey their support for or opposition to gun control. A search of the Amazon.com marketplace led us to identify three popular flags endorsing gun rights sold by one of the largest online vendors of political flags, ANLEY INC. These three flags are: (i) a "Second Amendment" American flag displaying text of the second amendment, (ii) a "Liberty or Death-2nd Amendment" flag displaying a skull and crossbones with rifles, and (iii) a "Come and Take It" flag exhibiting a rifle. We contacted ANLEY INC. and the CEO of the company provided us with daily sales data for each of these flags from the company's store on Amazon.com between 2021 January 1 and 2022 January 1. Using these data, we constructed a series of the daily count of sales for each flag and for all three flags combined (n = 46,276 total sales during this time period). While we were able to locate flags endorsing gun rights, there is a notable absence of flags endorsing gun control. ANLEY does not carry a single flag endorsing gun safety or control, and a search on Amazon.com for "gun control flag" did not render a single result. As such, our analysis of flag sales offers a window into the effect of public mass shootings on a behavior seemingly unique to the gun rights side of the policy debate.
To extend our analysis to acts directly intended to influence lawmakers and public policy, we obtained data on daily petition signing and political contributions to gun policy advocacy organizations. For petition signing, we submitted requests to the creators of popular gun policy petitions on Change.org, one of the largest and most popular websites for creating and circulating political petitions in the world. There are over 200 petitions at Change.org categorized under the topic heading "gun control," and the petition bearing the most signatures, titled "Pass Common Sense Gun Control," was created by one of the students surviving the 2018 Stoneman Douglas High School shooting. We submitted a request for data to the creator of this petition, who subsequently shared anonymized data on all signatures along with the date and country of residence of the signer. The data included n = 375,032 signatures from the United States collected between February 15, 2018 and July 20, 2021. We accompany this with similarly anonymized signature data from the second most popular gun control petition on Change.org, which was created by a physician in San Francisco, CA, on October 2, 2017, following the mass shooting in Las Vegas. This petition, entitled "Physicians Demand Stricter Gun Control," included n = 220,003 signatures collected in the United States between October 3, 2017 and May 12, 2023. The third most popular petition on Change.org was created by a Walmart employee following the mass shooting at a Walmart store in El Paso, TX, on August 3, 2019. This petition, entitled "Stop the sale of guns at Walmart stores," included n = 163,457 signatures collected in the United States. All three of these petitions are listed on one of the most popular online petition platforms. As such, analysis of these petitions provides important initial insight as to whether public mass shootings propel Americans to engage with gun policy by signing prominent online petitions.e Finally, we accompany the petition data with information retrieved from the United States Federal Election Commission (FEC) on publicly reported political contributions made between January 7, 2013 and December 31, 2020 to the two largest gun control and gun rights PACs operating in the United States that were actively raising money during this time period: the Giffords PAC (gun control) and the National Rifle Association (NRA) Victory Fund (gun rights).f Relying on donations to PACs likely yields a substantial undercount of donation activities to these organizations, but itemized donations data for 501(c)3 and 501(c)4 gun policy nonprofits, which likely raise the vast majority of money following mass shootings, are not publicly available, a long-acknowledged fact in previous research that works with donations data (54).
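The daily donation outcomes can be assembled from itemized FEC records along the following lines; the column names follow the FEC API convention, and the committee IDs below are hypothetical placeholders rather than the actual identifiers used in the paper.

```python
# Aggregate itemized FEC contribution records into daily counts and amounts.
import pandas as pd

fec = pd.read_csv("itemized_contributions.csv",
                  parse_dates=["contribution_receipt_date"])

def daily_donations(df: pd.DataFrame, committee_id: str) -> pd.DataFrame:
    """Daily number (n) and dollar amount of donations to one committee."""
    sub = df[df["committee_id"] == committee_id]
    grouped = sub.groupby(sub["contribution_receipt_date"].dt.date)
    return grouped["contribution_receipt_amount"].agg(n="size", amount="sum")

giffords = daily_donations(fec, "C00000001")  # hypothetical committee ID
nra_fund = daily_donations(fec, "C00000002")  # hypothetical committee ID
diff_n = giffords["n"].sub(nra_fund["n"], fill_value=0)  # #Giffords - #NRA
```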
Analytic strategy
To identify the effect of public mass shootings on our outcomes, we use a regression discontinuity in time (RDiT) approach. Regression discontinuity designs (RDDs) leverage as-good-as-random variation and continuity in potential outcomes around an arbitrary cutoff to estimate a causal treatment effect (55)(56)(57)(58)(59). RDDs have been shown to benchmark well to randomized control trials, e.g., (60). The RDD that we use rests on the reasonable assumption that the precise timing of mass shootings is unanticipated by the public and exogenous to the mass behaviors that we consider as outcome variables.g Our analysis focuses on 44 public mass shootings that occurred between 2011 and 2021. These 44 shootings are listed in Fig. 1 and Appendix Table A1. The term "mass shooting" entails a broad umbrella of events typically involving three or more fatalities (not including the shooter) in a single shooting event.h Public mass shootings are a subset of mass shootings involving an assailant with firearms entering a public space and opening fire on victims in a haphazard manner. Excluded from this subset are mass shootings occurring in: private homes and other residential settings targeting family members, spouses, and romantic partners; public spaces resulting from spontaneous altercations between belligerents carrying firearms (e.g., bar fights); and public spaces resulting from criminal or gang activities (e.g., drive-by shootings or police shootouts). Much of the social science research analyzing the impact of mass shootings focuses on public mass shootings (15,19,20,24) because they typically involve higher victim counts and generate more media attention (35) than other types of gun violence involving three or more victims (e.g., familicide or gang-related shootings). We conducted an extensive search across multiple databases on mass shooting events to identify the subset occurring in schools and college campuses, workplaces, religious institutions, recreational areas and festivals, shopping venues, and other public settings. The list of 44 public mass shootings we identify is comprehensive, including the deadliest high-profile shootings during the decade under study (e.g., Las Vegas Harvest Music Festival, Orlando Florida Pulse Night Club, and Sandy Hook Elementary), as well as shootings generating considerably less national media attention (e.g., Marysville Pilchuck High School shooting, Carson City IHOP shooting, and Don Carter Lanes).
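The basic estimator can be sketched as a local linear regression with separate intercepts and slopes on either side of the event date. The bandwidth, kernel, and standard-error choices below are illustrative assumptions; the paper's exact specification may differ.

```python
# Illustrative RDiT estimate for one daily outcome and one shooting:
# a local linear fit with a jump ("post") at the event date and slopes
# allowed to differ on each side, within a fixed +/- 60-day window.
import pandas as pd
import statsmodels.formula.api as smf

def rdit_estimate(series: pd.Series, event_date: str, bandwidth: int = 60):
    df = series.rename("y").to_frame()
    df["days"] = (df.index - pd.Timestamp(event_date)).days
    df = df[df["days"].abs() <= bandwidth]
    df["post"] = (df["days"] >= 0).astype(int)
    fit = smf.ols("y ~ post + days + post:days", data=df).fit(cov_type="HC2")
    return fit.params["post"], fit.bse["post"]  # jump estimate and its SE
```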
Importantly, public mass shootings sometimes occur in event chains or temporal clusters, whereby one or more shootings occur shortly in time after an initial event. This phenomenon is sometimes referred to by journalists and scholars as a "contagion" or "copycat" effect (62). Several of the public mass shootings on our list occurred within several days of another shooting, such as the Gilroy Garlic Festival, El Paso Walmart, and Dayton Ned Pepper's Bar shootings in 2019. Within our data, 37 shootings can be analyzed as standalone events, whereas 7 shootings, given their proximity to others, are analyzed in event clusters. Our analysis includes 3 event clusters: the 2016 Kalamazoo County + Hesston Excel Industries cluster; the 2019 Gilroy + El Paso + Dayton cluster; and the 2021 Atlanta Massage Parlors + Boulder CO King Soopers supermarket cluster.i In total, these 40 public mass shooting events (37 standalone shootings, 3 shooting clusters) span a decade of time and are diverse in terms of geographic region, setting of shooting, and characteristics of the shooter and victims. We include a map (Fig. A1) and detailed table of the shootings (Table A1).
Figure 1 displays the cumulative proportion of media coverage in the nation's top-50 news sources (as categorized by Media Cloud) that mentioned the term "mass shooting" for the week following each shooting. The plot is sorted from highest to lowest coverage and grouped into quartiles. Existing scholarship addresses the question of why some public mass shootings garner more media attention than others (35). Public mass shootings, like any event occurring in a specific place and time, are unobserved by citizens outside of the immediate vicinity of the shooting or the confines of local social and media networks. As such, national media attention serves as a key vehicle for raising widespread public awareness of a shooting event, and thus, in bringing about a change in behavior across the populace. With this in mind, we present results for high-publicity and low-publicity public mass shootings, which we distinguish as those receiving above vs. below median levels of national media attention. To keep the presentation of results as succinct as possible, our figures focus on displaying the estimated effects of high-publicity public mass shootings (the "Top 20" shooting events in Fig. 1). All results presented below focus on this subset of high-publicity shooting events, and we present results for low-publicity public mass shootings (i.e., below-median or "Bottom 20" shooting events in Fig. 1 and Figs. A3 to A7) in the Appendix. This said, to aid readers in drawing comparisons in effects by level of media attention, our figures include meta-analytic estimates summarizing average effects across outcomes for shooting events receiving the most media attention ("Top 10"), all high-publicity shooting events receiving above-median media attention ("Top 20"), and low-publicity shooting events ("Bottom 20").
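The grouping itself reduces to a few lines of code once the coverage shares are in hand; the input file below is a hypothetical pre-retrieved Media Cloud extract, not the authors' actual data file.

```python
# Classify shootings into "Top 10" / "Top 20" / "Bottom 20" publicity groups
# from week-after coverage shares (hypothetical pre-retrieved input file).
import pandas as pd

cov = pd.read_csv("mediacloud_coverage.csv")  # columns: shooting, share
cov = cov.sort_values("share", ascending=False).reset_index(drop=True)
cov["top20"] = cov["share"] > cov["share"].median()  # high publicity
cov["top10"] = cov.index < 10                        # ten most covered
cov["bottom20"] = ~cov["top20"]                      # low publicity
```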
Information seeking
Figure 2 displays the effect of public mass shootings on Americans' information-seeking behavior on gun policy. We begin by presenting a plot of the raw data of Google searches following one example event, the 2018 Stoneman Douglas High School shooting, to illustrate what a sample case looks like, before presenting RDiT coefficient plot estimates across all of our other outcomes and shootings. The plots in panel A present raw daily Google search data for "Gun Control" and "Gun Rights," which are rescaled to range between 0 and 100, together with a best-fit linear regression on both sides of the event cutpoint, for the two months before and after the Stoneman Douglas shooting. This figure illustrates the clear and substantial jump in internet search interest of 65 points (2.26 SDs) for "Gun Control" and 46 points (2.27 SDs) for "Gun Rights" following the shooting. These effects are large, being both statistically and substantively significant by any reasonable benchmark, and are indicative of a meaningful increase that lasts long after the shooting.
The plots in the middle of Fig. 2 display the RDiT estimates, scaled by the SD of each outcome, for each of the public mass shooting events we analyze on Google searches for gun policy positions (panel B), a "placebo" search term ("recycling") presumably unrelated to gun violence or mass shootings (panel C), and gun policy organizations (panels D and E). Panel B examines the effect of public mass shootings on internet search interest in gun control (circles) and gun rights (triangles). Because publicly available Google Trends data are rescaled depending on the date range of data collected, it is not possible to know the actual number of raw searches on any given day, so we rely on SD shifts to characterize the magnitude of the effect size (d) in political activity. As a benchmark for interpreting substantive significance, we note that many scholars have used the heuristic of three cutoffs that demarcate small (d = 0.2), medium (d = 0.5), and large (d = 0.8) effect sizes from one another (63). Each analyzed mass shooting, spanning from Sandy Hook in 2012 to Atlanta + Boulder in 2021, caused statistically significant and substantively large shifts, ranging from −1.19 ("Gun Control" searches following Indianapolis FedEx [2021]) to 3.59 ("Gun Control" searches following Aurora Movie Theater [2012]) SDs.
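Concretely, the SD scaling amounts to dividing each estimated jump by the standard deviation of the outcome. The sketch below uses a pre-event window for the SD, which is one reasonable choice rather than the paper's documented one.

```python
# Express an RDiT jump in SD units (Cohen's-d-style), using the SD of a
# pre-event window of the outcome series. The window length is an assumption.
import pandas as pd

def standardized_effect(series: pd.Series, event_date: str,
                        jump: float, window_days: int = 60) -> float:
    cutoff = pd.Timestamp(event_date)
    pre = series[(series.index >= cutoff - pd.Timedelta(days=window_days))
                 & (series.index < cutoff)]
    return jump / pre.std()
```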
We also present meta-analytic estimates of treatment effects in the final three rows of the coefficient plots in Fig. 2. We present meta estimates for the 10 shooting events listed at the top of Fig. 1 with the most media coverage ("Top 10"), the 20 high-publicity shooting events with above-median levels of media coverage ("Top 20"), and the 20 low-publicity shooting events that garnered the least (i.e., below-median) media coverage ("Bottom 20").j We include full plots of RDiT estimates for these shootings in the Appendix. Across the 10 shooting events with the most media coverage, we find an average increase in "Gun Control" searches of 2.18 SDs and "Gun Rights" searches of 1.86 SDs. Meta-analytic estimates for the top 20 shootings are two-thirds to three-quarters as large as those of the top 10. And, as expected, estimates for the bottom 20 low-publicity shooting events are a relatively precise 0. This highlights the expected moderating role that the media play in the effects we document. Again, the effects for public mass shootings with high media coverage are large, showing a massive increase in gun-related information-seeking following shootings. In stark contrast, panel C displays null RDiT estimates for the impact of public mass shootings on an unrelated search term ("Recycling"). These null results confer validity to the findings presented in panel B by illustrating that mass public shootings do not cause immediate shifts in public interest in an issue unrelated to gun violence or policy.
Fig. 2. Effect of public mass shootings on internet search behavior. A) Displays daily Google Trends search data for "Gun Control" before and after the Stoneman Douglas shooting. The remainder of the plots display RDiT treatment effect estimates with 95% CIs. B) Circles indicate "Gun Control" and triangles "Gun Rights" searches. C) Estimates are for "recycling" searches. D) Circles indicate "Everytown for Gun Safety" and triangles "Brady Campaign" searches. E) Circles indicate "Gun Owners of America" and triangles "NRA" searches. For the three bottom rows of B-E), "Top 10," "Top 20," and "Bottom 20" indicate meta-analytic estimates for shootings by rank of media coverage received. Point estimates that are colored gray in panels B-E indicate that the CI includes 0; point estimates in black are statistically significant. Missing estimates arise when there is no overlap between the time series outcome variable we measure and when a shooting occurred, or when there is insufficient data to estimate an effect. B-E) The units are changes in SDs.
Accompanying searches for gun policy positions, panels D and E demonstrate that, following high-publicity public mass shootings, Americans go beyond seeking information about gun policy positions by engaging in internet searches for leading gun control and gun rights advocacy organizations, such as Everytown for Gun Safety (panel D, circles), the Brady Campaign to Prevent Gun Violence (panel D, triangles), Gun Owners of America (panel E, circles), and the NRA (panel E, triangles). We find jumps in search activity for gun policy organizations ranging from −0.87 SDs (searches for "Everytown for Gun Safety" following the Charleston Church shooting) to 3.5 SDs (searches for "Everytown for Gun Safety" following Stoneman Douglas), with average increases (Top 10) of 1.42 and 1.55 SDs for gun control and gun rights organizations, respectively. By any standard, these effects are large.
Figure 3 displays the RDiT estimates for the impact of public mass shootings on an alternative indicator of information-seeking: watching a documentary about gun violence and firearms regulations. For the PBS documentary Gunned Down, which aired in 2015, we see that all but two of the high-publicity public mass shootings that occurred after the airing of the documentary caused a significant spike in streaming activity on their web interface. These effects, reported in terms of the number of raw streams of the documentary, are relatively smaller, ranging from 0 additional views (Gunned Down following Virginia Beach Municipal) to 2,337 additional views (Gunned Down following Orlando) relative to a median daily stream baseline of 43. We find a similar range of spikes in web streams for Gun Nation, which range from 0 additional streams following the Aurora Henry Pratt shooting to 2,049 additional streams following the Las Vegas shooting. The effect of the Atlanta + Boulder shootings on streams of NRA Under Fire, a documentary released in March 2020, is statistically significant but relatively small, at 47 additional streams, relative to a median daily stream baseline of 41. Last, we observe a small but statistically significant jump of 8 additional streams of The Gun Store on YouTube following the Atlanta + Boulder shootings, relative to virtually no streams (median value of 0) leading up to that shooting. While these effects may seem inconsequentially small (a meta-analytic effect of 196 streams on average), three caveats are in order: first, the meta-analytic effect relative to the base rate is large; second, streaming an educational documentary on an internet-connected device is a time-consuming, and thus "high-cost," activity relative to less costly activities like Google searches; third, we highlight that we have data only on certain web streams on proprietary digital streaming platforms or on YouTube; these metrics do not capture the consumption of the documentaries when they are aired on live television or on different platforms like Amazon Prime or Vimeo.
It is important to note that internet searches cannot be equated with political preference or endorsement, as users in favor of gun control could search for information about gun rights (i.e., counter-attitudinal searches). This said, the findings in Figs. 2 and 3 provide support for one firm conclusion: high-publicity public mass shootings over the past decade caused clear, consistent, and sizable surges in political information-seeking among the American public. This information-seeking activity encompasses multiple behaviors (searching the internet and streaming documentaries) and targets different types of political information (policy positions and advocacy organizations). Critically, we find that this information-seeking activity is not limited to one side of the policy debate on firearms in America. These figures also reveal that shootings garnering scant media attention (i.e., "Bottom 20") on average had little-to-no effect on these information-seeking behaviors.
Political discussion on Twitter
Figure 4 presents the results from our analysis of discourse about guns on Twitter. We present RDiT estimates for the effect of public mass shootings on Tweets using hashtags mentioning policy positions ("#guncontrol" and "#gunrights," circles and triangles in panel A), Tweets using gun policy advocacy organization hashtags ("#everytown" and "#NRA," circles and triangles in panel B), and Tweets with a hashtag unrelated to firearms or gun policy ("#recycling," panel C).
We consistently find that high-publicity public mass shootings cause large spikes in gun-related policy conversation mentioning "#guncontrol" or "#gunrights." The effects range from null to 3.81 SDs ("#guncontrol" following Atlanta + Boulder). Our meta-analytic estimate of the effects for the top-10 shootings is between 1.73 and 2.27 SD increases for "#guncontrol" and "#gunrights" tweets, respectively, or between n = 4,875 and 6,397 additional gun policy-related Tweets, on average, immediately following a high-media-attention shooting. Prominent public mass shootings also lead to large spikes in Twitter comments that mention specific gun policy organizations, with slightly smaller effects ranging from null up to 3.68 SDs, an average spike of between 1.20 and 2.09 SDs or an additional n = 3,381 to 5,889 Tweets. Importantly, we observe almost exclusively null results (e.g., a relatively precise null meta-analytic effect of −0.16 for top-10 shootings) in panel C for a Twitter topic that is unrelated to gun policy ("#recycling"). Finally, the bottom rows of Fig. 4 reveal that low-publicity public mass shootings had no detectable effect on discussion of gun policy on Twitter.
In sum, the results in Fig. 4 provide evidence that high-publicity public mass shootings over the past decade led to consistent and sizable spikes in social media discussion of gun policy positions and advocacy organizations. Given that this analysis is drawn from the entire universe of Tweets during the time period under study, it is important to note that these findings are comprehensive. However, while we have evidence that many of the Tweets mentioning "#guncontrol" or "#everytown" are accompanied by language calling for strengthening gun laws and Tweets mentioning "#gunrights" or "#NRA" are accompanied by language calling for gun rights (see Fig. A2 for examples), there are a nonnegligible number of Tweets that include both or express mixed sentiments. As a result, while these results are suggestive of increases in discussion on both sides of the policy debate, the firmest conclusion that can be drawn is that the American public consistently responds to high-profile public mass shootings with heightened online discussion of gun policy positions and advocacy organizations.
Purchases of political flags
This section investigates whether or not our findings for Twitter discourse extend to other means of publicly expressing one's views on guns, such as purchasing political flags for display. Our data on flag sales cover the time period when three public mass shootings occurred: two higher-publicity events (Gilroy + El Paso + Dayton and San Jose VTA) and one lower-publicity event (Oxford High). As shown in Fig. 5, we find that none of these events led to a statistically significant increase in flag purchases. The meta-analytic estimate across flags and shootings is statistically indiscernible from zero. In sum, when analyzing an outcome seemingly unique to the gun rights side of the debate, purchasing political flags for display with gun rights messages, we fail to uncover evidence that recent public mass shootings lead those presumably in favor of gun rights to purchase pro-gun flags as a means of expressing their views. To be clear, over 46,000 of these three popular gun rights flags were sold by ANLEY alone between January 2021 and January 2022, indicating that this is an activity gun rights supporters engage in. Our findings here simply suggest it may not be an act they engage in more in the wake of public mass shootings.
Fig. 4. Effect of public mass shootings on social media discussion. RDiT treatment effect estimates with 95% CIs. A) Circles indicate effects on "#guncontrol" and triangles on "#gunrights" tweets. B) Circles indicate effects on "#everytown" and triangles "#NRA" tweets. C) The placebo outcome is tweets including "#recycling." For the three bottom rows of panels A-C, "Top 10," "Top 20," and "Bottom 20" indicate meta-analytic estimates for shootings by rank of media coverage received. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant. Missing points arise when there is no overlap between the time series we measure and when the shootings occurred or we have insufficient data to estimate an effect.
Petition signing and donations
We now turn to the more unequivocally partisan political acts of signing a gun policy petition and donating to a gun policy PAC. Figure 6 displays the RDiT results for our analysis of petition signatures for major gun control and gun rights petitions. For the shooting events for which we have data on gun control petitions, we observe some very large spikes in signatures following shooting events: these spikes range between 0 and 3.59 SDs (following Gilroy + El Paso + Dayton). On the gun control petition side, the meta-analytic estimates in the bottom rows make it clear that signatures to prominent Change.org petitions significantly spiked following high-publicity shooting events (e.g., Stoneman Douglas) but experienced little change following low-publicity shooting events (e.g., Aurora Pratt). Turning to gun rights petitions in the right-side graph, we find inconsistent and mostly null effects of public mass shootings on signature activity. For example, signatures to "Patriot's Voice" significantly increased after the Orlando nightclub and San Jose VTA shootings but significantly decreased following the Indianapolis FedEx shooting. The meta-analytic estimates in the bottom row indicate substantively small and statistically insignificant effects, on average, of high- and low-publicity shootings on signatures to these gun rights petitions.
In Fig. 7, we display the RDiT treatment effect estimates for all relevant shootings on SD shifts in the number of donations (triangles) and donation amounts (circles) to the Giffords and NRA PACs. Our meta-analysis of the highest-profile shootings suggests surges, on average, in donation amounts but not in the number of donations to the Giffords PAC. Effects are also mixed for the NRA PAC donations. While we observe positive spikes in the number of donors in 9 of 15 shootings, they are small in magnitude, and we only find an increase in dollar amounts following 1 of the 15 shootings. On average, effects for the number and amount of donations to the NRA PAC are precisely null. In order to directly compare the weight of donations to gun control vs. gun rights PACs, we also calculate the difference in the number of raw donations to the NRA PAC compared to the Giffords PAC (#Giffords-#NRA) and again estimate treatment effects following each of the mass shootings for which we have data. In 8 of the 15 shootings, the RDiT estimates are indistinguishable from zero, while the Thousand Oaks Borderline Bar, San Bernardino, and Pittsburgh Synagogue shootings generated larger surges for the Giffords PAC (relative increases of between 0.26 and 1.33 SDs) and the Stoneman Douglas, Santa Fe High, Isla Vista, and Charleston Church shootings for the NRA PAC (relative increases of 0.47 to 0.59 SDs).
The average and countervailing effects on political activity
Is there an overall political tilt in the activity generated by public mass shootings? That is, when looking across shootings and outcomes, do we see roughly equivalent effects or do we observe greater effects for activity on a given side of the gun policy debate? To answer this question, we estimated meta-analytic effects using RDiT estimates of SD shifts following all shootings for which we had data for Google searches for "gun control," "gun rights," gun control organizations ("Brady Campaign" and "Everytown for Gun Safety"), and gun rights organizations ("Gun Owners of America" and "NRA"), all "#guncontrol," "#gunrights," "#everytown," and "#NRA" Tweets, all gun rights and gun control petitions, and then all donations (both dollar amounts and volume) to gun rights (NRA) and gun control (Giffords) PACs. We estimate the meta-analytic effects for the 10 shootings with the most media coverage "Top 10" (panel A), the top 20 shootings with above-median levels of media coverage "Top 20" (panel B), and the 20 shootings that garnered the least media coverage "Bottom 20" (panel C).
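Note j attributes the pooling to a generic inverse-variance random-effects regression in the R meta package; the snippet below is a Python sketch of the same DerSimonian-Laird estimator, offered for illustration rather than as the authors' code.

```python
# Generic inverse-variance random-effects meta-analysis (DerSimonian-Laird)
# of per-shooting RDiT estimates and their standard errors.
import numpy as np

def random_effects_meta(est, se):
    est, se = np.asarray(est, float), np.asarray(se, float)
    w = 1.0 / se**2                            # fixed-effect weights
    mu_fe = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - mu_fe) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)  # between-shooting variance
    w_re = 1.0 / (se**2 + tau2)
    mu_re = np.sum(w_re * est) / np.sum(w_re)
    return mu_re, np.sqrt(1.0 / np.sum(w_re))  # pooled effect and its SE

mu, se = random_effects_meta([1.2, 0.8, 2.1], [0.3, 0.4, 0.5])
```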
The results from this analysis are presented in panels A, B, and C in Fig. 8. Looking at just the top-10 shootings in panel A, we observe large changes in internet search and social media activity on both the "gun control" and "gun rights" sides of the equation. For petition signing, which is more unequivocally partisan, on average we observe large changes in the signing of gun control petitions but little change in the signing of gun rights petitions. Finally, with respect to donations to the PACs of gun policy advocacy organizations, we find that the average shooting in our sample elicits larger donations to a gun control organization but not a greater number of donations. This latter finding makes sense given that income constrains donation behavior and there is little reason to expect that public mass shootings trigger broad changes in disposable income. As such, it stands to reason that shootings appear to encourage those already able to give to donate more rather than expanding the number of donors in the gun policy arena. We fail to observe changes in the number or amount of donations for the average shooting in our sample for the gun rights organization we consider. Moving to panel B, the "Top 20" shootings, we find a similar pattern of effects with some minor differences: first, when focusing on the 20 shooting events receiving above-median news coverage, we observe a more pronounced pattern of shootings on average generating more tweets mentioning gun control than gun rights and generating more tweets mentioning the NRA than Everytown; second, average effects on petition signing and donations are smaller in magnitude. Concluding with panel C, the 20 shootings that garnered far less media coverage, we find null effects across all outcomes.
Fig. 7. The effect of public mass shootings on donations. RDiT treatment effect estimates with 95% CIs. A and B) Triangles are estimates for the number of donations and circles for donation amounts. For the three bottom rows of panels A and B, "Top 10," "Top 20," and "Bottom 20" indicate meta-analytic estimates for shootings by rank of media coverage received. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant. Missing points arise when there is no/insufficient overlap between the time series we measure (i.e., the dates the organization was registered with the FEC) and when the shootings occurred.
One might wonder whether the weaker average effect of high-publicity public mass shootings on gun rights Tweets, petition signatures, or donations derives from these activities being more a part of the political participation "repertoire" of liberals who favor gun control compared to conservatives who tend to oppose it. In other words, do these tests stack the deck by choosing acts that those in support of gun rights simply do not engage in? Survey data suggest otherwise, with several representative samples of American adults finding relatively small differentials between those favoring gun rights vs. gun control in self-reported rates of expression of opinions about gun policy on Twitter or signing of petitions on gun policy (11). Moreover, the same surveys find that those supporting gun rights are more likely than those supporting gun control to report donating money to a gun policy organization. Such findings render it unlikely that the differences uncovered in Fig. 8 derive from an underlying asymmetry in the types of acts engaged in by supporters vs. opponents of gun control.
Heterogeneity by proximity and race/ethnicity of victims
Previous research argues that the effect of public mass shootings may be conditioned by proximity to the location of shootings, e.g., (20), and the race/ethnicity of shooting victims (65). These works suggest that public engagement with gun policy will be greater among those residing closer to public mass shootings and when the victims of a shooting are mostly white. Appendix B presents the results from analyses exploring the effect of the shootings in our data by proximity, using in-state vs. out-of-state as the measure of proximity (Fig. A8), and by the percentage of shooting victims that were estimated to be non-Latino white (Fig. A11). The results in Appendix B reveal little discernible heterogeneity across these dimensions. First, for the outcomes in our data where geocodes were readily available (e.g., petitions and donations), we observe similar effects of shootings on engagement arising from the state where shootings occurred (i.e., "in-state") compared to engagement in all other states (i.e., "out-of-state"). Second, after retrieving a list of the full names of all victims for each shooting event in our data and estimating the race/ethnicity of each victim (procedure described in Appendix B), we were able to generate an estimate for each shooting event of the percentage of victims that were non-Latino white. As a quick aside, the amount of media coverage received by a shooting event is not correlated with the ethno-racial composition of its victims (Fig. A10); rather, media coverage is heavily positively correlated with the number of victims, which comports with previous research (35). We reestimated RDiTs by tercile of the percentage of non-Latino white victims and find little evidence that public engagement with gun policy systematically varies with the race/ethnicity of victims.
Conclusion
This study demonstrates that high-publicity public mass shootings cause large surges in myriad forms of public engagement with gun policy. These findings belie the standing wisdom that ordinary Americans do not engage in action oriented toward gun control following the occurrence of public mass shootings. Critically, our analysis finds that these upswings in engagement are countervailing: in many instances, we observe surges in activity on both sides of the policy debate; however, when averaging across shootings and outcome measures, contrary to popular claims about an "enthusiasm gap," we find the shootings-induced activity tilts toward gun control. Finally, our analysis reiterates the vital role of the media in shedding light on these tragic events, as shootings receiving relatively little media attention, on average, did not instigate public engagement with gun policy. In contrast, shootings receiving extensive media coverage generate large effects. If most Americans do not learn about a public mass shooting, there is little chance the shooting will spur widespread political activity. Following the public mass shooting at the Old National Bank in Louisville, KY, on 2023 April 10, a letter to the editor of the Los Angeles Times complained "we are becoming so inured to mass shootings that the one in Louisville made only page A-12 of the following day's print LA Times. If we want lawmakers to do something about this problem, we cannot bury the stories in the back of the paper as if we don't care."k While this article offers the most extensive analysis of public engagement with gun policy following public mass shootings to date, we see several directions for future research. First, future scholarship could analyze additional types of engagement, such as attending a march or rally or contacting an elected official. Second, publicly available FEC data on donations to major gun policy PACs are quite limited; thus, future research could build on our analysis of PAC donations by striving to obtain data on small donations made to a wider range of prominent gun policy organizations. Third, while our analysis focuses on public mass shootings that occurred over the past decade (2011 to 2021), future research could extend our analysis to shootings occurring prior to 2011 or those occurring over the past few years. Fourth, while prior work finds little direct effect of public mass shootings on electoral behavior (24,25), future research could explore whether the surges we observe in nonelectoral engagement with gun policy have downstream effects on electoral behavior. In other words, an open question for future research is whether public mass shootings indirectly heighten voter turnout or Democratic vote choice by first elevating forms of nonelectoral engagement (e.g., information-seeking, social media discussion, petition signing, or donating money), which subsequently alter turnout and vote choice. Fifth, while we find little evidence that geographic proximity to shootings or the race/ethnicity of victims condition their effects, future research could explore other possible moderators, such as the amount of prior exposure to gun violence or having school-aged children. Finally, scholars would do well to compare the downstream effects of increases in various types of public engagement with gun policy on the beliefs and actions of elected officials.
Notes
a This documentary is available at The Guardian's website (link). e We attempted to collect data for several other popular gun control and gun rights petitions on Change.org but were unable to retrieve contact information for the petition's creator. Change.org makes the name of petition creators public but does not provide accompanying contact information. Moreover, support staff at Change.org would not contact petition creators on our behalf. The data we obtained were due to locating contact information for petition creators through internet searches using their name. f We do not include analyses for The Brady Campaign to Prevent Gun Violence PAC, which stopped raising and spending money around 2010, or Everytown for Gun Safety Action Fund PAC, which was established relatively recently and has not yet raised substantial amounts of money.
g Some scholars have noted that RDiTs are conceptually similar to interrupted time series (61). h https://www.gunviolencearchive.org/methodology i For these event chains, we treat the day of the last shooting in the chain (Kalamazoo, Dayton, and Boulder, respectively) as the "treatment" date. This decision yields RDiT estimates that are likely biased downward given the spike in activity in the "control" period that may have been activated by the previous shootings, so we consider any treatment effects for these shootings to be conservative estimates of the effect of these shootings on each outcome. j Generic inverse variance meta-analyses of RDiT effects (64) were estimated via random effects regression using the meta package in R. k https://www.latimes.com/opinion/letters-to-the-editor/story/2023-04-12/run-hide-fight-louisville-mass-shootings
Fig. 1. National media attention to public mass shootings, 2011 to 2021. Bars indicate the cumulative proportion of media coverage collected by Media Cloud (top-50 news sources in the United States) that mentioned the term "mass shooting" for the week following each shooting. Vertical dashed line indicates median cumulative coverage. Source: Media Cloud.
Fig. 3. Effect of public mass shootings on documentary streams. RDiT point estimates with 95% CIs. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant. Missing estimates arise when there is no overlap between the time series outcome variable we measure and when a shooting occurred, or when there is insufficient data to estimate an effect. For the three bottom rows of the figure, "Top 10," "Top 20," and "Bottom 20" indicate meta-analytic estimates for shootings by rank of media coverage received.
Fig. 5. Gun rights flag sales. RDiT treatment effect estimates with 95% CIs for flag sales of three different pro-gun flags. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant.
Fig. 6. The effect of public mass shootings on petition signing. RDiT treatment effect estimates with 95% CIs. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant. Missing points arise when there is no/insufficient overlap between the time series we measure and when the shootings occurred. For the three bottom rows of the figure, "Top 10," "Top 20," and "Bottom 20" indicate meta-analytic estimates for shootings by rank of media coverage received.
Fig. 8. Meta-analysis of effects across outcomes by national media attention. Pooled RDiT treatment effect estimates with 95% CIs. Gray point estimates indicate that the CI includes 0; black point estimates are statistically significant. "Top 10" (panel A), "Top 20" (panel B), and "Bottom 20" (panel C) refer to rank of shooting based on media coverage received.
b https://financesonline.com/twitter-statistics/ c Pew Research Center's American Trends Panel Poll, Question 24, 31118576.00025, Ipsos (Cornell University, Ithaca, NY: Roper Center for Public Opinion Research). d Survey Center on American Life, 2020 American National Social Network Survey, Question 26, National Opinion Research Center and Pew Research Center for the People & the Press, 2020 American Trends Panel Wave 78, Question 4.
Table 1. Data sources.
Political act | Data sources | Date range | Examples | Shooting events
[rows for the information-seeking and online-discussion indicators were not recoverable from the extracted text]
Purchasing political flags | Amazon.com (ANLEY INC.) | 2021 Jan. 1-2022 Jan. 31 | "Second Amendment," "Liberty or Death," "Come and Take It" | 3
Influencing decision-making:
Petition signing | Change.org, Patriot Voices | 2013 Mar. 6-2023 May 12 | "Pass Common Sense Gun Control," "Stop the sale of guns at Walmart Stores," "Physicians Demand Stricter Gun Control," "Maryland Carry Laws Prevent Citizens From Legally Protecting Themselves," "Defend Second Amendment" | 32
Donating to PACs | FEC | 2013 Jan. 7-2020 Dec. 31 | Giffords PAC, NRA Victory Fund | 27 | 2023-12-05T17:02:25.797Z | 2023-11-29T00:00:00.000 | {
"year": 2023,
"sha1": "8c789d9cce0860e17bc43feb0d2e8f9353f7cbdd",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/pnasnexus/advance-article-pdf/doi/10.1093/pnasnexus/pgad407/53900638/pgad407.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a076b59498f0fe209bb9de04868c316964d217f7",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
238226949 | pes2o/s2orc | v3-fos-license | Total Collisions in the N-Body Shape Space
We discuss the total collision singularities of the gravitational N-body problem on shape space. Shape space is the relational configuration space of the system obtained by quotienting ordinary configuration space with respect to the similarity group of total translations, rotations, and scalings. For the zero-energy gravitating N-body system, the dynamics on shape space can be constructed explicitly and the points of total collision, which are the points of central configuration and zero shape momenta, can be analyzed in detail. It turns out that, even on shape space where scale is not part of the description, the equations of motion diverge at (and only at) the points of total collision. We construct and study the stratified total-collision manifold and show that, at the points of total collision on shape space, the singularity is essential. There is, thus, no way to evolve the solutions through these points. This mirrors closely the big bang singularity of general relativity, where the homogeneous-but-not-isotropic cosmological model of Bianchi IX shows an essential singularity at the big bang. A simple modification of the general-relativistic model (the addition of a stiff matter field) changes the system into one whose shape-dynamical description allows for a deterministic evolution through the singularity. We suspect that, similarly, some modification of the dynamics would be required in order to regularize the total collision singularity of the N-body model.
Introduction
Ever since Newton published the Principia in 1687, the gravitational N-body problem and, with it, the total collision singularity have been an object of intensive study among mathematicians, starting from Euler and Lagrange, who found special solutions to the Newtonian 3-body problem, up to Poincaré, who famously received the prize of the King of Sweden for his proposal of a general solution to it, a work he had to withdraw due to errors (still, it was a brilliant work and, in a revised form, became the foundation of chaos theory).
It was then Sundman who solved the 3-body problem in 1907 [1,2]. Sundman made use of the fact, which he proved, that total collisions can occur only if the total angular momentum L of the system is zero (L = 0). By applying a convenient regularization procedure (change of variables) and using the fact that, for N = 3, all L ≠ 0 solutions can be bounded away from the triple collision, Sundman was able to provide a general L ≠ 0 solution to the 3-body problem in the form of a convergent infinite power series.
The problem of total collisions in the 3-body model was the subject of a number of studies [3,4,5,6], which concluded that total collisions cannot be regularized unless the masses take some exceptional values. Still, the problem of N-body collisions remained untouched, and while there exists a proof of the analytic continuation of solutions through binary collisions, nothing of that kind exists for N ≥ 3 (cf. Saari [7]).
In this paper, we discuss total collisions on shape space. Shape space is the relational configuration space of the system which is obtained from ordinary configuration space by quotienting with respect to the similarity group of translations, rotations, and scalings (dilations). It has been shown in [8] (see also [9]) that there exists a unique description of the E ≥ 0 Newtonian N-body system on shape space (where E refers to the total energy of the system). Now, since scale is no longer part of the description on shape space, one might hope to pass the singularity of a total collision by (uniquely) evolving the shape degrees of freedom through that point. If that were possible, one could connect two total-collision solutions from absolute space, one with a collision in its past, one with a collision in its future, to form one solution passing through the point of N-body collision (the Big Bang of the E ≥ 0 Newtonian universe).
Unfortunately, this is not the case. Although there exists a unique description of total collisions on shape space (which is interesting in itself, since this description is purely shape-dynamical, i.e., free of scale), the shape dynamics turns out to be singular precisely at (and only at) these points. Even more, one finds that the singularity is in general essential, unless the ratios between the particle masses take special values.
In this paper, we explicitly analyze the way in which solutions run into the singularity on shape space and construct the stratified manifold of total-collision solutions. This will constitute Section 3, the main section of this paper. Section 2 will contain an overview of Chazy's noteworthy 1918 result [10] on the asymptotic behavior of solutions at the points of total collision (a result which has been rediscovered much later by Saari [11] and which we use to identify total collisions on shape space). Section 4 will finally compare the result we have obtained for the N -body system to the general-relativistic Bianchi IX model, where the shape-dynamical description allows for a continuation of solutions through the point of zero volume (the Big Bang).
Chazy's 1918 Proof of the Total Collision Theorem
Consider the gravitational N-body problem, involving N point particles of mass $m_a$, with coordinates $\mathbf{r}_a \in \mathbb{R}^3$ and momenta $\mathbf{p}_a \in \mathbb{R}^3$, $a = 1, \dots, N$, and Hamiltonian
$$H = \sum_{a=1}^{N} \frac{|\mathbf{p}_a|^2}{2 m_a} - \sum_{a<b} \frac{m_a m_b}{|\mathbf{r}_a - \mathbf{r}_b|} \,. \quad (1)$$
Chazy [10] was the first to prove the following theorem:

Theorem 2.1 A total collision ($\mathbf{r}_a = \mathbf{r}_b$ $\forall\, a, b$) can only happen if the total angular momentum $\mathbf{L} = \sum_{a=1}^{N} \mathbf{r}_a \times \mathbf{p}_a$ is zero, and at a central configuration, that is, a configuration such that
$$\frac{\partial U}{\partial \mathbf{r}_a} \propto m_a \left( \mathbf{r}_a - \mathbf{r}_{cm} \right) \,, \quad (2)$$
with the same proportionality factor for every particle. Another useful characterization of central configurations is as the stationary points of the complexity function, also known as (minus) the shape potential or the normalized Newton potential,
$$C_S \propto -\, I_{cm}^{1/2}\, U \,, \qquad I_{cm} = \sum_{a=1}^{N} m_a\, |\mathbf{r}_a - \mathbf{r}_{cm}|^2 \,. \quad (3)$$
It is easy to check that Equation (2) follows from $\partial C_S / \partial \mathbf{r}_a = 0$. We will sketch here Chazy's proof of the theorem. It uses three fundamental equalities, valid for any homogeneous N-body potential $U = U(\mathbf{r}_a)$:

1. Conservation of energy. The following quantity is a constant of motion:
$$E = T + U \,, \quad (4)$$
where $T = \sum_{a=1}^{N} \frac{|\mathbf{p}_a|^2}{2 m_a}$ is the total kinetic energy of the system.
2. Lagrange-Jacobi relation. A first version of this equation was given by Lagrange [12]. If the potential is homogeneous of degree k, i.e., $U(\alpha\, \mathbf{r}_a) = \alpha^k\, U(\mathbf{r}_a)$ for any real positive constant α, then
$$\ddot{I}_{cm} = 4E - 2(k+2)\, U \,. \quad (5)$$
This identity can be proved using Euler's homogeneous function theorem, which states that $\sum_{a=1}^{N} \mathbf{r}_a \cdot \frac{\partial U}{\partial \mathbf{r}_a} = k\, U$. In the case of the Newtonian potential k = -1, and this equality turns into
$$\ddot{I}_{cm} = 4E - 2U \,. \quad (6)$$
Notice that, in the case of Newton's potential, since $\ddot{I}_{cm} = 4E - 2U$ and $U < 0$, the Lagrange-Jacobi relation implies that $\ddot{I}_{cm} > 0$ if E ≥ 0. So the moment of inertia is either a U-shaped function, going through a minimum and growing monotonically in the two time directions away from it, or it has a zero at a certain instant t = 0 and is defined only on one side of t = 0, growing monotonically away from it.
It follows directly from the Lagrange-Jacobi equation that, for E ≥ 0, a total collision can only occur at the minimum of the $I_{cm}$-curve, where $\dot{I}_{cm} = 0$. Let us, at this point, introduce the notion of the dilatational momentum $D = \sum_{a=1}^{N} \mathbf{r}_a \cdot \mathbf{p}_a$. We find that $D = \frac{1}{2} \dot{I}_{cm}$ and, thus, a total collision can only occur at D = 0.
3. Chazy's kinetic energy decomposition theorem. One can write
$$T = T_{cm} + \frac{D^2}{2 I_{cm}} + \frac{|\mathbf{L}|^2}{2 I_{cm}} + T_S \,, \quad (7)$$
where $T_S$, the shape kinetic energy, is a sum of squares and therefore positive. The above relation was rediscovered much later by Saari [11] as a consequence of his velocity decomposition theorem (which states that the center-of-mass motion, dilatation, rotation, and shape components of the velocity 3N-vector are orthogonal). The above relation is also at the basis of Sundman's inequality $2T \geq \frac{1}{4}\, \dot{I}_{cm}^2 / I_{cm} + |\mathbf{L}|^2 / I_{cm}$.
Combining Equations (4) and (6) we get $\ddot{I}_{cm} = 2E + 2T$ and, using (7), we can remove the total kinetic energy and get
$$\ddot{I}_{cm} - \frac{\dot{I}_{cm}^2}{4 I_{cm}} = 2E + 2 T_{cm} + \frac{|\mathbf{L}|^2}{I_{cm}} + 2 T_S \,.$$
The left-hand side can be rewritten as $I_{cm}^{1/4}\, \frac{d}{dt}\!\left( \dot{I}_{cm}\, I_{cm}^{-1/4} \right)$. Using the fact that E, $T_{cm}$, and $\mathbf{L}$ are conserved quantities, we can multiply by $\dot{I}_{cm}\, I_{cm}^{-1/2}$ and integrate the above equation in dt from $t_0$ to t:
$$\frac{\dot{I}_{cm}^2}{2 \sqrt{I_{cm}}} + \frac{2 |\mathbf{L}|^2}{\sqrt{I_{cm}}} - 2 \int_{t_0}^{t} T_S\, \frac{\dot{I}_{cm}}{\sqrt{I_{cm}}}\, dt' = 4 \left( E + T_{cm} \right) \sqrt{I_{cm}} + \mathrm{const.}$$
Now, suppose that $\dot{I}_{cm} \leq 0$ over the whole interval of integration (by what was said above, in the case we are interested in, $I_{cm}$ goes to zero monotonically and is not defined past it); then, since $T_S > 0$, the integral term is positive or zero and, because $\mathbf{L}$ is conserved, the term $2 |\mathbf{L}|^2 / \sqrt{I_{cm}}$ either diverges as $I_{cm} \to 0$, or it stays zero the whole time, if the total angular momentum is zero. The term $4 (E + T_{cm})\, I_{cm}^{1/2}$ vanishes as $I_{cm} \to 0$, and the integration constant stays constant. Therefore, we have an equation with the structure:
$$\left( \text{positive-definite terms} \right) + \frac{2 |\mathbf{L}|^2}{\sqrt{I_{cm}}} = f(t) \,,$$
where the left-hand side is positive-definite, and f(t) tends to a finite constant as $I_{cm} \to 0$. Therefore, the only way that this identity can be preserved all the way to an instant in which $I_{cm}$ vanishes is that the total angular momentum has to be zero. Then we are left with the sum of two positive quantities,
$$\frac{\dot{I}_{cm}^2}{2 \sqrt{I_{cm}}} - 2 \int_{t_0}^{t} T_S\, \frac{\dot{I}_{cm}}{\sqrt{I_{cm}}}\, dt' \,,$$
which is equal to a function that remains finite when $I_{cm} \to 0$. Each of these quantities then has to admit a finite limit at a total collision. Let us focus on the first of those two quantities. Its square root, $\dot{I}_{cm}\, I_{cm}^{-1/4}$, will admit a finite limit too. Calling this limit $\lim_{t \to 0} \dot{I}_{cm}\, I_{cm}^{-1/4} = -\kappa$, where t = 0 is the time of total collision, we can then write
$$\dot{I}_{cm} = I_{cm}^{1/4} \left( -\kappa + \delta(t) \right) \,,$$
where $\delta(t) \to 0$ as $t \to 0$. We conclude that, if a total collision happens at t = 0, the quantity
$$J = \frac{\dot{I}_{cm}^2}{2 \sqrt{I_{cm}}} \quad (15)$$
admits a finite limit as $t \to 0$.
Consider now the transformation to rescaled variables $\mathbf{s}_a$ given in Equation (17). The new variables are subject to the equations of motion (18), where $V_{\mathrm{New}}(\mathbf{s}_b)$ is Newton's potential, with $\mathbf{r}_a$ replaced by $\mathbf{s}_a$. These equations can be rewritten in an autonomous form, by reparametrizing time with a logarithm, $u = -\log t$, which goes to $+\infty$ at the total collision t = 0; this yields the system (23), in which $f'(u) = \frac{df(u)}{du}$. Consider now the quantity J defined in (15). We established already that, in the $t \to 0$ ($u \to +\infty$) limit, J tends to a constant (which, by definition, cannot be negative in the time interval of interest). Now we shall prove that this constant cannot be zero. Consider first the u-derivative of J: we can prove that it vanishes at the total collision. Moreover, if J tended to zero, the equations of motion would force a growth of the order of $\int_{u_1}^{u} du'$, which tends to infinity as $u \to +\infty$, making it impossible for J to tend to a finite value at infinity. Therefore J cannot go to zero at the total collision. This proves that J tends to a strictly positive finite value at the total collision. Now consider the third of Equations (23). We can integrate it with respect to u over an interval beginning at $u_0$: since $J'(u) \to 0$ as $u \to \infty$, and J(u) tends to a finite constant, the integral $\int_{u_0}^{\infty} S\, du$ is finite. However, the integrand S is equal to a sum of squares, and Chazy can now prove that the derivatives $\frac{ds^i_a}{du}$ all go to zero at infinity, using the fact that $\int_{u_0}^{\infty} S\, du$ is finite and that the logarithmic derivative is bounded. If the $\frac{ds^i_a}{du}$ all go to zero as $u \to \infty$, then the first of Equations (23) implies that $V_{\mathrm{New}}(\mathbf{s}_a)$ attains a finite limit there. Therefore, the partial derivatives (of all orders) of $V_{\mathrm{New}}(\mathbf{s}_a)$ with respect to $\mathbf{s}_a$ are bounded, and so are, from Equation (18), all accelerations $\frac{d^2 s^i_a}{du^2}$. Differentiating Equation (18) with respect to u, we find that the third derivatives $\frac{d^3 s^i_a}{du^3}$ are bounded, too. Chazy now quotes a theorem by Hadamard, stating that "when a function goes to a finite limit at infinity, and its second derivative is bounded, then its first derivative vanishes at infinity". Applied to $\frac{ds^i_a}{du}$, this theorem implies that $\frac{d^2 s^i_a}{du^2} \to 0$ as $u \to \infty$ and, since the first derivatives vanish as well, the equations of motion (18) imply that, asymptotically,
$$\frac{\partial V_{\mathrm{New}}}{\partial \mathbf{s}_a} \propto m_a\, \mathbf{s}_a \,.$$
This is identical to the central-configuration condition (2), with proportionality factor $\frac{9}{2}\, t^{2/3}$.
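A minimal numerical sketch of the central-configuration condition (2) (our own illustration, assuming units G = 1 and equal masses; it is not taken from the works cited here): the equal-mass equilateral triangle satisfies Equation (2) with a single proportionality factor.

```python
import numpy as np

# Check condition (2): grad_a U is parallel to m_a (r_a - r_cm),
# with one common proportionality factor for all particles.
m = np.array([1.0, 1.0, 1.0])
r = np.array([[1.0, 0.0], [-0.5, np.sqrt(3) / 2], [-0.5, -np.sqrt(3) / 2]])

def grad_U(r, m):
    """Gradient of the Newtonian potential U = -sum_{a<b} m_a m_b / |r_a - r_b|."""
    g = np.zeros_like(r)
    for a in range(len(m)):
        for b in range(len(m)):
            if a != b:
                d = r[a] - r[b]
                g[a] += m[a] * m[b] * d / np.linalg.norm(d) ** 3
    return g

r_cm = (m[:, None] * r).sum(axis=0) / m.sum()
lhs = grad_U(r, m)
rhs = m[:, None] * (r - r_cm)
lam = lhs[0, 0] / rhs[0, 0]          # candidate proportionality factor
print(np.allclose(lhs, lam * rhs))   # True: a central configuration
```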
Phase Space Reduction of the Planar Three-Body Problem
Here we recount the elimination, from the 3-body problem, of the extrinsic degrees of freedom, i.e., those that have to do with the position and orientation of the system in absolute space.
We will focus on the planar case, that is, when the total angular momentum is orthogonal to the plane of the three particles, which means that the particles never leave that plane and the treatment is simplified. The zero angular momentum case, which we are ultimately interested in, can be seen as a particular case of the planar one. Our treatment will follow the one of [13].
The extended phase space of the Newtonian three-body problem is $\mathbb{R}^{18}$, with coordinates $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3 \in \mathbb{R}^3$ for the particle positions, and $\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3 \in \mathbb{R}^3$ for the momenta. It is well known that if the angular momentum $\mathbf{L} = \sum_{a=1}^{3} \mathbf{r}_a \times \mathbf{p}_a$ is orthogonal to the plane identified by the three particles, $\mathbf{L} \times \left( (\mathbf{r}_1 - \mathbf{r}_3) \times (\mathbf{r}_2 - \mathbf{r}_3) \right) = 0$, then the three particles never leave that plane during the evolution.

Let us assume, from now on, that the problem is planar. We can then take the positions and momenta to be two-component vectors $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3 \in \mathbb{R}^2$. The degrees of freedom are six, but three are gauge, corresponding to the translations and rotations on the plane of the motion. We have to restrict to the constraint hypersurface $\mathbf{P} = 0$, $L_\perp = 0$, where $L_\perp$ is the remaining non-zero component of the angular momentum, and then we have to quotient by the transformations generated by these constraints. It turns out that in this case we can take the 'royal road' of explicitly identifying a sufficient number of gauge-invariant degrees of freedom (observables), and perform a coordinate transformation in phase space that separates them from the gauge degrees of freedom, making them orthogonal coordinates.
To deal with translations, we define the mass-weighted Jacobi coordinates $\boldsymbol{\rho}_a = M_{ab}\, \mathbf{r}_b$, with M an invertible matrix. The transformation to them is linear and invertible so, looking at the symplectic potential, it appears obvious that the momenta $\boldsymbol{\kappa}_a$ conjugate to the $\boldsymbol{\rho}_a$ are related to the $\mathbf{p}_a$ by the transpose of the inverse of the matrix M, $\boldsymbol{\kappa}_a = \big[ (M^{-1})^T \big]_{ab}\, \mathbf{p}_b$ (notice that M is not symmetric). The inverse transformation follows at once (the transpose and the inverse of an invertible matrix commute). Note that the inverse matrix has a constant column: it is the column of $\boldsymbol{\kappa}_3$, which is, therefore, proportional to the total momentum and decouples from the problem. The coordinates $\boldsymbol{\rho}_3$ are the coordinates of the center of mass, which decouple too; the other two momenta are $\boldsymbol{\kappa}_1$ and $\boldsymbol{\kappa}_2$. As we said, the transformation to Jacobi coordinates and momenta is canonical and, therefore, it leaves the Poisson brackets invariant. The kinetic term is diagonal in the momenta $\boldsymbol{\kappa}_a$, as is the moment of inertia,
$$I_{cm} = \sum_{a=1}^{2} |\boldsymbol{\rho}_a|^2$$
(notice how the sum runs from a = 1 to 2, because $I_{cm}$ does not depend on the coordinates of the center of mass $\boldsymbol{\rho}_3$); the inertia tensor also takes a particularly simple form. We are left with four coordinates $\boldsymbol{\rho}_1, \boldsymbol{\rho}_2$ and momenta $\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2$, and a single angular momentum component (the one perpendicular to the plane of the triangle),
$$L_\perp = \boldsymbol{\rho}_1 \times \boldsymbol{\kappa}_1 + \boldsymbol{\rho}_2 \times \boldsymbol{\kappa}_2 \,,$$
where by the vector product between two 2-dimensional vectors we understand the scalar $\mathbf{a} \times \mathbf{b} = a_x b_y - a_y b_x$. The coordinates
$$w_1 = \tfrac{1}{2} \left( |\boldsymbol{\rho}_1|^2 - |\boldsymbol{\rho}_2|^2 \right) , \qquad w_2 = \boldsymbol{\rho}_1 \cdot \boldsymbol{\rho}_2 \,, \qquad w_3 = \boldsymbol{\rho}_1 \times \boldsymbol{\rho}_2 \quad (44)$$
are invariant under the remaining rotational symmetry and, therefore, give a complete coordinate system on the reduced configuration space. Notice that $w_3$ changes sign under a planar reflection (changing the sign of one of the coordinates, say x, of both $\boldsymbol{\rho}_1$ and $\boldsymbol{\rho}_2$), while $w_1$ and $w_2$ remain invariant; therefore, the map $w_3 \to -w_3$ relates triangles conjugate under mirror transformations. This also has the consequence that the $w_3 = 0$ plane contains only collinear configurations (whose mirror image is identical to the original, modulo a planar rotation). This has nothing to do with 3D reflections (obtained by changing the sign of all components of every Euclidean vector): in fact, triangles are invariant under such parity transformations, because their parity conjugate is related to the original by a non-planar rotation.
Figure 1: The shape sphere of the equal-mass 3-body problem. Every point on the sphere (defined as a constant-$|\mathbf{w}|$ surface) is a triangle. Points at the same longitude with opposite latitudes correspond to mirror-conjugated triangles. At the poles (the intersections with the $w_3$ axis) we have the equilateral triangles, while on the equator (the red circle, $w_3 = 0$) we have the collinear configurations. Among them, there are six special ones: three binary collisions (red dots, one of which is on the $w_1$ axis) and three Euler configurations (white dots), in which the gravitational force acting on each particle points towards the center of mass and has a magnitude such that, if the system is prepared at rest at one of these configurations, it will fall homothetically (without changing its shape) to a total collision at the centre of mass. The same thing happens at the equilateral triangle (for all values of the masses, as Lagrange showed). Notice that the Euler configurations and binary collisions are on the equator for all values of the three masses, but their relative positions on the equator depend on the masses. The equilateral triangles are at the poles only in the equal-mass case.

The Euclidean norm of the 3D vector $\mathbf{w} = (w_1, w_2, w_3)$ is proportional to (one quarter of) the square of the moment of inertia, $|\mathbf{w}|^2 = \frac{1}{4}\, I_{cm}^2$,
so the angular coordinates in the three-space $(w_1, w_2, w_3)$ coordinatize shape space, which has the topology of a sphere [14]. We call it the shape sphere and, in Figure 1, we describe its salient features.
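A small sketch of this construction (our own code and conventions, assuming the mass-weighted Jacobi vectors are already at hand): it maps two planar Jacobi vectors to the coordinates $\mathbf{w}$ of Equation (44) and then to the scale and the two shape-sphere angles.

```python
import numpy as np

def shape_coordinates(rho1, rho2):
    w = np.array([
        0.5 * (rho1 @ rho1 - rho2 @ rho2),      # w1
        rho1 @ rho2,                             # w2
        rho1[0] * rho2[1] - rho1[1] * rho2[0],   # w3 (planar cross product)
    ])
    r = np.linalg.norm(w)          # the scale, |w| = I_cm / 2
    psi = np.arcsin(w[2] / r)      # latitude (collinear shapes: psi = 0)
    phi = np.arctan2(w[1], w[0])   # longitude
    return w, r, psi, phi

# A binary collision (rho1 = 0) sits on the collinear equator, on the w1 axis:
w, r, psi, phi = shape_coordinates(np.zeros(2), np.array([1.0, 0.0]))
print(np.round(w, 3), psi)   # [-0.5  0.  0.]  0.0
```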
The norms of the original Jacobi coordinate vectors can be written as $|\boldsymbol{\rho}_1|^2 = |\mathbf{w}| + w_1$ and $|\boldsymbol{\rho}_2|^2 = |\mathbf{w}| - w_1$ and, therefore, the full vectors are specified by Equation (47), where $\theta = \frac{1}{2} \left[ \arctan(\rho^y_1 / \rho^x_1) + \arctan(\rho^y_2 / \rho^x_2) \right]$ is an overall orientation angle, which is not rotation-invariant and, therefore, is not fixed by the specification of the coordinates $w_1, w_2, w_3$, and $\delta = \arctan \frac{w_3}{w_2}$ is the angle between $\boldsymbol{\rho}_1$ and $\boldsymbol{\rho}_2$. We now want to find the momenta conjugate to $\mathbf{w}$. To do so, we consider the symplectic potential; if we replace the $\boldsymbol{\rho}_a$ with their expressions in terms of $\mathbf{w}$ from Equation (47), we obtain the new momenta $\mathbf{z} = (z_1, z_2, z_3)$, defined in Equation (49), so we now have a complete canonical transformation from the coordinates $(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3; \mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3)$ to $(\mathbf{w}, \theta; \mathbf{z}, L_\perp)$, together with the decoupled center-of-mass degrees of freedom. The Poisson brackets in these coordinates are canonical, as they should be. In the new coordinates, the kinetic energy decomposes, and Newton's potential takes the form of Equation (53), where the $\phi_{ab}$ are the longitudes on the shape sphere of the two-body collisions between particles a and b. These can be found by using Equation (44):
• If $\mathbf{r}_1 = \mathbf{r}_2$, then $\boldsymbol{\rho}_1 \propto \mathbf{r}_2 - \mathbf{r}_1 = 0$ and, therefore, $w_2 = \boldsymbol{\rho}_1 \cdot \boldsymbol{\rho}_2 = 0$ and $w_3 = \boldsymbol{\rho}_1 \times \boldsymbol{\rho}_2 = 0$: we are on the equator and on the axis 1.
• If $\mathbf{r}_2 = \mathbf{r}_3$, then $\boldsymbol{\rho}_2$ becomes proportional to $\boldsymbol{\rho}_1$ and, therefore, $w_3 \propto \boldsymbol{\rho}_1 \times \boldsymbol{\rho}_2 = 0$, and we are on the equator. The 1 and 2 coordinates take values fixed by the masses, and the corresponding longitude $\phi_{23}$ follows from them.
• If $\mathbf{r}_1 = \mathbf{r}_3$, the same reasoning as above applies to show that we are on the equator, and the longitude $\phi_{13}$ is then obtained in the same way.
If we call φ the azimuthal and ψ the polar angle on the shape sphere, then, on the constraint surface $\boldsymbol{\kappa}_3 = 0$, the Hamiltonian takes the form of Equation (58), where $C_S(\psi, \phi)$, defined in Equation (59), is the 3-body "complexity function" (according to the nomenclature used in [8,15,13,16]). Finally, a short calculation reveals that the dilatational momentum takes basically the same form in the new coordinates. We now want to separate the scale and shape degrees of freedom. Let us use $r = |\mathbf{w}|$ as our scale, and the angles ψ and φ as our shape coordinates. The symplectic potential now takes the form
$$\Theta = p_r\, dr + p_\phi\, d\phi + p_\psi\, d\psi + L_\perp\, d\theta \,,$$
where $p_r = 2r \left( z_1 \cos\psi \cos\phi + z_2 \cos\psi \sin\phi + z_3 \sin\psi \right)$ and the remaining momenta are defined analogously; these relations can be inverted, which puts the Hamiltonian in the form of Equation (63), where the kinetic term K (Equation (65)) is a quadratic form in the momenta $p_\psi$, $p_\phi$ and $L_\perp$, positive-definite for any value of ψ and φ. In the zero angular momentum case $L_\perp = 0$, the equations of motion follow, both in Hamiltonian and in Lagrangian form. Finally, in the new coordinates, the dilatational momentum takes the simple form $D = r\, p_r$.
Total Collisions in the Zero-Energy 3-Body Problem
In the previous subsection we described the phase-space reduction of the 3-body problem (in the planar case, i.e., $\mathbf{L}$ orthogonal to the plane of the three bodies) to shape degrees of freedom plus scale. We ended up with a spherical shape space coordinatized by two angles, $\psi \in (-\pi/2, \pi/2)$ and $\phi \in (0, 2\pi)$, plus a global orientation angle $\theta \in (0, 2\pi)$ and the scale $r = |\mathbf{w}| = \frac{1}{2} I_{cm}$, as well as their four conjugate momenta $p_\phi$, $p_\psi$, $p_r$, and $L_\perp$, with canonical symplectic structure. If we specialize to the zero angular momentum case $L_\perp = 0$ (the only case we are interested in if we want to study total collisions), the coordinate θ drops out of the problem too, because it is cyclic, and we are left with two shape-space coordinates plus one scale, and their conjugate momenta. Conservation of energy is expressed by the following constraint equation:
$$H = \frac{1}{2}\, p_r^2 + \frac{K}{r^2} - \frac{C_S(\psi, \phi)}{r} = E \,, \quad (69)$$
where E is a constant (the total energy of the system), K is the shape kinetic energy, written in Equation (65), and $C_S(\psi, \phi)$ is what we have been calling (see [8,15,13,16]) the complexity function, as defined in Equation (59). $C_S(\psi, \phi)$ is positive-definite, and we will study the vicinity of one of its stationary points, of coordinates $(\phi_0, \psi_0)$. The equations of motion in Newtonian time for the scale degree of freedom prove that the dilatational momentum $D = r\, p_r$ is monotonic. Since D is monotonic, we can use it as an internal time parameter τ, with τ = D.
Note that, already at this point, the Hamiltonian constraint, together with the fact that τ → 0 at total collisions (which follows from the Lagrange-Jacobi equation, see above), implies that total collisions can happen only if the angular momentum and the shape momenta are all zero. In fact, multiplying (69) by $r^2$, we obtain:
$$\frac{\tau^2}{2} + K - r\, C_S(\psi, \phi) - E\, r^2 = 0 \,,$$
which, in the limit r → 0 and τ → 0, implies K → 0, and K is a positive-definite quadratic form in $p_\psi$, $p_\phi$, and $L_\perp$ (65). As mentioned above, given that τ is monotonic, we can use it as an internal time parameter. Now the evolution with respect to τ, in the zero-energy case E = 0, is described by the shape Hamiltonian $H_S$, the canonical conjugate of τ = D, expressed in terms of the shape variables by means of the Hamiltonian constraint H = 0 (we obtain a reduced Hamiltonian dynamics on shape space precisely because D, our new internal time parameter, is the dilatational momentum, i.e., the generator of scalings, cf. [8]). To determine $H_S$, we thus demand $\{H_S, D\}\big|_{H=0} \overset{!}{=} 1$, so that $H_S$ is the logarithm of the solution of H = 0 with respect to r. With $p_r$ replaced by τ/r, it is:
$$H_S = \log \frac{\tau^2/2 + K}{C_S(\psi, \phi)} \,.$$
It follows that the equations of motion of the "decoupled system" are Hamilton's equations generated by $H_S$,
$$\frac{d\psi}{d\tau} = \frac{\partial H_S}{\partial p_\psi} \,, \quad \frac{d\phi}{d\tau} = \frac{\partial H_S}{\partial p_\phi} \,, \quad \frac{dp_\psi}{d\tau} = -\frac{\partial H_S}{\partial \psi} \,, \quad \frac{dp_\phi}{d\tau} = -\frac{\partial H_S}{\partial \phi} \,. \quad (73)$$
We would now like to impose that the system undergoes a total collision. By what we have seen in the previous section, this can only happen at a central configuration (the stationary points of $C_S(\psi, \phi)$), and with vanishing dilatational momentum (τ = D = 0). However, imposing that the solution goes through a central configuration at the instant τ = 0 is not enough: it could simply be reaching a minimum of the moment of inertia (a Janus point) with the shape of a central configuration and, past this minimum, grow again without ever hitting a total collision. In order to get an actual total collision, the moment of inertia has to vanish, that is, we need to have that r → 0. However, according to Equation (73), r is no longer part of our description of the system (at this level of description, we are already on shape space). The problem is now: if all we have is the system (73), how can we tell whether we reached a total collision or simply a $\dot{r} = 0$ Janus point with the shape of a central configuration? Is there a 'manifest cause' for a total collision, which can be read off the curve on shape space?
The answer is yes: if some of the shape momenta $p_\psi$ and $p_\phi$ are non-zero at a central configuration, Equations (73) tend to those of a spherical geodesic (in case the central configuration is on the equator of our coordinate system, the term associated with the non-zero Christoffel symbols of the spherical metric on shape space vanishes, and the equations reduce to those of a straight line). Otherwise, if both $p_\psi = 0$ and $p_\phi = 0$, Equations (73) appear to diverge. Indeed, it turns out that at a total collision the shape momenta must vanish (compare the remark above). Reconsider the Hamiltonian constraint (69) and multiply it by $r^2$:
$$\frac{\tau^2}{2} + K - r\, C_S(\psi, \phi) - E\, r^2 = 0 \,. \quad (74)$$
We know, from the discussion of the previous sections, that the dilatational momentum vanishes at a total collision and, therefore, $\tau = r\, p_r \to 0$. Moreover, the complexity function remains bounded, and the quantity E is a constant of motion so, in the limit r → 0, $p_\psi^2 + \cos^{-2}\psi\; p_\phi^2$ must vanish, which implies $p_\psi = 0$ and $p_\phi = 0$ (cf. Reichert [17]). This is a remarkable result in itself, for it tells us that there exists a unique description of total collisions on (scale-free) shape space.
We conclude that, in order to discuss a total collision, we need to focus on those solutions of Equations (73) which are perfectly tuned to reach a central configuration with exactly zero shape momenta. Let us now expand Equations (73) in the vicinity of a central configuration, $\phi = \phi_0 + \delta\phi$, $\psi = \psi_0 + \delta\psi$, and assume for simplicity that our coordinate system places this central configuration on the equator (i.e., $\psi_0 = 0$). Here $H_{ij} = \partial_i \partial_j \log C_S\big|_{\psi=\psi_0, \phi=\phi_0}$ are the components of the Hessian matrix of the logarithm of the complexity function at the central configuration. Now we can assume that $p_\phi$ and $p_\psi$ are small too, as we want to focus on a total collision, which will make them vanish at τ = 0, so we can write $p_\phi = 0 + \delta p_\phi$ and $p_\psi = 0 + \delta p_\psi$ and expand at first order in the $\delta p_i$; this yields the linear system (76). This last step killed the Christoffel term, and gave us a set of linear equations that can be diagonalized and solved.
Asymptotics of Total-Collision Solutions
Equation (76) can be diagonalized. Let $\lambda_i$ be the i-th eigenvalue of the Hessian matrix H with components $H_{ij} = \partial_i \partial_j \log C_S\big|_{\psi=\psi_0, \phi=\phi_0}$. Then $H = T^{-1} \Lambda T$, where Λ is the diagonalized matrix (with the eigenvalues $\lambda_i$ as diagonal entries) and T is composed of the normalized eigenvectors. Multiplying Equation (76) from the left with T, one obtains a decoupled system for the eigencoordinates $\rho_i$ and their momenta $\pi_i$. As a system of first-order ODEs, it clearly does not satisfy the hypotheses of the Picard-Lindelöf theorem at τ = 0: the right-hand side of the first equation is not continuous there, let alone Lipschitz-continuous.
To solve the above equations, multiply the first by $\tau^2$ and differentiate; replacing the second equation to eliminate $\pi_i$, one obtains a second-order equation for each eigendirection, whose indicial equation admits the two solutions
$$c_\pm(\lambda_i) = -\frac{1}{2} \pm \sqrt{\frac{1}{4} + \frac{8}{\lambda_i}} \,,$$
and, therefore, the general solution of the differential equation is
$$\rho_i(\tau) = A^+_i\, \tau^{c_+(\lambda_i)} + A^-_i\, \tau^{c_-(\lambda_i)} \,.$$
We plot the real part of $c_\pm$ vs. λ in Figure 2. We can see how, if λ is negative (as can happen at a saddle point of the shape potential), the real parts of both $c_+$ and $c_-$ are negative. This means that, if we want to impose that the solution converges as τ → 0, we will have to set $A^+_i = A^-_i = 0$ for each negative eigenvalue. If the eigenvalue is positive, then we see from the plot that $c_+ > 0$ while $c_- < 0$ for all $\lambda_i > 0$, so we have to set $A^-_i = 0$.
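The sign pattern just described is easy to check numerically; the following sketch (our own, a numerical restatement of Figure 2 rather than the paper's code) evaluates $c_\pm(\lambda)$ for a few eigenvalues:

```python
import numpy as np

def c_pm(lam):
    """Exponents c±(lambda) = -1/2 ± sqrt(1/4 + 8/lambda)."""
    root = np.sqrt(complex(0.25 + 8.0 / lam))
    return -0.5 + root, -0.5 - root

for lam in (-40.0, -4.0, 0.5, 3.0):
    cp, cm = c_pm(lam)
    print(f"lambda = {lam:6.1f}:  Re c+ = {cp.real:6.3f},  Re c- = {cm.real:6.3f}")
# lambda < 0: both real parts negative  ->  A+ = A- = 0 for convergence;
# lambda > 0: Re c+ > 0 > Re c-         ->  A- = 0.
```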
Generalization to Arbitrary N and Non-Zero Energy
To generalize the result from N = 3 to arbitrary N, we consider the kinetic metric on the extended configuration space: in terms of the mass-rescaled Jacobi coordinates $\boldsymbol{\rho}_a$, $a = 1, \dots, N-1$, it becomes diagonal, and the $\boldsymbol{\rho}_a$ coordinatize the relative configuration space (the configuration space quotiented by translations). Now, we can separate the scale and the scale-invariant degrees of freedom by defining the scale coordinate (the square root of the center-of-mass moment of inertia), and the translation-invariant configuration space appears now as the Cartesian product of the scale coordinate $r \in \mathbb{R}^+$ and a (3N - 4)-dimensional hypersphere which we call pre-shape space. (Pre-shape space is the quotient of the extended configuration space, $\mathbb{R}^{3N}$, by dilatations and translations alone, keeping the redundancy due to rotations.) To further quotient global rotations out, we need to exploit the fact that they act as an SO(3) subgroup of the rotation group SO(3N - 3) that realizes the isometries of the (3N - 4)-sphere of pre-shape space. Quotienting a sphere by a subgroup of its rotation group always results in another sphere; in our case, the end result is a (3N - 7)-sphere: shape space. The kinetic metric then decomposes according to a formula analogous to Chazy's kinetic energy decomposition theorem and can be written in hyperspherical coordinates $\varphi^A$ on pre-shape space. There are 3N - 4 momenta $\pi_A$ conjugate to the $\varphi^A$, $A = 1, \dots, 3N - 4$, one momentum $p_r$ conjugate to r, 3 momenta $\mathbf{p}_{cm}$ conjugate to $\mathbf{r}_{cm}$, and 3 components of the total angular momentum $\mathbf{L} = \sum_{a=1}^{N-1} \boldsymbol{\rho}_a \times \boldsymbol{\pi}_a$. The kinetic energy can then be decomposed accordingly, where $g^{AB}(\varphi^C)$ is the inverse of the hyperspherical metric, and the Newton potential can be written with no dependence on the coordinates of the center of mass, which of course implies that their equations of motion are $\ddot{\mathbf{r}}_{cm} = 0$ and their motion can be decoupled from the rest. Assuming now that the angular momentum is zero, and after reabsorbing the kinetic energy of the center of mass into E, the Hamiltonian constraint takes the same form as Equation (69). If the total energy E is zero, by replacing $p_r = \tau/r$ and solving for r, we get a unique solution, $r = (\tau^2/2 + K)/C_S$, and the corresponding pre-shape space Hamiltonian is
$$H_S = \log \frac{\tau^2/2 + K}{C_S(\varphi^A)} \,;$$
the structure of the equations is identical to those of the 3-body problem (73). If we now expand to first order around $\varphi^A = \varphi^A_0$ (the coordinates of a central configuration) and $\pi_A = 0$, we get a linear system of the same form as (76), where $H_{AB} = \frac{\partial^2 \log C_S}{\partial \varphi^A \partial \varphi^B}$ is the Hessian matrix. Now, the Hessian can be diagonalized as before, but there are three zero eigenvalues, associated with the directions corresponding to global rotations. For these, we have to put the corresponding momenta to zero, because they are equal to the three components of the total angular momentum of the system and, unless that is zero, the total collision cannot take place. For those three pre-shape space degrees of freedom, therefore, the equations of motion just say that they are constants, and their conjugate momenta are zero. One is then left with 3N - 7 effective equations, one for each independent true shape degree of freedom, of the form:
$$\tau^2\, \frac{d^2 \rho_i}{d\tau^2} + 2\tau\, \frac{d\rho_i}{d\tau} - \frac{8}{\lambda_i}\, \rho_i = 0 \,,$$
with the $\lambda_i$ real constants depending only on the mass ratios $m_a/m_b$.
If the total energy is not zero, one has a quadratic equation to solve for r but, to avoid having to deal with multiple solutions, we can exploit the fact that r is small near the total collision and solve the Hamiltonian constraint perturbatively. The corresponding equations of motion acquire deformation terms which, at first order in $\pi_A$ and $\varphi^A - \varphi^A_0$, modify the linear system above; both deformation terms are irrelevant as τ → 0, compared with the undeformed ones, which diverge like $\tau^{-2}$.
The Stratified Manifold of the Total-Collision Solutions
Each central configuration comes with 3N - 7 real eigenvalues $\lambda_i$. Depending on the nature of the central configuration, they may all be positive (if we are at the minimum of $C_S$, the equilateral triangle in the N = 3 case, which has been conjectured to be unique for all N), or some of them may be negative (in the case of a saddle point, like the three collinear Euler configurations in the three-body problem). From Figure 2, we see that each negative eigenvalue corresponds to a pair of exponents $c_+, c_-$ with negative real part and, therefore, the corresponding shape degree of freedom cannot hope to converge to its central-configuration value, unless both integration constants $A^+$ and $A^-$ are set to zero. On the other hand, for each positive eigenvalue, the integration constant $A^-$ has to be put to zero in order for the solution to converge. Let us assume first that, at the central configuration of interest, there are M distinct positive $\lambda_i$'s and 3N - 7 - M negative ones, and assume also that the positive eigenvalues are ordered from largest to smallest, $\lambda_1 > \lambda_2 > \cdots > \lambda_M$ (so that $c_+(\lambda_1) < c_+(\lambda_2) < \cdots < c_+(\lambda_M)$, since $c_+$ decreases with λ). Then only M integration constants remain unspecified, and the solutions are of the following form:
$$\rho_i(\tau) = A^+_i\, \tau^{c_+(\lambda_i)} \,, \qquad i = 1, \dots, M \,. \quad (94)$$
If $A^+_1 \neq 0$, then the solution curves all approach the central configuration with the same tangent, parallel to the principal eigendirection $\rho_1$ (the one corresponding to the largest eigenvalue), and away from it they splay out in all the $\rho_2, \dots, \rho_M$ directions, at a pace that is determined by the values of the other integration constants $A^+_2, \dots, A^+_M$. This is easy to prove: the tangent vector to the parametrized curves is $\left( c_+(\lambda_1) A^+_1\, \tau^{c_+(\lambda_1)-1}, \dots, c_+(\lambda_M) A^+_M\, \tau^{c_+(\lambda_M)-1} \right)$ and, normalizing it to one, we get a vector that, in the limit τ → 0, tends to (1, 0, . . . , 0). Moreover, these solutions can be divided into two disjoint components, according to whether $A^+_1$ is positive or negative. The former approach the central configuration along the positive-$\rho_1$ axis, the latter along the negative one. In Figure 3, we show an example of this family of solutions for the 3-body problem.
• If $A^+_1$ is zero, but $A^+_2 \neq 0$, the solutions lie in the $\rho_1 = 0$ subspace, and the analysis exposed above can be repeated within this subspace, this time with $\lambda_2$ playing the role of the principal eigenvalue. The solutions all approach the central configuration tangentially to the $\rho_2$ axis, and they divide into two connected components, according to the sign of $A^+_2$;
• If $A^+_1 = A^+_2 = \cdots = A^+_L = 0$, L < M, then the solution lies in the subspace $\rho_1 = \rho_2 = \cdots = \rho_L = 0$, the role of the principal eigendirection is played by $\rho_{L+1}$, the solutions are asymptotically tangent to $\rho_{L+1}$, and they belong to two disconnected components, according to the sign of $A^+_{L+1}$;
• If only $A^+_M \neq 0$, then there are only two solutions, which remain always on the positive (respectively, negative) $\rho_M$ axis;
• Finally, if all the $A^+_i$ are zero, the solution is only the homothetic one, which never changes shape as it falls into a total collision.
What we just described is a stratified manifold of solutions, in which each stratum is obtained from the higher one as the special case in which the first non-zero integration constant of the stratum above is set to zero.
In the case of degenerate eigenspaces (when two or more eigenvalues are identical, which happens, for example, in the three-body problem when the three masses are equal), the count of free integration constants does not change and, therefore, the dimension of the space of solutions is the same as above, as is its structure as a stratified manifold. What changes is the fact that, when the degenerate eigenvalue is the principal one (because it is the largest, or because the integration constants associated with the eigendirections of larger eigenvalues have all been put to zero), the solution curve can approach the total collision from any direction within the degenerate eigenspace.
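The tangent-alignment argument of this subsection can be illustrated numerically. In the sketch below (hypothetical exponents and integration constants, chosen only for illustration), the normalized tangent of $\rho_i(\tau) = A^+_i\, \tau^{c_+(\lambda_i)}$ aligns with the principal eigendirection as τ → 0, because the largest eigenvalue carries the smallest exponent:

```python
import numpy as np

# Exponents c_1 < c_2 < c_3, i.e. eigenvalues lambda_1 > lambda_2 > lambda_3.
c = np.array([0.4, 0.9, 1.7])
A = np.array([0.3, -1.2, 2.0])
for tau in (1e-1, 1e-3, 1e-6):
    v = c * A * tau ** (c - 1.0)             # tangent d(rho_i)/d(tau)
    print(tau, np.round(v / np.linalg.norm(v), 4))
# -> tends to (1, 0, 0): every curve with A_1 != 0 shares the same tangent.
```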
The Essential Singularity of Total Collisions
In the previous subsection, we have shown how the total-collision solutions can only approach the central configuration along one of the eigendirections of the Hessian matrix that are associated with a positive eigenvalue. Moreover, we have shown that, in the case of distinct eigenvalues, the solutions that approach the total collision from the eigendirection associated with the lowest positive eigenvalue are just two. The ones approaching it from the second-smallest eigendirection form two disjoint one-parameter families; the ones approaching from the third-smallest eigendirection form two disjoint two-parameter families, and so on, all the way to the highest stratum, which consists of two disjoint (M - 1)-parameter families of solutions. The largest possible stratum of solutions for N particles can be obtained in the case in which all 3N - 7 eigenvalues are positive, which means that the corresponding central configuration is a minimum of the complexity function; then, there is a stratum which is (3N - 8)-dimensional. So, for example, in the unequal-mass three-body problem, if the total collision asymptotes to an equilateral triangle (the absolute minimum of the complexity function), we get two one-parameter families of solutions.
We know what the tangent to these solution curves does but knowing the tangent is not enough to fix all the integration constants $A^+_i$, while the values of the integration constants determine the solution. Since we are interested in investigating the possibility of continuing each solution in a unique way through the total collision, we want to check whether there exist some variables whose values fix all integration constants, and are well-defined at the total collision. One might look for such 'manifest causes' in the geometry of the curve on shape space which, according to the conjecture at the basis of shape dynamics, captures all there is to know about physical reality. However, one can show that, in the generic case (that is, when none of the constants $c_+(\lambda_i)$ are commensurable), no differential quantity defined on shape space can fix these integration constants, because at total collisions we have an essential singularity. We can see this in the following way. Consider the normalized n-th τ-derivative vector of our solution curve,
$$\frac{d^n \boldsymbol{\rho} / d\tau^n}{\left| d^n \boldsymbol{\rho} / d\tau^n \right|} \,.$$
As τ → 0, this quantity asymptotes, for every n, to the same principal eigendirection (up to a sign). So, imagine we want to join two curves that asymptote to the same central configuration, characterized by integration constants $A^+_i$ and $\tilde{A}^+_i$, one reaching the total collision from below (τ → 0⁻) and one from above (τ → 0⁺). They both reach the same point at τ = 0 so, whatever pair of curves we choose, they will always be continuous. Now, ask that their tangent is continuous: we want the normalized first derivatives to match. This imposes a single condition relating $A^+_1$ and $\tilde{A}^+_1$: the two curves have to approach $\rho_1 = 0$ from the two opposite directions. This can be immediately seen from Figure 3 in the 3-body case. However, if we now hope to fix any further relations between the integration constants by asking that any further normalized derivative is continuous, we are disappointed. Once we assume that $A^+_1$ and $\tilde{A}^+_1$ have opposite signs, all derivatives are automatically continuous. We could join any two curves in Figure 3, provided they live on opposite sides of the black axis, and they would always be infinitely differentiable. This is a behavior that signals the presence of an essential singularity: for example, the function $e^{-1/x}$ as x → 0⁺ tends to zero, as do all of its derivatives. This function is not analytic at zero, because it is the reciprocal of $e^{1/x}$, which is a textbook example of an essential singularity (the function and all of its derivatives diverge at zero).
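The $e^{-1/x}$ model of the essential singularity is easy to verify symbolically; a short sketch (ours, assuming SymPy is available):

```python
import sympy as sp

# Every derivative of exp(-1/x) tends to zero as x -> 0+, even though
# the function is not analytic there.
x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)
for n in range(5):
    print(n, sp.limit(sp.diff(f, x, n), x, 0, '+'))   # all limits are 0
```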
There are exceptions to this result in the special case in which, due to the particular values of the eigenvalues $\lambda_i$ and $\lambda_j$, the ratio of the associated constants $c_+(\lambda_i)/c_+(\lambda_j)$ is a rational number. Then, in this case, there exist integers α and β such that the ratio of the variables $\rho_i^\alpha$ and $\rho_j^\beta$ admits the finite limit $(A^+_i)^\alpha / (A^+_j)^\beta$ as τ → 0, which allows us to extract some information on the integration constants $A^+_i$ and $A^+_j$ at the singularity. Then, if all M positive eigenvalues are such that the corresponding constants $c_+(\lambda_i)$ are commensurable, we can define a set of M variables, by raising the $\rho_i$ to appropriate integer powers, that all tend to zero at τ → 0 as the same power of τ. The simplest such case is that of all-equal eigenvalues, where all the $\rho_i$ converge to zero with the same power law. Then, in this case, all solutions can be continued uniquely at the singularity, and there is a simple change of variables that makes the equations of motion regular there. These cases, however, account for only a countable set of choices of masses, and the generic situation is the one described above, of an essential singularity preventing continuation.
Conclusions
As shown in [8,9], the dynamics of the N-body problem can be equivalently formulated as a non-autonomous system of ODEs on shape space, reducing the system to its irreducible core of physical degrees of freedom. In this formulation, as was shown in [17], the total-collision solutions can be characterized neatly as solutions that end at a central configuration with zero dilatational momentum and zero shape momenta. The question then arises of whether these solutions can be regularized in the manner of two-body collisions, or continued through the singularity, similarly to what was done for cosmological solutions of general relativity in [18,19,20]. Regardless of whether the system has positive or zero energy, the asymptotics of the total-collision solutions is universal, and it is captured by Equations (94), which are completely determined by the eigenvalues of the Hessian matrix of the (log of the) shape potential at the central configuration. If the central configuration is a minimum of the shape potential, these eigenvalues are all positive and one has a manifold of total-collision solutions of maximal dimension (3N - 8), and for each negative eigenvalue the dimension of the total-collision manifold decreases by one. The manifold has the structure of a stratified manifold, each stratum obtained by considering the integration constants that were non-zero in the stratum above, and setting to zero the one that corresponds to the highest eigenvalue. In each stratum, the solution curves approach the singularity tangentially to the eigendirection corresponding to the highest eigenvalue whose integration constant is non-zero.
At the singularity, unless one considers very special choices of masses (e.g., all identical), the dynamical system has an essential singularity, which erases (at least some) information regarding all finite-degree derivatives of the dynamical variables, much like the limit x → 0 of the $e^{-1/x}$ function, whose derivatives are all zero at the singularity. This mirrors what was found in certain homogeneous-but-non-isotropic cosmological models (namely Bianchi IX), where the system, when studied on shape space (in the case of general relativity, by shape space we mean the space of conformal 3-geometries; similarly to what happens in the N-body problem, a curve on this shape space codifies all the information that is necessary to reconstruct uniquely a solution of general relativity [9]), behaves like a chaotic billiard ball (what Misner nicknamed "mixmaster behavior"), which bounces an infinite number of times in any finite proper-time interval ending at the big bang singularity. This, too, is an essential singularity: the limit set of the dynamics is the border of shape space (which has the topology of a circle), but the location on this border does not admit a well-defined limit, much like the value of sin(1/x) when x → 0 (another classic example of an essential singularity).
In the case of Bianchi IX, however, there is a simple extension of the model that removes this singularity: adding a scalar field whose potential does not grow too fast for large values of the field [18,19]. The scalar field changes the asymptotics of the shape momenta in such a way that the "mixmaster" chaotic behavior stops after a finite number of bounces, and the system settles on a so-called "quiescent" solution that admits a well-defined limit at the singularity. This is the foundation of the result of [18,19] on the continuation of these solutions through the singularity. Interestingly, this regularization could be attributed to quantum effects, because the Starobinsky potential satisfies the conditions specified in [19] for the onset of quiescence. In fact, a scalar field with this particular potential emerges as the lowest-order quantum correction to the Einstein-Hilbert action in an effective field theory approach (it is due to an $R^2$ term in the action).
It is possible that the total collisions of the N-body model we studied in the present paper might admit a similar regularization, at the cost of adding some correction terms to the dynamics, which become relevant only near a singularity. This would, however, be a departure from the purely Newtonian N-body problem, and is beyond the scope of the present paper.
"year": 2021,
"sha1": "ca899829ce0fd3d6f2983c0c816aa19e4888e016",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/13/9/1712/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "ca899829ce0fd3d6f2983c0c816aa19e4888e016",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Complex Time Evolution of Open Quantum Systems
We combine, in a single set-up, the complex time parametrization in path integration and the closed-time formalism of non-equilibrium field theories, to produce a compact representation of the time evolution of the reduced density matrix. In this framework we introduce a cluster-type expansion that facilitates perturbative and non-perturbative calculations in the realm of open quantum systems. The technical details of some very simple examples are discussed.
Introduction.
In recent years there has been increasing interest in the consistent description of the dynamics of open quantum systems [1][2][3][4][5]. Quantum decoherence and dissipation are very important phenomena in many different areas of physics. A non-exhaustive list includes problems from quantum optics to many-body and field-theoretical systems. Dissipative processes play a basic role in the quantum theory of lasers and photon detection, and they are equally important in nuclear fission and the deep inelastic collisions of heavy ions. More recently, the influence of the environment on a quantum system emerged as an issue of crucial importance, not only due to its fundamental implications, but also due to its practical applications in quantum information theory [8][9][10]. In fact, during the last decade, many new discoveries regarding the physics of open quantum systems were made. Primary examples of promising progress can be found in the rapidly developing field of quantum optics and the connected continuous-variable systems in quantum computation [12,13].
Theoretical studies of decoherence and dissipation in quantum mechanics are centered on the time evolution of the reduced density matrix of a system embedded in a specific environment. The basic tools for studying the reduced dynamics are either effective equations of motion, where the dynamics of the environment are eliminated, such as the Lindblad master equation [6,7], or the influence functional technique introduced by Feynman and Vernon [14]. The latter is based on the path integral approach, and was used by A. Caldeira and A. Leggett [15] in the study of the quantum Brownian motion more than twenty years ago.
In most cases, however, neither the Lindblad equation nor the influence functional can be exactly evaluated, since the interaction between the system and the environment is too complicated. In fact, simulating the environment by a system whose degrees of freedom are treated as random variables following a more or less simple distribution is a rather common practice. Therefore, one usually relies on some simple, specific system-environment models: a harmonic oscillator or a two-level quantum mechanical system embedded in a (thermal) bath of other harmonic oscillators or other spin systems. In the present work we aim to introduce and investigate calculational tools capable of exploring the behavior of an open system in interaction with a specific quantum environment. To be precise, we investigate the possibility of extending the calculational capability of the Feynman-Vernon path integral approach by adopting and combining definite functional methodological tools already known from different research fields. The first such tool is a combination of the well-known "closed (real) time formalism" [18] with the (equally well-known) imaginary time formulation [2] in the context of path integration. The compound result, called the "closed complex time formalism" (or CCT), enables us to isolate, in a simple and compact expression, the influence of the environment on the evolution of the system. It is well known that, in general, the integration of the environmental degrees of freedom does not produce a local "effective action" that controls the dynamics of the sub-system. The so-called Feynman-Vernon action, which incorporates the influence of the environment, is a highly non-local object: it is non-local in time and in space. The proposed CCT technique has a well-defined result: it produces an influence functional that can be viewed as an action local in space. In this action the paths are defined on the complex plane and they are parametrized with the help of a "time" running along a specific contour of the complex plane. The interest in such a formalism is not "theoretical" but practical: one hopes to transfer the existing richness of perturbative and non-perturbative path integral techniques into the realm of open quantum systems.
Our second suggestion, strongly related to the first one, is the application of the so-called "cluster expansion" in the CCT context. The foundation of the application of this very powerful technique lies, of course, in the spatial locality (on the complex plane) of the influence functional. The cluster (or cumulant) expansion results in an expression that can be viewed as the "effective action" that governs the dynamics of the system after the elimination of the environmental degrees of freedom. However, in general, the cluster expansion produces an infinite series that contains all orders of the environmental connected correlators and, if it is to be useful, some kind of truncation is necessary. As a first step in this direction, we consider the case in which the environmental correlators decrease very fast. Our formalism allows us to prove, quite generally and without any reference to a specific model, that the two-point environmental correlator (which is the most important one in our approximation scheme) has all the properties that can lead the subsystem to decoherence and dissipation.
It is worth noting that our proposal can be extended to systems with an infinite number of degrees of freedom, such as the electromagnetic field interacting with matter or other field-theoretical systems.
The remainder of the paper is organized as follows: In Section 2 we present the details of the complex time formalism in the context of the path integral formulation of the Feynman-Vernon influence functional, and we discuss the assumptions under which the aforementioned formalism is applicable. In Section 3 we apply the cluster expansion in the framework of the CCT formalism, and we discuss the emergence of some quite general and very important properties of the influence functional. In subsection 3.1 we provide a specific example of an environment which is just a simple harmonic oscillator (or a collection of non-interacting harmonic oscillators). In Section 4 we consider the case of an environment in which the correlations decay very fast after some characteristic time interval. This stochastic behavior truncates the cluster series, enabling explicit calculations pertaining to the open system per se. As a first step in this direction, in the same section we calculate the entanglement entropy of a simple harmonic oscillator. Finally, in Appendix A we present the details of the calculation needed for deriving the results appearing in section 4.
Time Evolution and the Closed Complex Time Formalism.
The best way to interpret the usefulness of the closed complex time methodology (CCT from now on) is the examination of the time evolution of the reduced (environment-averaged) density matrix of an open quantum central system (s from now on), which interacts linearly with its environment (e from now on). Adopting the usual starting point, we assume that the total Hamiltonian can be written as the sum of two parts that refer to the system and the environment respectively, and a third part describing their interaction:
$$H = H_s + H_e + H_{int} \,. \quad (1)$$
The total system evolves in time unitarily and, consequently, the reduced density matrix changes in time according to the equation:
$$\rho_s(t) = \mathrm{tr}_e\!\left[ \hat{U}(t)\, \rho(0)\, \hat{U}^\dagger(t) \right] . \quad (2)$$
The dynamical content of the last expression is incorporated into a time evolution operator that contains the degrees of freedom of the whole system:
$$\hat{U}(t) = \mathcal{T} \exp\!\left( -\frac{i}{\hbar} \int_0^t dt'\, H(t') \right) . \quad (3)$$
In the last expression we have taken into account a possible time dependence of the Hamiltonian. Physically, we understand such dependence in various ways; for example, we can imagine that, after a sudden quench, the coupling between the central system and its environment changes to a different value, remaining constant henceforth. A case of physical interest arises when the coupling changes continuously and slowly enough to consider the evolution of the whole system as adiabatic. Another example is the well-studied case of an external time-dependent field coupled linearly to the central system. In any case, the operator $\mathcal{T}$ takes care of the needed time ordering. For now, let us assume that, at the initial time $t_0$ (for the sake of convenience, in what follows we shall assume that $t_0 = 0$), the total system is prepared in a pure disentangled state:
$$\rho(0) = \rho_s(0) \otimes \rho_e(0) \,. \quad (4)$$
Consequently, we can rewrite the reduced density matrix in the form of Equation (5). Denoting by x and q the coordinates of the central system and the environment respectively, and by $X = (x_1, \dots, x_D; q_1, \dots, q_D)$ the coordinates of the whole system collectively, Equation (5) can be written in the well-known form (6), in which the propagating kernel can be read from expression (7). Our next assumption is that the environment is initially in its ground state:
$$\rho_e(0) = |0_e\rangle \langle 0_e| \,. \quad (8)$$
Then it can easily be shown [19] that the ground-state wave functions entering $\rho_e(0)$ can be written as Euclidean path integrals, Equation (9), in which we denote by $L^{(E)}_e$ the Euclidean version of the Lagrangian describing the dynamics of the environment. The origin of Equation (9) can be traced back to the Euclidean propagator (10): introducing the Euclidean time τ = it, taking the limits τ = -T_E, τ = 0, $T_E \to \infty$, and assuming that the ground state is unique, one can easily deduce (Equation (11)) that the ground state wave function can be determined through an integration of the Euclidean propagator (Equation (12)). The above relation is the basis of Equation (9), in which we also introduced the normalization factor ensuring that $\mathrm{tr}_e[\rho_e(0)] = 1$, and we used a numbering convenient for our future considerations. To proceed further, we write the real-time evolution kernels as path integrals, Equations (14) and (15).

Figure 1: The contour C.

Inserting Equations (9), (14) and (15) into expression (7), we find Equation (16). The last factor in this equation defines the well-known Feynman-Vernon functional [14], Equation (17), which incorporates the influence of the environment on the time evolution of the system. Up to this point, the only difference of the last result from the usual line of thinking [1][2][3][4] is that we consider the environment not as a heat bath in thermal equilibrium but as a quantum system (probably a very complicated one) in its ground state.
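Equation (2) is straightforward to realize numerically in a toy model. The sketch below (a hypothetical pair of qubits, chosen only to illustrate the structure "unitary evolution of the whole, then partial trace over the environment"; it is not the system studied in this paper) shows the mechanics:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
g = 0.4
H = np.kron(sz, I2) + np.kron(I2, sz) + g * np.kron(sx, sx)  # H_s + H_e + H_int

rho_s0 = 0.5 * np.ones((2, 2), dtype=complex)     # pure state |+><+|
rho_e0 = np.diag([0.0, 1.0]).astype(complex)      # ground state of H_e = sz
rho0 = np.kron(rho_s0, rho_e0)                    # disentangled, Equation (4)

U = expm(-1j * H * 2.0)                           # unitary evolution of the whole
rho = U @ rho0 @ U.conj().T
rho_s = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over e
print(np.round(rho_s, 3))   # entanglement with e suppresses the coherences
```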
The expression for the influence functional can now be considerably simplified if we introduce the complex variable z defined on the contour C shown in Fig. 1. This contour consists of 4 different straight lines. The first line $L_1$ goes parallel to the real axis from the point z = t - i0 to the point z = 0 - i0. The second line $L_2$ begins at the point z = 0 - i0 and, following a path along the imaginary axis, goes to z = 0 - i∞. The line $L_3$ traces a path along the imaginary axis and joins the points z = 0 + i∞ and z = 0 + i0. The last part of the contour is the straight line $L_4$: it goes parallel to the real axis from the point z = 0 + i0 to the point z = t + i0. It can now easily be proved that the "action" in the influence functional (17) can be written as in Equation (18). The notation in the last equation is defined as follows: along the lines $L_i$, i = 1, . . . , 4, we write the paths as $(x^{(i)}, q^{(i)})$, and we have introduced a contour-dependent coupling $g_c$, with $g_{L_1} = g_{L_4} = g$ and $g_{L_2} = g_{L_3} = 0$. In expression (18) we have explicitly assumed that the interaction between the system and the environment is linear and has the minimal bilinear coupling form $H_{int} = g\, x\, q$. In what follows we shall also assume that the coupling g is time-independent, but our considerations can easily be generalized to a time-dependent coupling.
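For orientation, here is a hypothetical discretization of the contour C and of the contour-dependent coupling $g_c$ (our own sketch, with a finite Euclidean cutoff $T_E$ standing in for the infinite branches of $L_2$ and $L_3$):

```python
import numpy as np

def contour(t, T_E, n=100):
    eps = 1e-9
    L1 = np.linspace(t, 0, n) - 1j * eps      # t - i0    -> 0 - i0
    L2 = -1j * np.linspace(0, T_E, n)         # 0 - i0    -> 0 - i*T_E
    L3 = +1j * np.linspace(T_E, 0, n)         # 0 + i*T_E -> 0 + i0
    L4 = np.linspace(0, t, n) + 1j * eps      # 0 + i0    -> t + i0
    z = np.concatenate([L1, L2, L3, L4])
    # coupling g_c = g on L1 and L4, zero on the Euclidean lines L2 and L3:
    g_c = np.concatenate([np.ones(n), np.zeros(2 * n), np.ones(n)])
    return z, g_c

z, g_c = contour(t=2.0, T_E=50.0)
```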
To confirm that Equation (18) does indeed represent the action in the influence functional, let us note that along the lines $L_1$ and $L_4$ we can write z = t' - i0 and z = t' + i0 respectively (with t' real), and consequently the contour integrals reduce to the real-time expressions (19) and (20). Along the lines $L_2$ and $L_3$ we write z = 0 - iτ and z = 0 + iτ, and thus we obtain the Euclidean expressions (21) and (22). Inserting Equations (19), (20), (21) and (22) into Equation (17), and imposing the boundary condition (23) (continuity of the environmental paths along the closed contour), we get the compact expression (24) for the influence functional ($S_{FV}$ stands for the Feynman-Vernon action).
As is obvious from the above expression, the introduction of the complex time z, defined on the contour C, has enabled us to interpret the influence functional as an integral over continuous paths with periodic boundary conditions. In fact, the compactness of the result indicated in Equation (24) is the essence of the CCT formalism. At this point it may be useful to summarize our assumptions. The first one was that initially the system and its environment were disentangled (see Equation (4)). This assumption is not crucial either for the appearance of the influence functional or for the implementation of the closed complex time formalism.
The basic assumption for the latter is that the time evolution begins from a ground state (see Equation (8)). To confirm this statement, let us assume that initially the central system and the environment were entangled, and that the whole system was in the ground state of the Hamiltonian $H(\hat{P}, \hat{X}, g(0))$. Consider now the time evolution with a Hamiltonian $H(\hat{P}, \hat{X}, g(t))$, in which the coupling between the system and the environment changes very slowly (or has a different constant value). The evolution of the reduced density matrix now reads as in Equation (25). Following the reasoning that led us to Equation (9), we can write the initial density matrix in the form (26). Using the CCT formalism, the last equation can be rewritten as (27), and thus the evolution of the reduced density matrix assumes the compact form (29). A detailed analysis of the time evolution of a non-product initial state, under a time-dependent Hamiltonian, will be presented in a forthcoming study. For the time being, we focus on the case of a disentangled initial state and a time-independent Hamiltonian.
The Cluster Expansion.
It is self-evident that any further calculational step strongly depends on the dynamical details of the environment, as well as on the specific form of the interaction between the latter and the system. In any case, the compact formulation indicated in Equations (24) or (29) can be combined with all the existing calculational technologies to produce concrete results in the field of open quantum systems. In this framework it is very convenient to use a well-known and very powerful technique: the so-called cluster or cumulant expansion. This fundamental technique is widely used in a great variety of problems, from statistical physics to quantum field theories [20]. The methodology has been extensively used in areas such as the resummation of perturbative series and non-perturbative estimations, among others, and has proven to be a very successful tool.
In our case, the cluster expansion theorem can be read from the relation (30), with the connected correlators (cumulants) defined in Equation (31). In Equation (30) we have introduced the chain of path-dependent step functions of Equation (33). When the variables z are integrated along different lines, the step functions become identically 1 or 0: for example, if $z \in L_1$ and $z' \in L_4$, we define $\theta_{L_1 \cup L_4}(z - z') = 1$, because the time along the line $L_1$ decreases, and this happens after its growth along the line $L_4$.
The validity of Equation (30) with the definition (33) can be readily proven by expanding the corresponding exponentials. The proof can also be easily extended to the case of non-commuting quadratic matrices with the help of a proper time ordering. Taking into account the above conventions, any well-known result of ordinary path integration can be transferred into the complex-time framework as it is defined by expressions (24), (30) and (31).
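The content of the cluster expansion is easiest to see in its simplest setting: for a Gaussian random variable the cumulant series truncates at second order. The following sketch (ours, not taken from the paper) checks this numerically:

```python
import numpy as np

# For Gaussian q: <exp(j q)> = exp(j <q> + j^2 <q^2>_c / 2), exactly.
rng = np.random.default_rng(1)
q = rng.normal(loc=0.3, scale=0.7, size=1_000_000)
j = 0.5
lhs = np.exp(j * q).mean()
rhs = np.exp(j * q.mean() + 0.5 * j ** 2 * q.var())
print(lhs, rhs)   # equal up to Monte-Carlo error
```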
From the preceding analysis we saw that the influence of the environment has been incorporated into the correlators, which must be integrated along a closed contour C defined on the complex plane and consisting of 4 lines in a definite order, determined by the defining expression (2) for the evolution of the density matrix. The time flow along the aforementioned contour is not causal, in the sense that its growth (along the line $L_4$) comes after its decrease (along the line $L_1$), a fact that is taken into account in the properly defined path-dependent step functions.
As is evident from the definition of the path integral in Equation (17), and from the fact that the couplings disappear along the imaginary axis, non-trivial correlations can exist only along the lines $L_1$ and $L_4$, or between them. This is closely related to the fact that initially the central system and its environment were disentangled. However, as we have already seen, the CCT formalism can also be applied if the system and its environment were initially entangled. In such a case, non-trivial correlations can exist among all of the lines of the contour C.
At this point, we can highlight the properties of the fundamental functions (34) by discussing some of the properties of the two-point correlator, which is supposed to be invariant under space rotations and time translations: A first observation is that it must have a non-vanishing imaginary part due to the imaginary period over which it is defined. To be concrete, let us consider the propagation along the line L_1: Along the line L_4 the time flow is reversed, and consequently: At this point we can appeal to the hermiticity of the density matrix: The influence functional must remain the same if we interchange x^(1) and x^(4) while taking the complex conjugate.
The last action reverses the time ordering along the contour C, and consequently the function ∆_{L_1} must be anti-hermitian: Thus we immediately conclude that the real part of the propagator (36) is an odd function, while its imaginary part is an even function of time: The exchange contributions can also be deduced with the same reasoning: Since, as we have discussed, the time along L_1 is after the time along L_4, the exchange from the line L_4 to the line L_1 is controlled by a function G_{L_4∪L_1}(t_2 − t_1) in which t_2 < t_1, while the exchange from the line L_1 to the line L_4 must be controlled by a function in which the opposite ordering holds. The trace of the reduced density matrix must be equal to one, and, consequently, the Feynman-Vernon action must go to zero as x^(4) → x^(1). This can happen only if the (forward) propagation L_4 → L_1 exactly cancels the (forward) propagation along L_4, and the (backward) propagation L_1 → L_4 exactly cancels the (backward) propagation along L_1. These arguments show clearly that, quite generally, the order g^2 contribution to the Feynman-Vernon action assumes the form: It is now readily evident that the Feynman-Vernon action considerably changes the dynamics of the central quantum system. Its fluctuating part, which is connected to the imaginary part of the line propagator, reduces coherence. It is customary [1,2] and convenient to re-express its real part, which is connected to the real part of the line propagator, with the help of an even function γ(t_2 − t_1) = γ(t_1 − t_2) through the relation: The function γ introduces in the Feynman-Vernon action a term which, on the classical level, can be understood as a damping or "friction" term. Feeding eq. (43) with expression (44) we immediately find that the result, being the exact contribution of the second cumulant in the cluster expansion of the Feynman-Vernon action, is not an approximate one. Despite the fact that it formally reproduces a colored-noise simulation of an uncontrollable environment [1,2], it is the first term in a systematic approximation of the environmental dynamics.
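For comparison, in the standard Feynman-Vernon/Caldeira-Leggett notation (conventions and prefactors vary between references, so this is only an orientation, not the paper's eq. (43)), a second-order influence phase built from a real noise kernel ν and a dissipation kernel η has the structure

$$ i S^{(2)}_{FV}[x,x'] \;\sim\; -\int_{0}^{t}\! dt_1 \int_{0}^{t_1}\! dt_2\, \big[x(t_1)-x'(t_1)\big] \Big\{ \nu(t_1-t_2)\big[x(t_2)-x'(t_2)\big] \,-\, i\,\eta(t_1-t_2)\big[x(t_2)+x'(t_2)\big] \Big\}, $$

where the ν-term suppresses off-diagonal coherences and the η-term produces the classical friction encoded here in γ.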
A Simple Example.
As a specific example, let us compute, in the framework of the preceding analysis, the influence functional for the case in which the environment is just a simple harmonic oscillator. In this very simple case only one term appears in the rhs exponential in eq. (24): The Green function appearing in the last equation obeys periodic boundary conditions and assumes the well-known form with The period is obviously imaginary, T = −2iT_E, and consequently: Given that g_{L_1} = g_{L_4} = g and g_{L_2} = g_{L_3} = 0 we can split the integration in (47) as follows: where we used the notation: In the last integral we have connected the result pertaining to the specific choice (46) with the general result (36) through the relations (53), in which the time dependence enters via sin ω_e(t_2 − t_1).
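As a point of reference, the textbook periodic (imaginary-time) Green function of a unit-mass harmonic oscillator with period β, presumably the "well-known form" invoked above up to the paper's conventions, is

$$ G(\tau) \;=\; \frac{1}{2\omega}\, \frac{\cosh\!\big(\omega\,(\beta/2 - |\tau|)\big)}{\sinh\!\big(\omega\beta/2\big)}, \qquad |\tau| \le \beta, $$

which satisfies $(-\partial_\tau^2 + \omega^2)\,G(\tau) = \delta(\tau)$ with periodic boundary conditions; here $\beta = 2T_E$.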
The second term in eq. (47) reads: With the same reasoning the last term in eq. (47) takes the form: Inserting eqs. (52), (54) and (55) into eq. (47) we confirm the general result (43) with the specific expressions (53) for the real and the imaginary part of the line propagator. These forms can be readily extended to the case of a collection of N harmonic oscillators: The last expressions are obviously the T → 0 limit of the well-known result for an environment which is a heat bath consisting of a collection of harmonic oscillators in thermal equilibrium [15].
The Stochastic Environment.
The cluster expansion discussed in the previous section helped us to interpret the Feynman-Vernon action, and consequently the influence functional, as an infinite series over all possible correlations among the environmental degrees of freedom. However, it is evident that such an interpretation can be useful only if the infinite series can be truncated with negligible error. The case of weak coupling between the system and its environment is a first and obvious example; we shall not discuss this occurrence in the present paper, but it is worth noting that the use of the cluster expansion facilitates the resummation of the (asymptotic) perturbative series.
In the present study we adopt the hypothesis that the dynamics of the environment establish a characteristic time scale τ_e after which all internal correlations decay very fast: The scale τ_e appearing in our starting relations (57) is such a time interval that, when it elapses, the environment returns to its initial state. We shall also assume [1,2,8,10,11] that there exists a second distinct time scale τ_s, characterizing the interaction between the two parts of the entire system and, consequently, the evolution of the reduced density matrix, which is much larger than τ_e: τ_s ≫ τ_e.
In order to be more precise, let us assign an order of magnitude ‖K^(2)‖ to the second-order cumulant appearing in eq. (30). We shall consider as stochastic the limit: As clearly shown by its definition, ‖K^(2)‖ is a measure of the average "strength" of the interaction between the central system and its environment: ‖K^(2)‖ ∼ V. Defining the time scale τ_s as τ_s ∼ ℏ/V, the limit indicated in eq. (58) can obviously be rephrased as τ_e/τ_s → 0.
We can now examine how the cluster expansion is formed at the stochastic limit. Assuming that ⟨q⟩ = 0, the first non-vanishing contribution comes from the second-order term, which, following the discussion in the previous section, assumes the quite general form (45).
As we are interested in t ≫ τ_e we take into account eqs. (57) and (58), and, performing the expansion, we get In the last expression we have introduced the quantity: In the same way, the second term in the rhs of eq. (45) can be approximated as follows: With the help of a time rescaling t_i = τ_e t̃_i and using the defining relation for the γ function (see eq. (43)) we can estimate that: After the preceding approximations the second-order contribution to the Feynman-Vernon action reads: Our claim is that, at the stochastic limit (58), the cluster expansion, and consequently the Feynman-Vernon action, is dominated by the second-order cumulant which, in this case, is expressed by the above-written eq. (64). Indeed, each of the terms K^(n) in the cumulant expansion represents a cluster that must be integrated over time intervals much larger than the time scale characterizing its exponential decay. Thus, in the integrals (59) and (61), we conclude that: This conclusion can be used to give a concrete meaning to an environment characterized as stochastic: it is the environment whose influence can be approximated by keeping only the second-order correlator in the cluster expansion.
In other words, the Feynman-Vernon action, at the stochastic limit, can be approximated as follows: At this point we must underline, once again, the strong resemblance of our result (64) to the case of the so-called Ohmic environment [1-4,15]; that is, to the case of the quantum-mechanical simulation of a white-noise reservoir. Despite the fact that the expression (64) for the Feynman-Vernon action is, in both cases, formally the same, our result must be understood in a different context: it is the first term in a systematic approximation of an exact result which is supposed to be valid at zero temperature. The parameters appearing in eq. (64) are not phenomenological, but are strictly related to the two-point correlation function of the environment, and, in principle, can be calculated at least numerically. In the same context, the expression (24), which is approximated by (67), does not represent the introduction of a random complex-valued Gaussian stochastic force: it is the specific environment under consideration and its dynamics that justify the stochastic approximation.
Having in mind the extension of our work to infinite degrees of freedom, the non-Abelian gauge theories [21] constitute the primary example of such a stochastic behavior.
In the present study, the task undertaken is, so to speak, "phenomenological": given the approximation (67) for the influence of the environment, we try to estimate the consequences for the central system.
In any case, the result (67) considerably facilitates the process of determining the time evolution of the reduced density matrix. The final result depends, of course, on the initial state of the central system, as well as on its specific dynamics. In what follows we shall consider the case in which the central system begins from its ground state. In such an occurrence we can use for ρ_s(0) an expression analogous to the one (cf. eq. (9)) used in the previous section for the environmental density matrix: Inserting the last expression into eq. (6) we immediately get, at the stochastic limit, the following path integral representation for the reduced density matrix: As expressed in the last equation, the result for the reduced density matrix is simple and compact. This is due to the complex parametrization of the paths under integration. To obtain the final result, the integration over the central degrees of freedom must be performed and, obviously, this is a task that cannot be exactly accomplished in the general sense: some kind of approximation is needed. In any case eq. (70) sets the scene where any available approximation technique can be applied. We can demonstrate the calculational abilities of our formalism by considering, once again, the zero-order approximation, i.e., the simple case in which the system is just one simple harmonic oscillator (we neglect any space index): It now suffices to observe that the contribution from the Feynman-Vernon action is quadratic, and consequently the dependence of the reduced density matrix on the boundary values x and x′ can be deduced just from the classical path: In the last equation the rhs must be read in terms of the stochastic limit (64). Thus we readily obtain: The last two terms appearing in the rhs of the previous relation cancel each other due to the quadratic nature of the truncated Feynman-Vernon action. Thus we conclude: The appearance of the classical trajectory in the last equation calls for the solution of the equation of motion (70). This is a lengthy but straightforward task, and it is presented in full detail in Appendix A. At this point it is enough to observe that the dependence of the classical solution on the boundary values x and x′ is easily determined using the quite general ansatz: In Appendix A we determine the coefficients in the above relations and confirm the validity of the relations δ(t) = α*(t) and γ(t) = β*(t), which are necessary for the hermiticity of the reduced density matrix. Inserting expressions (75) in eq. (74) we find that: The suppression of the off-diagonal terms in the representation (76) of the reduced density matrix is obviously related to the non-zero imaginary part of the function α(t), which in turn, as we confirm in Appendix A, is related to the non-vanishing imaginary part of the environmental correlations. The normalization factor in equation (76) is now determined by demanding: The explicit calculations presented in Appendix A show that α(t) + β(t) = 0 (78), yielding the conclusion that C = 1/L → 0, where L is the volume of the space in which the system lives. In this case the reduced density matrix reads: The explicit form of the function α(t) is presented in Appendix A. Here it suffices to note that α is a positive-definite increasing function of time. It is strictly related to the imaginary part of the environmental second-order correlator, since α ∝ σ.
Thus, the real factor of the density matrix (79) is formally the density matrix of a free particle in a heat bath of temperature k_B T = α/2 ∝ σ. The exact time dependence of the function α(t) is tied to the value of the quantity: If q^2 > 0, α(t) becomes time-independent for t|q| ≫ 1, and For q^2 = 0 and for (ω − λ/m)t ≫ 1, α(t) is again time-independent: If q^2 ≡ −k^2 < 0, and for kt ≫ 1, α remains an increasing function of time: The reduced density matrix is the crucial quantity for the physics of an open system, playing a key role in determining all the system's properties. As an interesting example, we shall focus on the entanglement entropy. The calculation of the entropy can be performed with the help of the so-called replica method [19]. To apply it, one introduces the quantity After calculating the function f(n) for integer n, we consider the function Using analytic continuation we can find the entanglement entropy from the relation Inserting eq. (79) into eq. (85) we get: Consider now the propagation of a free particle with mass m from the point x to the point x′ in the Euclidean time interval t_E = 2/α(t): Inserting the last expression into eq. (88) we find that: The last integral must be performed over periodic trajectories with period nt_E. Thus we can immediately conclude that: The entanglement entropy is now easily computed with the help of eq. (87): It is worth noting that, as is well known, the entanglement entropy S_ent ∼ ln L is not an extensive quantity: contrary to the thermal entropy, it is not proportional to the volume of the space in which the subsystem lives.
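The replica trick referred to above is standard; in the usual notation (with $\rho_s$ the reduced density matrix) it rests on the identity

$$ S_{ent} \;=\; -\operatorname{Tr}\!\big(\rho_s \ln \rho_s\big) \;=\; -\lim_{n \to 1} \frac{\partial}{\partial n} \operatorname{Tr}\rho_s^{\,n}, $$

so that one computes $\operatorname{Tr}\rho_s^{\,n}$ for integer $n$ (here, a Gaussian path integral over trajectories periodic with period $n t_E$) and analytically continues in $n$.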
Conclusions and Perspectives.
The purpose of this first work was to introduce and discuss the properties of a general formalism that can be applied in a variety of problems. We have confined ourselves only to a first (and in some sense trivial) application in order to demonstrate the underlying calculational machinery. In a forthcoming study we shall present the far more interesting case of the so-called quantum resonance. The general scene in such a problem is a double well embedded in a stochastic environment and in interaction with an external time-dependent field. The path integral description of the tunneling and the role of the "classical" solutions in the framework of CCT is a very interesting and far from trivial problem that is under investigation.
Appendix A
In this Appendix we shall determine the functions α(t) and β(t), beginning from the classical equation of motion. Due to its nonlocal character, the above equation must be examined independently in every segment of the contour C.
Along the line L_4 the classical equation takes the form: where we defined Along the lines L_3 and L_2 we have The last part of the classical equation refers to the line L_1: Seeking continuous and differentiable solutions of the above system of classical equations, we impose the following boundary conditions: The solutions y^(±) of the last equations are now trivially obtained, and they lead us immediately to the result (A.14) for x^(1)_cl. In the above expression we have written: which assumes the form: The coefficients in eqs. (A.14) and (A.15) can straightforwardly be obtained with the help of the boundary conditions (A.7) and (A.10); in particular, A_2(t) = [λ_+(t)/D(t)] (x − x′)/2 (A.24), where D(t) = λ_+(t)e^{−α_− t} − λ_−(t)e^{α_+ t} and D̃(t) = (α_+ − ω)e^{−α_− t} + (α_− + ω)e^{α_+ t} (A.27). At this point we are ready to confirm some of the claims presented in the main text. We must distinguish two cases. The first is when: In such a case α_± are real. Observing that λ_± = λ_±*, µ_± = −µ_±*, we immediately see that: In the second case, since λ_± and µ_± turn out to be the same as in the case (A.33), we verify once again the relations (A.34) and (A.35). | 2011-04-19T08:35:54.000Z | 2011-04-19T00:00:00.000 | {
"year": 2011,
"sha1": "d5a9480604029d13d2f7e7d8121dd546b0a6a799",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1104.3671.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d5a9480604029d13d2f7e7d8121dd546b0a6a799",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
]
} |
256886955 | pes2o/s2orc | v3-fos-license | Cyber Security Awareness (CSA) and Cyber Crime in Bangladesh: A Statistical Modeling Approach
The need to combat cybercrime is becoming more and more urgent. This is crucial for developing nations like Bangladesh, which is currently building out its infrastructure in preparation for fully secure digitization. This study aims to identify the numerous factors that contribute to cybercrime, its challenges, the relationships between different cyber security variables, potential solutions to these issues, and the various behavioral viewpoints individuals and organizations hold regarding cybercrime victimization. A simple random sampling method was used to collect data from 200 individuals on this topic. Factor analysis based on Principal Component Analysis (PCA) was fitted to the data to analyze cyber behaviour, a Binary Logistic Regression model was fitted to analyze cyber victimization status, and a Poisson Regression model was fitted to analyze victimization frequency. The research demonstrates that the dependent variable, cybercrime victimization, is strongly associated with the independent variables: password sharing status, use of a common password, cyber security knowledge status, online storage of personal information, downloading free antivirus software from unknown sources, disabling antivirus software while downloading, downloading digital media from unknown sources, clicking links on unauthorized sites, and sharing personal information with strangers online. According to the regression model's findings, women are more likely than men to experience cybercrime. Cybersecurity knowledge is found to be a key factor in preventing cyberattacks. Additional research on this subject can be conducted using large-scale data to reach more trustworthy conclusions on the underlying factors contributing to cybercrime victimization. Overall, a digital Bangladesh with robust cyber security can be built by learning about cybersecurity and practicing safe online behavior.
INTRODUCTION:
In the era of globalization, a secure cyberspace plays a significant role in achieving economic prosperity and building a modern, powerful nation. With the rapid spread of cyberspace and communication technology, cybercrimes have become a considerable security concern. With the progressive increase in the number of internet users in Bangladesh, the percentage of attacks is rising too. According to the Kaspersky Security Bulletin 2015, Bangladesh is in the second position in the level of infection among all the countries, with 69.55% of unique users at the highest risk of local virus infection. 80% of users are victims of spam attacks according to Trend Micro Global Spam
Literature Review
The issue of cyberattacks, which has emerged as one of the most crucial aspects of the Internet of Things (IoT), was discussed in a publication by Ulven & Wangen, (2021). By safeguarding IoT assets and user privacy, IoT cybersecurity aims to lower cybersecurity risk for businesses and consumers. The authors of the paper described theoretical vulnerabilities faced by the IoT, major security issues, and necessary steps for the protection of cyber security and the IoT (Abomhara & Køien, 2015). Ramirez & Choucri, (2016) researched the 21st-century trends of cyberization and the rising demand for computer security. A recent increase in new technology investment has coincided with an increase in cybercrime, digital currency, and e-governance. Businesses and governments are starting to focus their attention on all-encompassing cybersecurity solutions. Using an integrated approach, Ali et al. (2022) investigated the causes of IT system failure in Bangladesh's banking sector. Cyberattacks, database hacks, server failures, network outages, broadcast data mistakes, virus impacts, etc. were the reported factors. These factors were then examined to facilitate managers' critical decision-making. For a few Indian public and private sector banks, Atul et al. (2013) exposed the numerous cyberattack techniques used by cybercriminals as well as the various cyber defense strategies and how they relate to cyberattacks. According to the report, 60% of bank executives acknowledged that their bank had experienced internet theft. Scholars examined the cyber threat posed to smart cities and assessed, exposed, and evaluated the advancement of data-driven solutions for situational awareness. The authors assessed attack detection approaches, risk assessment methodologies, and ways of modeling relationships across different smart city infrastructures (Neshenko et al., 2020). Chen et al. (2015) did an exploratory study using the flux-fluctuation law, the Markov state transition probability matrix (TPM), and predictability measurements to look for patterns and predictability in cyberattacks. Unsurprisingly, they discovered the fundamental pattern of cyberattacks and found that just a small number of attacker groups were responsible for practically all the attacks. A comparative analysis of twenty nations' national cyber security strategies was conducted by Shafqat & Masood, (2016). The timeframe of development, clearly stated objectives and goals, degree of prioritization, nations' perceptions of cyber threats, organizational overview, incident response capabilities, etc. were used as comparative criteria. It was discovered that while the purposes and objectives of all the strategies were quite similar, their scopes and methods were very dissimilar. Additionally, the UK, USA, and Germany had the best strategies overall. Maalem Lahcen et al. (2020) reviewed pertinent theories and ideas and offered insights, as well as a framework that integrates modeling and simulation, behavioral cybersecurity, and human factors, emphasizing the significance of social behavior, environment, biases, perceptions, deterrence, intent, attitude, norms, alternatives, punishments, decision-making, etc. in comprehending cybercrimes. Matyokurehwa et al. (2020) studied Cyber Security Awareness (CSA) perspectives among students at Zimbabwean universities to build a model of the effectiveness of cyber security training programs. They carried out statistical analyses on their primary data to find any significant relationship between cyberattacks and CSA.
They found that malware attacks, social engineering attacks and IoT attacks are positively related to CSA. In addition, they developed a cross-case analysis which showed that CSA is invariant with respect to age and sex, while CSA varies noticeably with the level of education and institution. Alqahtani (2022) launched a study on the factors behind cybersecurity awareness among students in higher education. Based on CSA data taken from Imam Abdulrahman Bin Faisal University college students, he analyzed and created a module to make the students aware of cybersecurity. Many relevant statistical analyses, including ANOVA, multiple regression, a correlation test and a multicollinearity test, were carried out considering password security, browser security, and social media security as the three main variables. All three security components were found to be significantly influential on cybersecurity awareness. Kovacevic et al.
(2020) explored how cyber security behavior is impacted by cyber security awareness. The study defined socio-demographics, cyber security perceptions, previous cyber security breaches, IT usage, and knowledge as CSA factors. Through correlation and regression analysis, knowledge and IT usage were found to be significant factors in cyber security behavior. Ben-Asher & Gonzalez, (2015) inquired into how knowledge plays a role in the accurate classification of malicious events and prevents damage from cyber-attacks. They evaluated the impact of cyber security knowledge on the detection of cyber-attacks. A reliable tool for detection is an Intrusion Detection System (IDS), which detects attacks by matching known attack patterns against network events. But 99% of the alerts from an IDS are false alerts, so a human analyst is required for triage analysis (monitoring and detection), and more knowledge about cyber security significantly helps in the correct detection of malicious events and decreases false classification. Haque, (2019) studied public opinion on the cyber security condition of Bangladesh. He found that 78.4% of internet users thought the condition to be vulnerable. He also referred to some cyber threats and recent cyber-attacks, especially in the financial sector of our country. Describing the deficiency of awareness in this sector, the paper further discussed some necessary policies regarding cyber security. Mazumder & Hossain, (2022) looked for a connection between board composition and disclosure of cyber security in Bangladesh's banking industry. Multiple linear regression analysis and automated content analysis were employed in the study. Throughout the research period (2014-2020), the cyber security disclosure trend in Bangladesh's banking sector was upward. According to the data, larger boards do not substantially affect CSD, whereas increased female involvement is linked to higher CSD. Kundu et al. (2018) analyzed cyber-attacks in the monetary sector of Bangladesh and investigated their causes. As they found an increasing trend of cyber-attacks, they suggested an available framework against cybercrime in this paper. Hadlington, (2017) surveyed attitudes towards cybercrime and cyber security at the business scale, Internet addiction, and risky cyber security behaviors. Through regression analysis, the research showed that employee attitudes towards cyber security correlated negatively with the extent to which they engaged in risky cyber security behavior; reliance on self-reporting is a limitation of the study.
The research highlights that employee attitudes and knowledge can play a vital role in cyber security. Astromskis (2017) developed a conceptual cyber security regulation framework based on the fundamentals of transaction cost theory. The study evaluated it in the context of emerging legal technologies. Bowen et al. (2011) conducted an experiment on 4000 randomly selected students and staff using forged phishing emails to investigate a new method to measure, quantify and evaluate the security state of large corporate organizations and government agencies. According to them, computer security depends on the people who operate the system, aside from technology and systems. Nifakos et al.
(2021) conducted a review study to find out the factors causing cyber-attacks in the healthcare sector. They analyzed and reported human behavioral causes of cyber threats in health organizations. They also researched the possible policies and measures which could be taken by healthcare-providing organizations. In order to understand the mechanics of cyber-attack campaigns, Lallie et al. (2021) examined the cyberattacks that occurred during the COVID-19 pandemic.
Additionally, it showed how cybercriminals use actual crises and tragedies as cover for opportunistic assaults. Finally, the effects of these attacks on persons who work from home were explored, along with some future planning ideas. Sardi et al. (2020) gave special emphasis to one of the main challenges in the healthcare sector during the COVID-19 pandemic: cyber risk. Since the beginning of the COVID-19 pandemic, the World Health Organization has detected a dramatic increase in the number of cyber-attacks. Information security and cyber security are two different concepts, according to Von Solms & Van Niekerk, (2013). They contended that the two are not quite interchangeable or similar. The safeguarding of information assets is known as information security, whereas cyber security is the defense of the internet's physical infrastructure, its users, and the assets that can be accessed through it. Consequently, cyber security has a further component. Staheli et al. (2014) surveyed and categorized the visualization evaluation metrics, components and techniques for cyber security that were utilized in the previous decade of VizSec (a research community that focuses on visualization for cyber security) research literature. They also identified existing methodological gaps in evaluating visualization in cyber security and suggested potential avenues for future research. Švábenský et al.
(2020) studied the fact that cybersecurity is now more important than ever, and so is education in this field. However, the cybersecurity domain encompasses an extensive set of concepts, which can be taught in different ways and contexts; their aim was to understand the state of the art of cybersecurity education and related research. Klimburg et al. (2011) outlined a cyberstrategy that provided the stance of the United States of America (USA) on cyber-related issues and outlined a unified approach to the USA's engagement with other countries on cyber issues. They analyzed technologies that might be used to protect the cyber environment and organizations' and users' assets. Becker & Quille, (2019) studied cybersecurity issues that need to be integrated into the educational process beginning at an early age (Mia et al., 2022).
This study focused on emerging cyber security trends in the adoption of new technologies such as mobile computing, cloud computing, e-commerce, and social networking. The paper also described the challenges arising from the lack of coordination between security agencies and critical IT infrastructure. Lebek et al. (2014) provided an overview of theories used in the field of employees' information systems (IS) security behavior by analyzing and synthesizing previous literature.
Data Collection and Processing
Questionnaires were used as the data collection tool for this cross-sectional study. Both personal interviews and mailed questionnaires through Google Forms were used for this purpose. Internet users older than 16 years were the target population of this study. Simple random sampling was adopted in collecting data from individuals. For large samples, the formula for estimating sample size through simple random sampling is n = Z²pq/d². Here, in this study, p, the assumed proportion in the target population, = 0.50; q = 1 − p = 0.50; d, the degree of accuracy expected in the estimate, = 0.07; and Z, the standard normal deviate, = 1.96. Accordingly, 200 responses from Dhaka city were gathered for the study. Data were analyzed using SPSS software.
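Plugging the stated values into this (Cochran-type) formula reproduces the sample size used:

$$ n \;=\; \frac{Z^{2}\,p\,q}{d^{2}} \;=\; \frac{(1.96)^{2}\,(0.50)(0.50)}{(0.07)^{2}} \;=\; \frac{0.9604}{0.0049} \;=\; 196 \;\approx\; 200. $$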
Principal Component Analysis
By turning a set of values of correlated variables into a set of values of linearly uncorrelated variables, PCA is used to reduce the number of dimensions. Old dimensions are transformed into new dimensions. Since most of the information is contained in the first few new dimensions, it is acceptable to eliminate the other dimensions, which contain less information/variance, and keep only the most significant ones; this results in dimensionality reduction. In this study, an orthogonal transformation is used in the variance reduction.
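As an illustrative sketch only (the authors worked in SPSS, and, as noted later, classical PCA is strictly appropriate for continuous variables), the same component extraction with the Kaiser eigenvalue-greater-than-one rule could look as follows in Python; the file and column names are hypothetical:

```python
# Classical PCA on standardized survey items; the Kaiser criterion keeps
# components whose eigenvalue exceeds 1.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("csa_survey_items.csv")     # hypothetical coded responses
X = StandardScaler().fit_transform(df)       # standardize before PCA

pca = PCA().fit(X)                           # orthogonal transformation
eigenvalues = pca.explained_variance_
n_keep = int((eigenvalues > 1).sum())        # components with eigenvalue > 1
print(n_keep, pca.explained_variance_ratio_[:n_keep].sum())
```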
Binary Logistic Regression Model
Let us define the binary random variable

$$ Z = \begin{cases} 1 & \text{if the outcome is a success} \\ 0 & \text{if the outcome is a failure} \end{cases} $$

with probabilities Pr(Z = 1) = π and Pr(Z = 0) = 1 − π, which is the Bernoulli distribution B(π). If there are n such random variables Z_1, ..., Z_n, which are independent with Pr(Z_j = 1) = π_j, their joint probability is a member of the exponential family.
Next, for the case where the π_j's are all equal, we can define Y = Z_1 + ... + Z_n, so that Y is the number of successes in n "trials." The random variable Y has the distribution Bin(n, π):

$$ \Pr(Y = y) = \binom{n}{y}\,\pi^{y}\,(1-\pi)^{n-y}, \qquad y = 0, 1, \dots, n. $$
For the i-th random variable Y_i, E(Y_i) = n_i π_i is the expected number of successes. We can allow π_i to depend on x_i (a vector of explanatory variables) via a link function g(π_i) = x_i^T β, where β is a vector of parameters. Finally, we consider the general case of N independent random variables Y_1, Y_2, ..., Y_N corresponding to the numbers of successes in N different subgroups or strata. If Y_i ∼ Bin(n_i, π_i), the log-likelihood function is

$$ l(\pi_1,\dots,\pi_N;\, y_1,\dots,y_N) \;=\; \sum_{i=1}^{N}\left[\, y_i \log\!\left(\frac{\pi_i}{1-\pi_i}\right) + n_i \log(1-\pi_i) + \log\binom{n_i}{y_i} \right]. $$

The parameter vector β can be estimated using numerical methods. Finally, the model can be written as

$$ \log\!\left(\frac{\pi}{1-\pi}\right) \;=\; x^{T}\beta . $$
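A minimal sketch of fitting this model to the survey data, assuming hypothetical column names, could use statsmodels; exponentiating the coefficients gives odds ratios of the kind reported in Table 6:

```python
# Binary logistic regression for victimization status (1 = victim).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("csa_survey.csv")                  # hypothetical data file
X = sm.add_constant(df[["common_password", "stores_info_online",
                        "shares_info_with_strangers"]])
y = df["victimized"]

logit = sm.Logit(y, X).fit()
print(logit.summary())
print(np.exp(logit.params))     # odds ratios: e^beta for each covariate
```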
Poisson Regression Model
Let us consider Y_1, ..., Y_N to be independent random variables, with Y_i denoting the number of events observed from exposure n_i for the i-th covariate pattern. The expected value of Y_i can be written as E(Y_i) = µ_i = n_i θ_i, where θ_i is the event rate.
The dependence of θ_i on the explanatory variables is usually modelled by θ_i = e^{x_i^T β}.
The natural link function for the Poisson distribution, the logarithmic function, yields the linear component

$$ \log \mu_i \;=\; \log n_i + x_i^{T}\beta . $$

For a binary explanatory variable denoted by an indicator variable, x_j = 0 if the factor is absent and x_j = 1 if it is present, the rate ratio, RR, for presence vs. absence is RR = e^{β_j}. When the response variable is overdispersed, a more sophisticated model such as the Negative Binomial regression model can be used.
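A hedged sketch of this count model (with hypothetical column names again); if overdispersion is detected, the Poisson family can be swapped for a negative binomial one:

```python
# Poisson regression for victimization frequency; rate ratios are e^beta.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("csa_survey.csv")                  # hypothetical data file
X = sm.add_constant(df[["cyber_knowledge", "shares_password"]])
y = df["n_victimizations"]                          # count response

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(np.exp(pois.params))                          # rate ratios (RR)

# If y is overdispersed, a negative binomial family is one alternative:
# nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
```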
RESULTS AND DISCUSSIONS:
Factor analysis using principal component analysis on cyber behavior
The goal of traditional principal component analysis is to reduce a number of m variables to a smaller number of p uncorrelated variables, known as principal components, which account for as much of the variance of the data as possible. Since PCA is suitable for continuous variables and assumes a linear relationship between variables, it is not an appropriate method for dimension reduction of categorical variables. Alternatively, categorical principal component analysis (CATPCA) has been developed for data having mixed measurements, such as nominal, ordinal, or numeric, which may not have linear relationships with each other. We refer to Gifi, (1990) for a historical review of CATPCA using optimal scaling. We compute Bartlett's test for sphericity and find the Kaiser-Meyer-Olkin measure of sampling adequacy before proceeding to factor analysis. Here, the Kaiser-Meyer-Olkin measure is .751, which indicates the dataset is valid for factor analysis. Bartlett's test for sphericity tests the hypothesis that the correlation matrix is an identity matrix, which would mean the variables are unrelated. For our data, we have a p-value of .000 for Bartlett's test for sphericity. Therefore, we have enough evidence to conclude that factor analysis is useful for the data. Now we can proceed with the factor analysis of our dataset. The initial values of the communalities are set to 1. The highest extracted value, for the variable "Sharing password", is .693, indicating that 69.3% of the variation in "Sharing password" is explained by the principal factors. 65.6% of the variation in "Same password multiple use" is explained by the principal components. The least explained variable is "Insecure payment info online storage", which has an extraction value of about .294. As all values here are greater than .25, the communalities are acceptable (Table 2). The scree plot shows that the eigenvalues drop somewhat rapidly from components one to four. As 4 components have eigenvalues above one, four components are selected. The variables that are most strongly correlated with each component are selected in Table 5 from the rotated factor matrix (Table 4).
We assume 0.5 as a threshold value and select the variables for each principal component accordingly. As the factors cannot explain more than 60% of the total variance, we may fit our statistical models with the individual variables.
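These adequacy checks and the rotated four-factor solution can be reproduced, as a sketch, with the factor_analyzer Python package (assumed available; the input file is a placeholder):

```python
# KMO and Bartlett's sphericity checks, then a varimax-rotated
# four-factor solution; loadings above 0.5 mark the selected variables.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

df = pd.read_csv("cyber_behaviour_items.csv")   # hypothetical item responses

chi2, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)
print(kmo_total, p_value)        # the paper reports KMO = .751, p = .000

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(df)
print(fa.loadings_)              # keep loadings > 0.5 per component
```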
Fitting Binary Logistic Regression Model to assess victimization status
In Table 6, the odds ratio is discussed to show the effect of the covariates on victimization status. The odds ratio describes the odds that an event occurs given that a particular exposure is present, compared to the odds that the event occurs given that the exposure is absent. Controlling for all other variables in the model, cybercrime victimization is 3.028 times more likely for those who use a common password than for those who do not (p-value = .012). Also controlling for every other variable, the odds of cybercrime victimization are 2.526 times higher when a person shifts from not storing personal data online to storing it online (p-value = .034). For persons leaving payment information on websites with no clear security, compared to those who do not, the odds of victimization are significantly lower, by 66.3% (p-value = .02). However, this seems illogical and may be an artifact of our sample data. Having the habit of disabling antivirus software while downloading significantly increases the victimization odds by approximately 3 times (p-value = .014). The practice of downloading digital media from unknown sources significantly raises the victimization odds by 2.398 times (p-value = .041). The likelihood of cybercrime victimization when a person shares personal information with strangers over the internet is 4.422 times greater than that of their counterparts. The p-value here is .002, which means the factor is highly significant at the 5% significance level. The significant outcomes support research hypothesis I. All the other covariates are statistically insignificant at the 5% level of significance.
CONCLUSION AND RECOMMENDATIONS:
Cyber facilities have brought a wave of change to our modern life. The purpose of the study is to examine the behavior of cybercrime victimization, knowledge of cyber security, and causes of cybercrime victimization, and to find some possible solutions and recommendations for this problem. The most common sorts of cybercrime are found to be hacking, identity fraud, phishing, monetary loss, computer viruses and so on. The research demonstrates that the dependent variable, cybercrime victimization, is strongly associated with the independent variables: password sharing status, use of a common password, cyber security knowledge status, online storage of personal information, downloading free antivirus software from unknown sources, disabling antivirus software while downloading, downloading digital media from unknown sources, clicking links on unauthorized sites, and sharing personal information with strangers online. However, not all other variables have a significant impact on cybercrime victimization. According to the regression model's findings, women are more likely than men to experience cybercrime. It is also evident from the views of the respondents that women are not very protected online. The study also contributes some important opinions on cybercrime in the industrial sector. 69.5% of respondents strongly agree that management has the responsibility to ensure a company is protected from cybercrime. 65.2% of respondents strongly agree that everyone in the company has a role to play in protecting against threats from cyber criminals. 56.52% of respondents agree that they do not have the right skills to be able to protect the organization from cybercrime. 52.1% of respondents agree that the police cannot deal with cybercrime effectively.
39.13% of respondents were neutral on whether reporting a cyber-attack to the police might damage the reputation of the company. The economic and digital development of the world, along with our country, is proceeding at a rapid pace, and cyber security is playing a vital role in these sectors. So, after conducting the study and recognizing the reasons for cybercrime, we recommend the following suggestions.
1) The Govt. should initiate cyber training programs.
2) The prevailing law on cybercrime should be implemented.
3) Strict cyber laws should be imposed.
4) More and more seminars should be arranged to raise awareness among people.
5) Outdated software cannot protect a device from cyber-attacks, so users should keep the software on their devices up to date.
6) For cyber security, passwords are an exigent object; to avoid hacking, users should use strong and unique passwords.
7) Users should back up their data and review their online accounts regularly.
8) Unauthorized and unknown sites contain viruses, so downloading any content from unknown sources should be avoided.
9) There is a high risk of identity theft, the creation of fake accounts, and harassment when sharing personal information; therefore, sharing personal information with anyone should be avoided. | 2023-02-16T16:11:17.389Z | 2023-02-09T00:00:00.000 | {
"year": 2023,
"sha1": "bd305cd002c856f24d9c415d57d68b35f99f4a77",
"oa_license": "CCBY",
"oa_url": "https://universepg.com/public/storage/journal-pdf/Cyber%20security%20awareness%20(CSA)%20and%20cyber%20crime%20in%20Bangladesh_a%20statistical%20modeling%20approach.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "710e3be376b4e15d098bdc62a390f0fae58fc68e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
54859527 | pes2o/s2orc | v3-fos-license | Journal of Molecular Biomarkers & Diagnosis
Description
Lung cancer is the leading cause of cancer-related deaths in the general population [1]. Early diagnosis of a malignant pulmonary nodule can improve the 5-year survival rate of lung cancer by up to 80% [2]. There is an increase in incidentally detected pulmonary nodules with the increased usage of diagnostic imaging modalities, especially computed tomography (CT) of the chest. Most often, physicians and trainee doctors have to depend on experienced radiologists to confidently label these nodules as benign or malignant, raising a need for a method which could help them in self-learning and could also assist radiologists in ruling out malignancy with good certainty and confidence.
To fulfill these requirements, there has been extensive research on Computer-Aided Diagnosis (CAD) for the characterization of lung nodules. Content-Based Image Retrieval (CBIR) is a type of CAD tool which involves two main steps: feature extraction and image retrieval. Several features are used by a radiologist to characterize a nodule, i.e., texture, shape, size, density and margins. CBIR uses these image features to build a search index. When given a query image, it uses similarity metrics to retrieve similar objects from a database. Hence, CBIR can act as a learning tool and also assist radiologists in the diagnosis of lung cancer by showing them examples of similar nodules from a pre-stored database of proven cases.
The clinical relevance of CBIR was elaborated initially by Muller et al. [3], who emphasized its usefulness in clinical decision-making, medical research and medical education. This has motivated various researchers to work on CBIR. The principal objective of researchers is to develop algorithms that retrieve similar images to aid in diagnosis. There are several CBIR research projects underway in the medical field, with a few of them focusing exclusively on lung nodules. A CBIR system named BRISC (an acronym for BRISC Really IS Cool) was developed by Lam et al. [4], in which nodules were segmented using boundary information. Different texture features were extracted from each CT image. For a query nodule, other nodules were retrieved from a database. Retrieved nodules were considered to be relevant if they were different slices of the same query nodule. They were also considered to be relevant if they were the same slice of the query nodule, evaluated by a different radiologist. This system did not help in differential diagnosis or self-learning. However, it set the way forward for further work in this context.
A CBIR system was developed by Seitz et al. [5], in which 64 visual features were extracted, including features of texture, size, shape and intensity. The Euclidean distance was used for measuring similarity. However, they used a manual technique for the segmentation of nodules, which was quite time-consuming.
Kuruvilla et al. [6] also used a CBIR system, in which CT examinations with similar nodules were retrieved according to the parameters that calculate the accuracy of the neural network algorithm. The similarity metrics used were Euclidean distance, Manhattan distance, Chebyshev distance, Tversky distance, Bray-Curtis distance, Canberra distance, city block distance, squared chord distance and chi-squared distance. Lucena et al. [7] used a weighted Euclidean distance (WED) with weight adjustments to improve the precision of CBIR retrieval compared to systems using the plain Euclidean distance. Using WED, precision increased on average by 17.3%.
Very recently, Dhara et al. [8] developed a CBIR-based CAD system where lung nodules were segmented using a semi-automatic technique, followed by annotation and ground truth delineation for features viz. size, shape, margins, texture etc. in the nodules. They proposed a rank of malignancy on a scale of 1-5, which was correlated with biopsy results, and created a benchmark ground truth database. After creating the database, the CBIR-based CAD was developed and later validated on the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) databases. The radiologist just provided a seed point on the query nodule. This resulted in automatic retrieval of the top 5 similar nodules, by comparing features of the query nodule with nodules from the database. The retrieval system used 2D shape-based, 3D shape-based, 2D texture-based, 3D texture-based and margin-based features of nodules. The similarity metrics used to retrieve and rank nodules were Euclidean, Manhattan and Chebyshev distances. In this CBIR-based CAD system, the retrieved nodules were ranked and placed in descending order of similarity with the query nodule, along with their class label. The class labels determined a Decision Index (DI). A higher DI meant a higher likelihood that the query nodule was malignant.
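A minimal sketch of such a retrieval step is shown below; it is illustrative only, with hypothetical file names and feature weights, and the decision index here (the fraction of malignant nodules among the retrieved neighbours) is an assumed simplification rather than the exact formula used by Dhara et al.:

```python
# Rank database nodules by (weighted) Euclidean distance to the query
# feature vector; derive a simple decision index from the top-5 labels.
import numpy as np

def weighted_euclidean(q, x, w):
    """Weighted Euclidean distance between feature vectors q and x."""
    return np.sqrt(np.sum(w * (q - x) ** 2))

features = np.load("nodule_features.npy")    # hypothetical (N, d) database
labels = np.load("nodule_labels.npy")        # 1 = malignant, 0 = benign
weights = np.ones(features.shape[1])         # uniform weights = plain Euclidean

query, db, db_labels = features[0], features[1:], labels[1:]
dists = np.array([weighted_euclidean(query, x, weights) for x in db])
top5 = np.argsort(dists)[:5]                 # five most similar nodules

decision_index = db_labels[top5].mean()      # higher DI -> more likely malignant
print(top5, decision_index)
```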
Conclusion
In conclusion, a CBIR-based CAD system is a good self-learning tool, can assist trainee radiologists in determining the malignancy status of a nodule, and can be used for a second opinion even by experienced radiologists. However, more collaborative research is required to improve CBIR-based CAD systems, particularly for the automated segmentation of lung nodules, for improvement of the feature set and for improving retrieval strategies. | 2019-03-17T13:10:05.489Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "e259eb703800aecd9f067c1109f3aeb0985a3dd2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2155-9929.s2-033",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c2d88ed346e8064cb50584cee5999fb01686b83e",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208005309 | pes2o/s2orc | v3-fos-license | DIFFERENTIAL DIAGNOSTICS OF ASEPTIC AND SEPTIC LOOSENING OF THE CUP OF THE ENDOPROSTHESIS OF THE ARTIFICIAL HIP JOINT BY THE METHODS OF POLARISATION TOMOGRAPHY
The manuscript contains a structural-logical scheme and an analytical description of the differential diagnosis of aseptic and septic loosening of the artificial hip joint endoprosthesis using the methods of differential Mueller-matrix mapping of circular birefringence (CB) distributions of polycrystalline synovial fluid (SF) films, together with the results of determining the sensitivity, specificity and accuracy of the wavelet analysis method applied to the differential Mueller-matrix maps of CB values of polycrystalline SF films of patients from the control group and from groups with different severity of hip joint pathology.
Introduction
Methods of laser polarimetry are among the most important in the development of the latest introscopy systems for the polycrystalline structure of biological layers. The main pivot of such techniques is Mueller-matrix polarimetry (MMP) [9,11,15]. This optical technology provides the most complete information about the optically anisotropic properties of biological tissues.
This manuscript contains structural-logical schemes and analytical descriptions of the differential diagnosis of aseptic and septic loosening of the artificial hip joint endoprosthesis using the methods of differential Mueller-matrix mapping of circular birefringence (CB) distributions of polycrystalline synovial fluid (SF) films, and the results of determining the sensitivity, specificity and accuracy of the wavelet analysis method of differential Mueller-matrix mapping of the distributions of CB values of polycrystalline SF films of patients from the control group and groups with different severity of hip joint pathology [16,17].
Table 1. Structural-logical scheme of differential Mueller-matrix tomography of polycrystalline SF films in the differential diagnosis of aseptic and septic loosening of the endoprosthesis cup of an artificial hip joint:
- Polycrystalline films of synovial fluid (SF);
- Differential Mueller-matrix mapping of polycrystalline SF films: CB maps, LB maps;
- Statistical and correlation analysis: mean values and fluctuations of the magnitude of the statistical moments of the 1st-4th orders characterizing the distribution of the CB value of the SF samples;
- Information analysis of the Mueller-matrix polarization tomography method for the polycrystalline structure of SF films: sensitivity Se, specificity Sp, accuracy Ac;
- Statistical analysis of the amplitude distributions of the wavelet coefficients of the CB and LB maps: mean values and fluctuations of the magnitude of statistical moments of the 1st-4th orders, as well as the dispersion and sharpness of the peak of the autocorrelation functions within the set of CB maps of the polycrystalline structure of SF samples;
- Information analysis of the Mueller-matrix polarization tomography method of the polycrystalline structure of SF films: sensitivity Se, specificity Sp, accuracy Ac;
- Criteria for the differential diagnostics method of Mueller-matrix polarization tomography of polycrystalline SF structures.
Differential diagnostics of aseptic and septic loosening of the endoprosthesis cup using the Mueller-matrix reconstruction
This part of the research contains materials on the experimental implementation of polarisation reproduction, with a comprehensive statistical and correlation analysis of the coordinate distributions of the magnitude of circular birefringence of polycrystalline films of synovial fluid of the hip joint of patients from the control group 1 and research groups 2 and 3 (section 3) [1,16,18].
An experimental method for determining the coordinate distributions of the magnitude of the CB of samples of SF polycrystalline films is presented in section 2 (clause 2, paragraph 2).
The series of fragments in Figs. 1-3 presents the CB maps (left parts), the coordinate distributions of the CB magnitude (right parts) and the autocorrelation functions of the CB maps (lower parts), determined for the polycrystalline SF films of the hip joint of patients from group 1 (Fig. 1), group 2 (Fig. 2) and group 3 (Fig. 3) [2,8,20].
Comparison of the results of Mueller-matrix tomography (Fig. 2-4) of the parameters of the optical anisotropy of polycrystalline SF films of the hip joint of patients of all groups revealed [14,19,23]:
- an individual topographic structure of the coordinate distributions of the CB magnitude for each group of samples (left parts of Fig. 1-3);
- a significant range of coordinate-non-uniform change in the magnitude of circular birefringence (right parts of Fig. 1-3);
- a complex and asymmetric structure, for each of the groups, of the distributions of the eigenvalues of the autocorrelation functions (lower parts of Fig. 1-3) [3,10,13].
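For orientation, the statistical moments of the 1st-4th orders and the autocorrelation function of a CB map can be computed as in the sketch below; the input file is a hypothetical placeholder and this is not the authors' exact processing pipeline:

```python
# Statistical moments (1st-4th orders) and a normalized autocorrelation
# of one linear section of a circular-birefringence (CB) map.
import numpy as np
from scipy import stats

cb_map = np.load("cb_map.npy")               # hypothetical 2-D CB map
values = cb_map.ravel()

moments = {
    "M1 (mean)": values.mean(),
    "M2 (variance)": values.var(),
    "M3 (skewness)": stats.skew(values),
    "M4 (kurtosis)": stats.kurtosis(values),  # sharpness of the peak
}
print(moments)

row = cb_map[cb_map.shape[0] // 2]
row = row - row.mean()
acf = np.correlate(row, row, mode="full")
acf = acf[acf.size // 2:] / acf[acf.size // 2]   # normalized autocorrelation
```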
Information analysis of the Mueller-matrix data reconstruction
This section contains the results of determining the strength of the Mueller-matrix method of reproducing the distributions of the magnitude of the CB of SF polycrystalline films, by calculating the magnitude of a set of operational characteristics (the sensitivity, specificity and accuracy of the polarization tomography technique) for all three groups of patients [12,21,22]. From the analysis of the operational characteristics of the method of polarisation reproduction of the CB distributions given in Table 2, it follows that the range of variation of the specificity of the polarisation reproduction of the CB distributions is 80% ≤ Sp ≤ 88%.
The overall level of the operational characteristics of the polarisation tomography technique (sensitivity Se, specificity Sp and balanced accuracy Ac) reaches 90% [4,7].
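As a minimal sketch, these operational characteristics follow directly from a confusion matrix of true and predicted group labels (the arrays below are placeholders):

```python
# Sensitivity, specificity and balanced accuracy from binary labels.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0])   # 1 = pathology, 0 = control
y_pred = np.array([1, 0, 0, 0, 1, 1])   # diagnostic decision

tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))

se = tp / (tp + fn)        # sensitivity
sp = tn / (tn + fp)        # specificity
ac = (se + sp) / 2         # balanced accuracy
print(se, sp, ac)
```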
Wavelet analysis of CB maps of polycrystalline SF films
The series of fragments in Figs. 4-6 shows the wavelet coefficient maps of the CB distributions (upper parts) and their linear sections at the MHAT (Mexican hat) wavelet scale of 15 (lower parts). Quantitatively, the changes in the distributions of the CB value caused by the optical activity of polycrystalline SF films are illustrated by the distributions (mean and variance) of the amplitudes of the wavelet coefficients of the circular birefringence maps of SF samples of the hip joint of patients of all groups, which are presented in Table 2 [5,6].
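A hedged sketch of such an MHAT wavelet decomposition of one linear section of a CB map, using PyWavelets; the scale range and file name are assumptions, with scale 15 singled out as in the figures:

```python
# Continuous wavelet transform of a CB-map cross-section with the
# Mexican hat (MHAT) wavelet; extract amplitudes at scale 15.
import numpy as np
import pywt

cb_map = np.load("cb_map.npy")              # hypothetical 2-D CB map
section = cb_map[cb_map.shape[0] // 2]      # one linear cross-section

scales = np.arange(1, 31)
coeffs, freqs = pywt.cwt(section, scales, "mexh")   # MHAT wavelet
amp_scale_15 = np.abs(coeffs[14])           # coefficient row for scale a = 15
print(amp_scale_15.mean(), amp_scale_15.var())      # mean and dispersion
```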
Information analysis of the wavelet analysis method for the distributions of CB values
The data of the information analysis of the strength of the wavelet analysis method of Mueller-matrix tomography of the distributions of CB values are presented in Table 3. Thus, there is an increase in the overall level of the operational characteristics of the polarisation tomography technique (sensitivity Se, specificity Sp and balanced accuracy Ac) for the distributions of circular birefringence of polycrystalline SF films, to 87%-92%.
"year": 2019,
"sha1": "a49a69446b331bf91789382800989d81abf0a97c",
"oa_license": "CCBYSA",
"oa_url": "https://ph.pollub.pl/index.php/iapgos/article/download/237/119",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "86230b8d7a676001058a51fd3759ed87c3b8ca6e",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
90310850 | pes2o/s2orc | v3-fos-license | Agriculture and livestock impacts on river floodplain wetlands : a study case from the lower Uruguay river
1 Instituto de Botánica Darwinion, Labardén 200, Casilla de Correo 22, B1642HYD, San Isidro, Buenos Aires, Argentina. mjuliabena@gmail.com
2 Reserva Ecológica Costanera Sur, Gobierno de la Ciudad de Buenos Aires, Buenos Aires, Argentina
3 Instituto de Ecología, Genética y Evolución, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón II, 4to piso, Ciudad Universitaria, C1428EHA, Buenos Aires, Argentina
Introduction
Wetland ecosystems are considered very valuable environments, not only because of the biodiversity they harbor, but also because of their economic importance. However, a large portion of wetlands has suffered, and continues to suffer, heavy exploitation and transformation processes. Several human activities, such as the use of dikes and embankments, artificial drainage, deforestation and livestock grazing, affect the biodiversity of these ecosystems (Junk et al. 2013, Wang et al. 2011). Many riparian species have specific hydrological requirements; therefore, changes in water flow magnitude, duration, timing, frequency, and predictability may alter the plant community composition (Neiff et al. 2011). Also, modified environmental conditions, whether through climate change, eutrophication, and/or river regulation, might facilitate the spread of invasive species (Catford et al. 2011).
Over the last few decades, the advance of the agricultural frontier has had serious consequences for these ecosystems. Among them is cattle displacement towards wetlands, since these have traditionally been considered marginal areas for cultivation. Livestock farming has replaced an extensive seasonal approach with intensive permanent farming practices (Belloso 2008). In some regions, the construction of drainage and diking systems to prevent flooding interrupts the transfer of new materials from rivers to their floodplains, disrupting the natural cycle of the ecosystem. This can lead to floristic change and modifications in plant species cover that are associated with alterations in the hydrologic cycle (Kalesnik et al. 2015). At the regional level, these practices can result in a decrease in the capacity to buffer excess water and a consequent change in plant structure (Bó et al. 2010). Furthermore, the presence of livestock and grazing induces changes in species richness, diversity, and dominance (Haretche & Rodríguez 2006). An intensification of grazing can reduce plant biomass by increasing erosion or salinity levels and decrease ecosystem productivity (Morris & Reich 2013).
When all land masses are considered, the South American wetlands are the most extensive in the biosphere. Drainage basins of large rivers represent the biggest area occupied by these environments, and over 80% can be found in warm climates (Baigún et al. 2008). Among these, the Uruguay River basin is one of the main wetland systems in South America. The southern portion of this basin is located near a corridor that exhibits great human intervention. Therefore, preserving this part of the river floodplain would be important for maintaining vital ecological functions, such as the biodiversity and gene pool of rare species from these systems. The lower Uruguay River is surrounded by large wetlands whose flooding regime is evidenced by the presence of several plant species with high resistance to the changing conditions that dominate the area. However, few studies have focused on the distribution and structure of these plant communities and the impact of different alterations in the area (Di Persia & Neiff 1986).
When considering the importance of wetlands as a source of water supply for human settlements and for the development of several ecological processes (Keddy et al. 2009, Mitch & Gosselink 2007), it is crucial to understand the effect that different alterations may have on these environments. Identifying and describing spatial patterns of vegetation is essential to study wetland function, while analyzing the effect of disturbances may serve as a starting point for the development of conservation plans (Paruelo et al. 2004).
The main goal of this study is to characterize the herbaceous plant communities associated with the final portion of the Uruguay River floodplain, focusing on the agriculture and livestock impacts that these communities may suffer. Our main hypothesis is that communities developing in modified areas exhibit different structure and composition patterns with respect to less-modified communities, and suffer ingression of exotic and invasive plant species that change their ecological attributes. In particular, we aim to analyse the plant structure and composition in both modified and less-modified areas and to describe the attributes of the associated plant communities.
Study area
The landscape of the Uruguay River basin is the result of marine ingression and regression processes that occurred during the mid-Holocene, superimposed by current fluvial processes. Both phenomena combined generated the pattern of marine landforms and floodplain environments seen today (Pereyra et al. 2002; Cavallotto et al. 2005). This basin comprises scrublands, forests, dunes, and halophyte fields (Burkart 1957, Kalesnik et al. 2009a, Quintana et al. 2009; Aceñolaza et al. 2014). According to the ecoregions classification (Olson et al. 2001), the studied area is part of the Paraná flooded savanna, whose southern extreme reaches the Pampas phytogeographic unit considered in the traditional Argentinean biogeographic schemes (Brown & Pacheco 2006, Burkart et al. 1999, Cabrera 1976).
The Paraná flooded savanna is rich in flora and fauna that are not present in the surrounding regions. The presence of large water bodies generates a humid atmosphere that mitigates extreme daily and seasonal temperatures. This allows for the local presence of communities and species typical of the humid subtropical regions which, at a more global scale, are found further northeast.
The present study focused on the lowlands of two farms (site 1 and site 2) located on the banks of the Uruguay River (Gualeguaychú, Entre Ríos, Argentina) (Fig. 1). The area lies in climate region Cfa of the Köppen-Geiger classification system (Peel et al. 2007), exhibiting a temperate climate, year-round precipitation, and an average temperature of the warmest month above 22 ºC. However, during the year of sampling (2008), Entre Ríos province suffered extreme drought conditions, with limited precipitation relative to the historical average, low humidity, and, therefore, a high rate of evapotranspiration (INTA 2008).
Site 1 is located south of Gualeguaychú (centered at 33º 20' S, 58º 28' W) (Fig. 1), spanning over 10 km of riverbank. This site is surrounded by a 3 m high artificial embankment to prevent flooding. The enclosed surface area of 1 581 ha contains 0.6 to 0.8 head of cattle per ha. Environments within the embanked portion of land have also been affected by the use of rollers. This technique consists of passing a metal cylinder filled with water that crushes the original vegetation and stirs the soil (Anriquez et al. 2005). This site also includes 206 ha of non-embanked land, composed of vast native scrublands and grasslands. Site 2 is situated north of the town of Gualeguaychú (centered at 32º 54' S, 58º 11' W) (Fig. 1).
It comprises 30 km of riverbank and 9 844 ha of a mosaic of wetlands, including vast scrublands, floodable grasslands, and marshes.
Data collection
Considering a previous zoning and landscape units description (Kalesnik et al. 2009a, Quintana et al. 2009), field surveys were carried out in lowlands dominated by herbaceous vegetation within the two sites. Between March and May 2008, a random stratified sampling was performed in 72 plots (1-36 in site 1 and 37-72 in site 2). Out of the total number of plots, 16 were located in anthropically modified environments. Plant species were sampled in each 5 x 5 m plot and assigned an abundance-cover value according to the modified Braun-Blanquet scale (Mueller-Dombois & Ellenberg 1974). Geographical coordinates of each plot were registered. Species life-form type was assigned according to Barkman's (1988) classification and Zuloaga et al. (2008).
Numerical analyses
Detrended correspondence analysis (DCA) was performed to detect ordination patterns of plots and species using CANOCO for Windows ver. 4.5 (Ter Braak & Smilauer, 1998). Plant species associations were studied using the two-way indicator species analysis technique (TWINSPAN) (Hill & Smilauer, 2005). For each group, the relative importance of species was estimated considering mean cover (MC) and relative frequency (RF). MC was calculated as the mean value of cover for each species, normalized by the total number of plots in each group. RF was the total number of plots within each group where the species was identified, relative to the total number of plots in each group. The ecological attributes considered were richness (S), diversity (H), and evenness (J). Biological diversity was estimated with the Shannon-Wiener index (Magurran, 1988). Normality of the data was tested using the Shapiro-Wilk test (Shapiro & Wilk 1965). A one-way analysis of variance (ANOVA) was used to test differences in the attributes between communities, followed by the Tukey method of multiple comparisons when significant differences were found. Mean richness values did not meet the normality assumption and were, therefore, log transformed (MLS). All statistical analyses were performed using the XLSTAT software V.7.5.2 (2007).
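For readers who wish to reproduce these attribute calculations outside XLSTAT, the sketch below implements the richness, Shannon-Wiener diversity, and evenness measures, followed by the Shapiro-Wilk/ANOVA/Tukey sequence, in Python; the cover matrix and group labels are hypothetical placeholders rather than the study's data.

# Minimal sketch of the attribute pipeline described above; data are invented.
import numpy as np
from scipy.stats import shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def plot_attributes(cover_row):
    """Richness S, Shannon-Wiener diversity H, and evenness J for one plot."""
    cover = np.asarray(cover_row, dtype=float)
    cover = cover[cover > 0]                  # species present in the plot
    p = cover / cover.sum()                   # relative cover proportions
    S = len(cover)                            # richness
    H = -np.sum(p * np.log(p))                # Shannon-Wiener index
    J = H / np.log(S) if S > 1 else 0.0       # evenness
    return S, H, J

# Six hypothetical 5 x 5 m plots scored for five species (cover values)
plots = np.array([[3, 0, 1, 2, 0],
                  [1, 1, 1, 1, 1],
                  [5, 0, 0, 0, 1],
                  [2, 2, 0, 1, 0],
                  [0, 4, 1, 0, 1],
                  [1, 0, 2, 2, 1]])
S, H, J = (np.array(v) for v in zip(*(plot_attributes(r) for r in plots)))

groups = np.array([1, 1, 1, 2, 2, 2])         # hypothetical TWINSPAN groups
mls = np.log(S)                               # log-transformed richness (MLS)
print(shapiro(mls))                           # normality check
print(f_oneway(mls[groups == 1], mls[groups == 2]))
print(pairwise_tukeyhsd(mls, groups))         # multiple comparisons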
Classification analysis
A total of 270 species were found in the study area (including species with MC and RF < 5%) (Tab. 1). Six groups of plots were identified based on the classification (Tab. 1). Group 1 was dominated by Coleataenia prionitis (Nees) Soreng, forming grasslands of up to 2 m high. Hymenachne grumosa (Nees) Zuloaga was found among the accompanying species (Tab. 1). The dominant species in group 2 was Schoenoplectus californicus (C. A. Mey.) Soják. Among the secondary species found in this group were Mikania micrantha Kunth and Polygonum punctatum Elliott. Group 3 was exclusively formed by plots that develop in heavily modified environments, with the presence of livestock and the use of rollers within the diked area. Two plots within this group correspond to dry watercourses with low to null water flow due to the disruption of the riverbeds by the dike. Abundant species in group 3 were mostly low herbaceous plants, such as Alternanthera sp.
Ordination analysis
The groups found in Table 1 were in general also discriminated by the ordination analysis (Fig. 2). The first two axes of the DCA explained ~13% of the variance. Ordination along axis 1 (eigenvalue: 0.916) may be attributed to the flooding preferences of the more abundant species of each group (Fig. 2). Therefore, groups 2 and 4 were placed on the positive end of the axis. Centrally positioned were groups 3, 5, and 6. Environments less liable to flooding comprised group 1 and were placed on the axis end nearest to zero, together with some plots of group 3 that corresponded to modified environments. Ordination of the plots along axis 2 (eigenvalue: 0.774) did not show a pattern related to an environmental variable.
Attributes
Groups 1, 3, and 4 exhibited the highest S values, with MLS significantly different from group 2 (P=0.001). Groups 5 and 6 did not show significant differences in MLS from the rest of the groups (Table 2). Group 2 showed the lowest H values and group 3 the highest (P=0.013). The latter also exhibited the highest J, even though this attribute was not significantly different between any of the groups (P=0.075).
Discussion
As the interest in the role of wetlands in biodiversity conservation and water supplies has become increasingly important, studies concerning the structure of these ecosystems and the alterations they suffer are essential. In this study we considered two sites in the southern portion of the Uruguay River floodplain, to identify and characterize the consequences of the human intervention that these wetlands undergo.
The results obtained suggest that the composition and structure of the herbaceous communities analyzed respond not only to the natural environmental conditions, but also to the great human disturbances in the area. The plots that develop in highly disturbed environments were almost all gathered in the same community (group 3), which clearly differentiated from the rest of the groups in composition and structure. It should be noted that the modifications were applied in different ways and intensities along the embanked area (cattle farming, roller use, or both simultaneously). Despite that, almost all these plots grouped within the same community, resulting in a more homogeneous environment. On the contrary, communities with no apparent disturbances were distinguishable from each other in composition and structure (Table 1, Fig. 2). The distribution of both dominant species and biological types in undisturbed areas is strongly associated with the natural hydrological regime of the region (Morandeira et al. 2011).
The plant associations that develop in modified environments exhibited higher diversity and mean log richness values. This could be due to an increase in the cover of ruderal and disturbance-resistant species (e.g. G. richardianum, E. bonariensis, S. glaucophyllum, M. micrantha, P. distichum, and C. bonariensis) as a result of changes in the hydrological regime (Mitch & Gosselink 2007). Also, introduced and adventitious species were only present in this group or showed higher MC values than in less-modified areas (Tab. 1). The use of rollers, which flatten the original plant cover and stir the soil, and livestock farming within embanked systems could have favored the settlement of these species that are typical of the Pampa grasslands ("pampeanization") (Bó et al. 2010; Morris & Reich 2013). The dry watercourses included in this group had marsh vegetation but also terrestrial plant species, suggesting that they are going through a succession process. This alteration could also lead to changes in the neighboring areas, since watercourses give rise to a large number of internal topographic gradients (Stevaux et al. 2013).
The less-modified community dominated by E. crista-galli showed the second highest diversity value, even though the difference was not significant. This may be due to the capacity of E. crista-galli to serve as a substrate for a number of epiphyte and creeper species (De Andrade Kersten & Kuniyoshi 2009; Giudice et al. 2011), resulting in the presence of several accompanying taxa. In addition, the ability of this species to grow in different soil types makes it the only tree that establishes plant associations in lowlands of the studied area (Burkart 1957, Kandus 1999). Grasslands dominated by H. grumosa (groups 5 and 6) were located in intermediate areas in terms of flooding preferences, in accordance with earlier studies (Kandus et al. 2002). The group dominated by S. californicus exhibited the lowest mean log richness and diversity, as is expected for this kind of community under natural conditions (Burkart 1957). Grasslands dominated by C. prionitis (group 1) developed in soils less prone to flooding, on hillocks, and on an old tidal plain showing signs of previous rice farming practices. Communities located in less-modified areas are simple in terms of their low richness and are characterized by the dominance of a few species but, from a floristic and environmental point of view, they form a heterogeneous mosaic (Kandus et al. 2003).
On the contrary, there seems to be a homogenization of the environment within the communities that suffered human disturbances. In most habitats, plant communities have great influence on the distribution and interactions of the native fauna. Also, a positive relationship between vegetation-shaped habitat heterogeneity and animal species diversity has been documented at both local and regional scales (Ricklef & Schluter 1993, Tews et al. 2004). Therefore, the modification of the local plant communities has important implications for the native fauna.
It can be concluded that herbaceous communities developing in modified environments suffer changes in structure and in their main attributes (richness and diversity) associated with the type of modifications conducted in the area. Conservation of herbaceous communities in less-modified environments is of vital importance since they represent the typical plant associations of the natural landscape without interference by invasive exotic species, within a region highly fragmented by human activities (Kalesnik & Malvárez 2003; Kalesnik et al. 2009b). Alteration of these communities could impact not only wetland values and functions but also those of the neighboring environments. In this regard, the analyzed communities serve a primary role in the conservation of regional biodiversity and should be included in natural reserve projects or other actions related to the conservation of the biological diversity they represent. From a biogeographical standpoint, the studied communities share traits with the Paraná River system (e.g. composition, structure, and geoforms), constituting a complex system interconnected since the mid-Holocene (Pereyra et al. 2002).
Acknowledgments
Table 2. Ecological attributes of the identified groups (mean ± standard deviation). P values of the ANOVA test are shown. In bold, attributes that exhibited significant differences. N is the number of plots, with the number of plots in modified environments between brackets. MS = mean richness, MLS = mean log richness, MH = mean diversity, MJ = mean evenness.
Table 1. Relative frequency and mean cover of species in each of the groups determined by TWINSPAN. Species with MC*RF < 5% are not shown. | 2018-12-05T07:51:24.111Z | 2016-06-15T00:00:00.000 | {
"year": 2016,
"sha1": "da82ca574c063d43bc7dc4833508846f6ce479b8",
"oa_license": "CCBYNC",
"oa_url": "https://revistas.unc.edu.ar/index.php/BSAB/article/download/14848/14802",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "da82ca574c063d43bc7dc4833508846f6ce479b8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
236211729 | pes2o/s2orc | v3-fos-license | Application of Concretes Made with Glass Powder Binder at High Replacement Rates
Glass is a material that can be reused, except for a small fraction that, due to its residual characteristics, cannot be reused and becomes nonbiodegradable waste that accumulates in landfills. The chemical composition and pozzolanic properties of waste glass are encouraging for the use of these wastes in the cement and concrete industries and for providing technically and environmentally viable solutions. In this study, we propose the production of deactivated concretes with a high content of glass powder in the binder. The substitution percentage of glass powder for cement used in this work was between 70% and 80%. Consistency, air content, bulk density, workability, compressive strength, and permeability tests were performed. Regarding compressive strength, the results obtained at 90 days for cement substitution by glass powder of 70% and 80% were 14.2 and 8.6 MPa, respectively. The chemical analysis of leachates showed concentrations of Fe, Cu, V, Ni, and Mo, in mg L−1, of 1.57, 1.38, 0.85, 0.95, and 0.44, respectively. The results obtained, compared with the relevant legislation, have proved that the inclusion of glass powder at a high percentage of substitution and with a granulometry of 20 µm in the manufacture of deactivated concretes is feasible for exterior pavements.
Introduction
Currently, the industrial sector generates large quantities of waste, part of which is recycled and the other part of which is deposited in landfills, causing environmental impacts [1]. These wastes have been the subject of numerous studies in recent years to determine possible uses based on their composition. In this way, the aim is, on the one hand, to reduce the effects on the environment by using these wastes and converting them into raw materials for other processes, thus reducing the exploitation of natural resources, and on the other hand, to generate new products in a cheaper way [2].
The sustainability of the civil engineering sector is crucial to drive society toward a circular economy, and to make this possible, the application of the waste hierarchy principle (prevention, preparation for reuse, recycling, recovery, and, as a last option, disposal) [3] is a priority. This sector, which advances day by day, is in a constant search for the best alternatives to provide solutions to the different market requirements [4]. The aim is for structures to be as resistant as possible and to ensure a certain useful life and optimum performance of the materials used without losing sight of the environmental aspect [5].
Concrete is by far the most widely used material in civil engineering. It is estimated that around 10 billion tons of this material are produced worldwide each year, which involves the use of nonrenewable natural resources, a significant demand for energy, as well as the emission of greenhouse gases [6]. For example, the production of one ton of Portland cement releases approximately one ton of carbon dioxide (CO2) into the atmosphere. Globally, the cement industry contributes 7% of the CO2 generated [7].
The traditional use of concrete is to build load-bearing structures due to its mechanical properties, durability, and workability. In recent times, the general commitment to sustainability has prompted a search for new concretes that can improve their properties in terms of environmental protection and aesthetic performance.
Numerous studies have shown the good performance of mortars and concretes made from different waste materials, especially waste glass [8][9][10].
The final glass wastes from packaging, demolition of buildings, the automotive industry, sanitary containers, and the ceramic industry that are not reusable by the glass industry can be recovered, after a process of fine grinding and mixing with nonpolluting reagents, as raw material for the manufacture of hydraulic binders [11].
Despite all the benefits indicated for the use of glass powder as a binder in the manufacture of concrete, it should be considered that these materials might contain potentially harmful substances such as heavy metals and trace elements, which can contaminate surface waters that are often sources of drinking water supply [12][13][14]. In this sense, studies carried out show that the concentrations of heavy metals contained in the leachates emitted by different types of concrete do not endanger the quality of the aquifers [14][15][16].
Studies on the pozzolanic activity of waste glass carried out by Shao et al. [17] showed that glass powder ground to a particle size lower than 38 µm had some pozzolanic activity. Concrete made with 30% glass powder as a binder showed lower compressive strength before 28 days, but higher strength at 90 days [18]. This change in the setting behavior of mortars and concretes made with a glass powder composite binder, compared to mortars and concretes made with conventional binders (Portland cement), is attributed to the pozzolanic reaction of the glass powder [19]. It is shown that the pozzolanic activity of glass powder increases with decreasing particle size of the glass powder and increasing curing temperature [20,21]. The dissolution of the alkalis provided by the glass particles accelerates the cement hydration processes, depending on the amount of glass powder used in the manufacture of the cement [22,23]. However, the amount of alkalis released is insufficient to compensate for the hydration and early strength reduction caused by cement dilution [24].
Hongjian et al. [9] studied the properties of cements manufactured with a high percentage of glass powder replacement (above 60%) and concluded that all mixtures containing cement manufactured with a replacement percentage above 30% showed pozzolanic reactions after one year. This translates into a longer setting time and a self-healing capacity against the appearance of small cracks or differential settling that conventional cements do not have [24]. Más et al. [25] indicated the suitability of the use of concretes manufactured with a percentage of glass powder higher than 50% for pavements.
The range of pavements includes deactivated concrete, exposed aggregate, and washed concrete. This is a type of paving that is easy to install on site. The aggregates are visible on the surface in a concrete screed. This type of pavement, in addition to its mechanical function, can have an infinite number of finishes. This makes it possible to obtain greater slip resistance and to improve the aesthetic component of concrete, until now considered by many as something gray and smooth [26]. Deactivated concrete slabs manufactured with a high percentage of glass dust in the binder represent an ecofriendly alternative in line with the circular economy, which is currently imposed as a principle in civil engineering.
In this article, we propose the use of glass powder as a high percentage cement substitute in the manufacture of deactivated concrete to be used as pavement in outdoor areas, studying its mechanical characteristics and leaching to verify that it does not cause a negative environmental impact on soils.
Aggregates
Aggregates, essentially siliceous and nonreactive, are used. They are sand with a grain size <4 mm, gravel 4-12 mm, and gravel 12-20 mm. Figure 1 shows the particle size curve of these aggregates.
Cement
The cement used was a commercial Portland cement CEM I 52.5 R (Cementos Portland Valderrivas, Morata de Tajuña, Madrid, Spain). This cement has a density of 3.12 g/cm 3 , a specific surface area of 4440 cm 2 /g, and a greenish gray color. The particle size of cement CEM I 52.5 R in volume fraction was 41.5% for particle diameters lower than 8 µm and 99.7% for particle diameters lower than 96 µm. Its chemical composition included CaO (65%) and SiO 2 (19%).
Glass powder
The material used is the last fraction of glass that cannot be reused by the glass industry. This residue is ground with a bar mill until a grain size of 21 µm is obtained. The size was based on the results of the analysis in the COULTER LS 100 Q laser particle sizer (Beckman Coulter, Inc., Brea, CA, USA). Table 1 shows the diameters of the glass powder used in making concrete. Based on the results of the analysis in the particle sizer, the accumulated particle size curve was obtained, which shows the different particle size distribution as can be observed in Figure 2.
Sample Preparation
Two series of specimens, each comprising 15 units [27], were manufactured, where the only parameter varied was the percentage replacement rate of cement CEM I 52.5 R by glass powder. Table 1 shows the formulation used in the manufacture of the specimens, where G70 and G80 correspond, respectively, to substitution rates of 70% and 80% of glass powder for cement. A glass powder d 50 of 21 µm was used (d 50 is the particle diameter below which 50% of the sample particles fall). Table 2 shows a summary of the experimental conditions for the samples prepared. To study the compressive strengths of the manufactured concretes, they were cast into 10 × 30 cylindrical molds. Once compacted, and after 24 h, they were removed from the mold and placed in a humid curing chamber at a temperature of 20 °C. The curing time ranged between 28 and 90 days. After these curing times, the specimens were tested to failure following the UNE 83-304-84 standard [28] and their compressive strengths were measured.
1. Consistency test. It was carried out according to the UNE-EN 12350-2 [29] consistency standard by means of the settlement (slump) test. This slump test is sensitive when the average slump is between 10 and 200 mm.
2. Air content. The air content of ready-mixed concrete was determined according to UNE-EN 12350-7 [30] by pressure methods.
3. Apparent density. The procedure followed for the calculation of densities and porosities is based on UNE-EN 1015-6 [31].
4. Permeability test.
Permeability tests were performed according to UNE-EN 12390-8 [33]. The purpose of this test was to evaluate the presence of chemical elements in the leachate from filtration water and to study their possible environmental effects. Measurements were carried out on concrete preserved in an endogenous medium at 20 °C for 100 h. The test core was a cylinder of 4.07 ± 0.01 cm in length and 3.92 ± 0.01 cm in diameter. The water injection pressure was kept constant at 10 bar throughout the test.
In order to analyze the leached elements, vacuum filtration was conducted. The filtrates were collected in successive fractions of ±20 cm 3 through a 0.7 µm glass filter. They were later analyzed using flame atomic absorption spectrometry (FAAS) and inductively coupled plasma mass spectrometry (ICP-MS).
Slab Manufacturing
With the formulations shown in Table 1 (G70 and G80), 10 decorative concrete slabs were produced in 50 cm × 50 cm removable wooden molds. Five slabs were manufactured with the G70 formulation and the other five with the G80 formulation. The manufacturing was carried out by vibrocompacting using a vibrating tray with uniform pressure on the mold.
In order to make the aggregate visible on the surface of the plates, two methods were used. The first one consisted of deactivating the surface setting process by applying a deactivating product during the hours after the concrete had been placed. This chemical product was then removed by water jetting to expose the aggregate surface grains. The final appearance depends on the type and amount of deactivator used, and on the time of application before rinsing. In this case, a surface deactivator for concrete floors with exposed aggregates, similar to the one marketed under the Pieri VBA 2002 trademark, was used (Figure 2). The application time for our tests was 2 days.
The second one was a mechanical method consisting of passing a broom over the concrete surface during the hours following its placement, in order to drag a thin layer of mortar or grout, thus revealing the aggregate grains on the surface. This method allowed the work to be finished immediately after sweeping. The final appearance varied according to the intensity of the sweeping (Figures 3 and 4). The broom could be used as soon as the concrete surface was sufficiently dry. This surface sweeping was possible during the first hours after the laying of the concrete thanks to the slow setting time due to the presence of the glass powder in the binder.
Results of the Characterization of Concrete
The percentage of glass powder substitution by cement was based on the study carried out by García del Toro et al. [34], who reported that concretes manufactured with a percentage of cement substitution by glass powder higher than 50% were suitable to be used as pavements. The percentages of 70% and 80% were chosen in order to reuse a greater amount of waste and contribute in a greater extent to the circular economy and environmental protection.
Results obtained for consistency, air content, apparent density, and workability tests are shown in Table 3.
From Table 3, it can be observed that the air content and consistency are the same for concretes made with glass powder regardless of the percentage of substitution, and in both cases they are lower than for the control specimen, in which all the binder is cement. As for the bulk density, it can be stated that it decreases when the proportion of cement substituted by glass powder in the binder increases. This is explained by the fact that the density of cement is higher than that of glass powder: 3.12 g/cm 3 compared to 2.54 g/cm 3 for glass powder. As for workability, it can be observed that as the percentage of glass powder in the mix increases, the concrete becomes more workable, a fact that is also related to the lower density of glass powder compared to cement.
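The density trend can be sanity-checked with a back-of-the-envelope calculation. The sketch below estimates the density of the blended binder from the two values quoted above, assuming substitution by mass (the basis of substitution is an assumption here).

# Rough estimate of blended binder density; substitution by mass is assumed.
RHO_CEMENT = 3.12  # g/cm3 (CEM I 52.5 R, from the text)
RHO_GLASS = 2.54   # g/cm3 (ground glass powder, from the text)

def binder_density(glass_mass_fraction):
    """Density of a two-component blend from mass fractions (harmonic mean)."""
    w_g = glass_mass_fraction
    return 1.0 / (w_g / RHO_GLASS + (1.0 - w_g) / RHO_CEMENT)

for w in (0.70, 0.80):
    print(f"{w:.0%} glass powder -> ~{binder_density(w):.2f} g/cm3")
# ~2.69 g/cm3 at 70% and ~2.64 g/cm3 at 80%: the binder, and hence the
# concrete, gets lighter as the replacement rate rises.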
According to consistency tests (Abrams cone slump), these concretes can be qualified as suitable for pavements. It can also be observed that the water content of these concretes is somewhat lower than that of conventional concretes in order to obtain a drier concrete. The air content is higher due to the presence of ground glass. The presence of air is convenient for this product since it facilitates its workability. On the other hand, it causes a decrease in compressive strength in the short term.
Mechanical Properties of Concrete
The results of the compressive strength tests of the concrete with 70% and 80% substitution of cement by glass powder are shown in Table 4. As previously observed [34], for concrete prepared with cement partially replaced by glass powder, the compressive strength decreased as the amount of glass powder in the binder increased, which can be attributed to the fact that glass powder has low pozzolanic activity at early ages; in addition, in this case, the granulometry is larger and the concretes are drier and have a higher air content. This causes the appearance of different C-S-H type hydrates, slower setting [31], and lower compressive strengths in the short term.
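As a reminder of how the tabulated strengths are obtained from the crushing test, the snippet below converts a cylinder failure load into a compressive strength; the load and diameter values are hypothetical, not taken from the study.

# Compressive strength = failure load / cross-sectional area of the cylinder.
import math

def compressive_strength_mpa(failure_load_kn, diameter_mm):
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return failure_load_kn * 1e3 / area_mm2   # N/mm2 is numerically MPa

# Hypothetical example: a 100 mm diameter cylinder failing at 112 kN
print(round(compressive_strength_mpa(112, 100), 1))  # -> 14.3 MPa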
Results of Permeability Tests
In a previous work [16], a preliminary study was carried out on the permeability characteristics of this kind of concrete. In the present work, this test was performed further, obtaining similar results. In the first hours of the test, there was a rapid increase in the flow rate. This was due to the incomplete saturation of the core before the start of the test and to the absorption of water by the binder, a process that gradually decreased. After 2 h 30 min, the filtrate flow rate decreased before stabilizing and then increased slightly. This decrease in the filtrate flow rate corresponded to the internal reorganization of the free solid particles inside the core, some of which were blocked in the narrowing of the pores. This reorganization and the pushing of the finest particles caused a slight increase in the flow rate and, consequently, in the permeability of the core. This indicated that the C-S-H type gels responsible for the setting of the mortar began to form.
The results below are the average of the results obtained by the two detection methods used: atomic absorption spectrometry (FAAS) and inductively coupled plasma mass spectrometry (ICP-MS).
Most of the elements showed no relevant concentrations in the first hours of the test, reaching undetectable concentrations at the end of the test. For the rest of the elements, in order to study their concentrations in the leachates and their variation over time, they were divided into two groups: potentially contaminating elements (Figure 5b), which can be dangerous for soils and aquifers, and the rest of the elements found (Figure 5a). The graphical representation of the experimental data in these figures is on a logarithmic scale for better visualization.
The potentially hazardous elements (molybdenum, chromium, copper, vanadium, nickel, iron, boron, and aluminum) were all found in the glass composition. The elements iron, copper, vanadium, nickel, and molybdenum showed concentrations of 1.57, 1.38, 0.85, 0.95, and 0.44 ppm, respectively, after five hours of the test, with a decreasing tendency until they stabilized after 20 h. At 72 h, their concentrations remained below the quantification limits of the analytical techniques used. Figure 5b shows how the boron concentration was appreciable at the beginning of the test, decreasing rapidly with time from 2.75 to 0.37 ppm. Chromium was also detected in the first hours, with a concentration of 3.85 ppm. This concentration decreased rapidly to 0.015 ppm at the end of the test. Aluminum was another component of the glass used in the test, at a percentage of 2.2%, so it also appeared in the filtered water. The concentration of this element, as can be seen in Figure 5b, decreased rapidly from 17.5 ppm to 1.15 ppm. The cement, containing 5.5% aluminum, also contributed a certain amount.
For the rest of the elements, and in order to verify that the results obtained in this work were in agreement with those obtained in a preliminary study by Mas et al. [16], a hypothesis test for equality of means with a confidence level of 95% was performed to ensure that there were no significant differences between the results. First, the variances of each mean were compared using the F test, and it was found that there were no significant differences between them. In this regard, the hypothesis statement was as follows: H 0 : µ 1 = µ 2 (equality of means), and the criterion to accept H 0 is |t 0 | < t tabulated . As previously mentioned, the hypothesis contrast was performed for the elements whose concentrations at 72 h remained above the quantification limits of the analytical techniques used. Table 5 shows the results obtained in the previous test performed by Mas et al. and the current ones, together with the experimental Student's t used to test the hypothesis. Table 5. Quantitative results for the analyzed elements, expressed as mg L −1 (mean ± standard deviation, n = 3), and Student's t values obtained for the contrast of means.
Table 5 columns: Element; Experimental Value (ppm); Results Obtained by Mas et al. [16] (ppm).
Since the Student's t value obtained with the experimental data for each and every one of the elements analyzed was lower than the tabulated value for a Student's t distribution with ν degrees of freedom and a 95% confidence level, it was accepted that there were no significant differences between the means of the data obtained for each of the elements analyzed.
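The contrast of means described above can be reproduced with standard statistical tooling. The sketch below applies an F test on the variances followed by the equal-variance Student's t test at the 95% confidence level; the concentration triplicates are hypothetical stand-ins for the n = 3 measurements.

# F test for equal variances, then Student's t contrast of means (alpha=0.05).
import numpy as np
from scipy import stats

current = np.array([5310.0, 5350.0, 5390.0])    # ppm, this work (hypothetical)
previous = np.array([5280.0, 5360.0, 5400.0])   # ppm, Mas et al. (hypothetical)

F = np.var(current, ddof=1) / np.var(previous, ddof=1)
p_F = 2 * min(stats.f.cdf(F, len(current) - 1, len(previous) - 1),
              stats.f.sf(F, len(current) - 1, len(previous) - 1))
print(f"F = {F:.3f}, p = {p_F:.3f}")            # variances comparable?

t0, p_t = stats.ttest_ind(current, previous, equal_var=True)
t_tab = stats.t.ppf(0.975, df=len(current) + len(previous) - 2)
print("accept H0" if abs(t0) < t_tab else "reject H0",
      f"(|t0| = {abs(t0):.3f} vs t_tab = {t_tab:.3f})")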
Of all the elements, sodium (Na) was the one released in the greatest quantity; its concentration in the filtered water was around 5350 ppm in the first 5 h of the test. Figure 5a shows how the sodium concentration decreased rapidly to around 100 ppm at the end of the test. This result was obtained for sodium mainly because this element is found at a high percentage in the glass used (≈11.3%), and also because sodium has high mobility. However, the cement used contains only 0.15% sodium, contributing little to the concentration. As for potassium, Figure 5a shows how the concentration of this element dropped from 359 ppm to 32 ppm. This element came from both the glass and the cement, whose potassium contents are 0.6% and 0.7%, respectively. As for silicon, which is also shown in Figure 5a, it can be observed that its concentration decreased more slowly than those of the rest of the elements, from 112 to 47 ppm. The silicon collected in the filtrate water came from both the glass (70%) and the cement (19%). The calcium concentration, as shown in Figure 5a, varied differently over time. At the beginning of the trial, there was a decrease in calcium concentration from 31 to 1.2 ppm. Subsequently, the concentration remained constant until 30 h had elapsed. After this time, the concentration increased until it reached a final value of 3.2 ppm. According to Marco et al. [11], the dissolution of the glass did not release calcium, so it can be concluded that the calcium collected in the filtration came to a large extent from the cement, since cement is made up of 65% Ca.
Sodium and potassium, especially sodium, mainly from glass, were released in significant quantities, but this does not pose a danger to the environment, since they were not released in sufficient quantity to cause changes in the electrical conductivity of the soil. Calcium does not constitute an environmental hazard; since it is a nontoxic element and at the concentration level determined, it would not cause soil pH modifications. The presence of chromium and boron during the first hours of the test in the filtrate water was relevant. However, these elements tended to disappear at the end of the test, probably because they got into the cementitious matrix.
Conclusions
In the present work, the use of glass powder as a high percentage cement substitute in the manufacture of deactivated concrete to be used as pavement in outdoor areas was examined. Deactivated concrete slabs manufactured with a high percentage of glass dust in the binder represent an ecofriendly alternative that is in line with the circular economy, which is currently imposed as a principle in civil engineering.
The experimental results indicated that it is feasible to produce mortar with fine glass powder and that floors made of deactivated concrete with a glass powder binder base are highly stable against atmospheric and mechanical agents. These characteristics give them a long life cycle and low maintenance. Exposed aggregate finishes are rough, nonslip, and resistant to wear and tear and the action of atmospheric agents. Their application would be suitable for pedestrian areas such as park streets and outdoor traffic areas in need of a durable pavement, as well as accesses to garages, terraces and patios, swimming pool areas, and roads with light traffic.
Regarding leachates, alkaline elements, especially sodium, were released in significant quantities, although not in quantities sufficient to cause changes in the electrical conductivity of the soil, so they do not pose an environmental hazard, similar to calcium, which is a nontoxic element and, at the concentration level determined, would not cause a soil pH modification. The considerable presence of chromium and boron, potentially contaminating elements, in the filtrate water during the first hours of the test is remarkable. These elements tended to disappear at the end of the process, since they were incorporated into the cement matrix; therefore, they do not cause environmental contamination. Finally, the rest of the elements detected were found at trace levels in the filtered water, so due to their scarcity, they cannot constitute a danger to the natural environment according to the legislation in force. | 2021-07-25T06:17:03.790Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "6758a90e6ec5590204e8a795d03aeb1316d30ec5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/14/3796/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11164487a035a4ceb0b7b1332774ba2b9d026f25",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260517738 | pes2o/s2orc | v3-fos-license | Deriving a preference-based utility measure for cancer patients from the European Organisation for the Research and Treatment of Cancer's Quality of Life Questionnaire C30: a confirmatory versus exploratory approach
Introduction
Multi attribute utility instruments (MAUIs) are preference-based quality of life measures that can be used in cost-utility analysis. 1 MAUIs have two components. The first is a "health state classification system" (HSCS), comprising core domains of health-related quality of life (HRQOL), each comprising a number of levels (eg, poor, moderate, good). For example, the widely used MAUI, the EQ-5D, has five dimensions, each with three levels. 2 These dimensions (or "attributes") and levels define the HSCS. Thus, the HSCS of the EQ-5D comprises 3^5 = 243 unique health states. The second component is a scoring algorithm, which assigns a utility value to each health state, based on the valuation elicited, using a preference-based assessment method, typically from a general population sample.
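The size of a HSCS follows directly from this combinatorial structure. A minimal illustration, using the EQ-5D dimensions and levels quoted above:

# Enumerate the health states defined by 5 dimensions with 3 levels each.
from itertools import product

LEVELS, DIMENSIONS = 3, 5
states = list(product(range(1, LEVELS + 1), repeat=DIMENSIONS))
assert len(states) == LEVELS ** DIMENSIONS == 243
print(states[0], states[-1])  # (1, 1, 1, 1, 1) ... (3, 3, 3, 3, 3)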
MAUIs have previously been derived from various HRQOL measures. [3][4][5] This typically involves two stages. The first stage involves selecting a subset of domains and items from the HRQOL measure to form a HSCS. This reduction stage is required because HRQOL measures typically include more items and domains than is manageable in the preference-based valuation exercise required for the second stage, in which a sample of health states is valued and an algorithm derived for estimating the utility of all possible health states.
The European Organisation for Research and Treatment of Cancer's (EORTC) core Quality of Life Questionnaire (QLQ-C30) 6 is one of the most widely used cancer-specific HRQOL instruments, but is not a preference-based measure 7 and, therefore, cannot be used in cost-utility analysis. One solution is to "map" the QLQ-C30 to a preference-based measure. 8 A more theoretically rigorous approach is to develop a cancer-specific MAUI from the QLQ-C30, as has been done by Rowen et al. 9 Rowen et al applied the methods of Young et al, 10 starting with exploratory factor analysis (EFA) to identify clusters of correlated items as a prerequisite to Rasch analysis to assess psychometric properties of items relevant to their performance in a MAUI. 5,10 Items that did not perform well on various psychometric criteria related to EFA and/or Rasch procedures were excluded, and then one or two items from each domain were retained as the basis for the HSCS for the MAUI. The main advantages of this method are that the resulting classification system represents the dimensionality of the measure using observed data. Further, this method can be used for any measures, regardless of whether it has an established dimensional structure. One crucial disadvantage is that EFA will produce only factors, as opposed to clinically coherent HRQOL dimensions.
When a HSCS is to be derived from a questionnaire with an established dimensional structure that is psychometrically robust and clinically sensible, arguably a confirmatory approach to the question of dimensionality is more appropriate than an exploratory approach. The QLQ-C30 is such an instrument. The confirmatory approach involves the positing of a specific dimensional structure (the conceptual model) that is tested with confirmatory factor analysis (CFA). This has three advantages over the exploratory approach. First, many of the arbitrary decisions involved in EFA (eg, method of extraction, method of rotation, number of factors to extract) are removed, replaced instead with more theoretically or clinically driven decisions, such as which items are hypothesized to load on which factors. Second, without a priori clinical guidance, any given solution may lack clinical cohesion. Third, the positing of a specific model allows clinical considerations, which we define here as the views of both patients and clinicians about issues relevant to HRQOL in cancer, to play a more structured a priori role than EFA can allow. Certain items may be included in or excluded from the model a priori, based on clinical or theoretical considerations, meaning that clinical considerations can be built in to the general method of item assessment, rather than acting as a post hoc, context-specific activity. Items deemed important in the trade-off between HRQOL and survival may thus be selected solely according to clinical considerations. For such items, clinical considerations would override statistical criteria, ensuring that the condition-specific preference-based measure contains symptoms of particular relevance to that condition. In cancer, these include fatigue, pain, and nausea. 11,12 The aim of the current paper is to compare confirmatory with exploratory approaches in deriving a cancer-specific MAUI from the QLQ-C30, given its well-established domain structure. Note that the objective of the analyses reported in this paper was not to develop a specific HSCS, but rather to refine and make further recommendations on the appropriate methodology for defining the dimension structure for the MAUI, focusing on step 1 of the seven-step item selection procedure described by Young et al. 10
Methods
Ethical approval for this study was granted by the University of Sydney Human Research Ethics Committee (Protocol Number 13207).
Quality of life instrument
The European Organisation for the Research and Treatment of Cancer QLQ-C30 (Version 3) is a multidimensional instrument containing 30 items assessing symptoms, functioning, and overall HRQOL, and its psychometric properties are well established. 6,13 Responses to items 1-28 are made on a four-point scale (1= "Not at all", 2= "A little", 3= "Quite a bit", 4= "Very much"), and responses to items 29 and 30 (global health and quality of life items) are made on a seven-point scale (1= "Very poor" and 7= "Excellent"). Items 6-30 have a recall period of the past week; no recall period is specified for items 1-5 (Physical Functioning). The 30 items form five functioning scales, three multi-item symptom scales, five single-item symptom scales (plus a financial difficulties item), and a global health status and HRQOL scale (Table 1).
Data set
A secondary analysis was conducted on data collected with the QLQ-C30 (Version 3) from a sample of 356 patients (53% Norwegian and 47% Swedish) with stage IV/recurrent/metastatic cancer from a variety of primary sites (36% prostate, 30% breast, 11% lung, and 23% other), all undergoing palliative radiotherapy in a randomized clinical trial comparing two fractionations. 14
Analysis
Exploratory versus confirmatory factor analysis
EFA is a statistical procedure in which variables are grouped into relatively independent subsets based on their intercorrelations, without any prior assumptions about the composition of these subsets. In contrast, CFA involves testing a prespecified arrangement of items into subsets, guided by a conceptual model. EFA and CFA were conducted to assess the dimensional structure of the QLQ-C30 and the results compared. The model of HRQOL tested using CFA was based on both the established structure of the QLQ-C30 15 and clinical considerations (described below). Three items were excluded a priori from both the EFA and CFA. Item 28 (financial difficulties) was excluded from all analyses as it is neither a symptom nor a measure of functioning. The two global items (29 and 30) were also excluded because each item in the HSCS should represent a specific domain of HRQOL (functioning or symptom) rather than global quality of life. For the initial EFA, principal axis factoring (PAF) was used with a direct oblimin rotation to allow factors to be correlated. The suitability of the data for EFA was assessed using the Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity. Criteria for suitability are Kaiser-Meyer-Olkin >0.8 and a P-value for Bartlett's χ2 of less than 0.01. 16 Parallel analysis, 17 using the Monte Carlo PCA for Parallel Analysis software, was used to inform the selection of factors. This involves computing mean eigenvalues from randomly generated sets of data (N=1,000) of the same size (number of items and number of observations) as the observed data set. Any factor obtained from the observed data set with an eigenvalue exceeding the corresponding eigenvalue generated from the parallel analysis was considered for selection. A scree plot was also inspected. An item was considered to load on a factor if it had a pattern matrix loading greater than 0.3 and did not load on any other component.
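Parallel analysis is straightforward to reproduce outside the dedicated software mentioned above. The sketch below compares observed eigenvalues with mean eigenvalues from 1,000 random data sets of the same shape; the observed matrix here is a random placeholder for the 356-patient, 27-item data.

# Parallel analysis: observed eigenvalues vs means from random data sets.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_items, n_reps = 356, 27, 1000

observed = rng.normal(size=(n_obs, n_items))           # placeholder data
obs_eig = np.linalg.eigvalsh(np.corrcoef(observed.T))[::-1]

random_eigs = np.empty((n_reps, n_items))
for r in range(n_reps):
    sim = rng.normal(size=(n_obs, n_items))
    random_eigs[r] = np.linalg.eigvalsh(np.corrcoef(sim.T))[::-1]

threshold = random_eigs.mean(axis=0)                   # mean random eigenvalues
n_factors = int(np.sum(obs_eig > threshold))
print(f"{n_factors} factor(s) exceed the parallel-analysis criterion")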
We also conducted a sensitivity analysis involving all 15 combinations of three extraction methods (PAF, maximum likelihood, and principal components analysis) and five rotation methods (oblimin, promax, varimax, equamax, and quartimax), comparing the degree of variability in the solutions obtained due to variation in these technical parameters.
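A loop over the 15 combinations makes the sensitivity analysis mechanical. The sketch below uses the third-party factor_analyzer package, whose method and rotation option names are assumptions here rather than a confirmed match for the techniques named above; data is a placeholder patients-by-items matrix.

# Sensitivity loop over extraction x rotation combinations (factor_analyzer
# is an assumed dependency; its option names are assumptions).
import itertools
import numpy as np
from factor_analyzer import FactorAnalyzer

data = np.random.default_rng(1).normal(size=(356, 27))  # placeholder matrix

methods = ["principal", "ml", "minres"]   # stand-ins for PCA / ML / PAF
rotations = ["oblimin", "promax", "varimax", "equamax", "quartimax"]

for method, rotation in itertools.product(methods, rotations):
    fa = FactorAnalyzer(n_factors=3, method=method, rotation=rotation)
    fa.fit(data)
    dominant = np.argmax(np.abs(fa.loadings_), axis=1)  # item-to-factor map
    print(method, rotation, dominant.tolist())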
Confirmatory approach
A priori clinical considerations
The guiding principle here was to consider which aspects of functioning, symptoms, and side effects should be included in the HSCS, and hence the utility function of cancer-specific MAUI, in order for it to have face validity for economic evaluation of cancer treatments. Inclusion of dimensions was determined by three considerations: a) the dimensions available in the QLQ-C30; b) the patient's perspective (which symptoms, side effects, and aspects of functioning are considered important by patients in their overall assessment of quality of life); and c) the clinician's perspective (which dimensions matter when assessing the value of alternative treatments). Previous research has shown that patients 13 and clinicians 7 consider pain, fatigue, nausea/vomiting, constipation, and diarrhea to be important. All are available in the QLQ-C30. It is also well established that the various aspects of functioning are correlated with measures of overall quality of life. 14 Regression analysis has also revealed certain domains to be strong predictors of global quality of life, eg, emotional functioning and fatigue. 18,19 The primary difference between clinical considerations using the confirmatory approach versus previous exploratory approaches is that in the confirmatory approach, they are incorporated a priori as part of the procedure to assess items for inclusion.
Established structure of the QLQ-C30
We defined the "conceptual model" as the arrangement of items on the QLQ-C30 into domains based on the established structure of the QLQ-C30 6 and the clinical considerations described above. We defined the "measurement model" as the subset of the conceptual model that was empirically tested using CFA.
The conceptual model to be used as a starting point for the QLQ-C30 was thus composed of the following eight latent variables and five single-item domains:
Functioning: physical functioning (items 1-5); role functioning (items 6 and 7); emotional functioning (items 21-24); social functioning (items 26 and 27); and cognitive functioning (items 20 and 25).
Symptoms: pain (items 9 and 19); fatigue (items 10, 12, and 18); nausea and vomiting (items 14 and 15); dyspnea (item 8); sleep (item 11); appetite (item 13); constipation (item 16); and diarrhea (item 17).
Items included a priori in the conceptual model and therefore excluded from the measurement model: dyspnea, sleep, appetite, constipation, and diarrhea were considered of sufficient clinical importance for consideration in the HSCS, but as these domains are represented by single items (8, 11, 13, 16, and 17, respectively), these items were excluded from the measurement model.
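For bookkeeping during item selection, the conceptual model can be encoded as a simple mapping from domain to item numbers; the sketch below checks that every retained item is assigned exactly once and flags the single-item domains excluded from the measurement model.

# The conceptual model as a domain-to-items mapping (items from the text).
CONCEPTUAL_MODEL = {
    "physical functioning": [1, 2, 3, 4, 5],
    "role functioning": [6, 7],
    "emotional functioning": [21, 22, 23, 24],
    "social functioning": [26, 27],
    "cognitive functioning": [20, 25],
    "pain": [9, 19],
    "fatigue": [10, 12, 18],
    "nausea and vomiting": [14, 15],
    "dyspnea": [8],
    "sleep": [11],
    "appetite": [13],
    "constipation": [16],
    "diarrhea": [17],
}

all_items = sorted(i for items in CONCEPTUAL_MODEL.values() for i in items)
assert all_items == list(range(1, 28))   # items 1-27; 28-30 excluded a priori
excluded_from_cfa = [d for d, it in CONCEPTUAL_MODEL.items() if len(it) == 1]
print(excluded_from_cfa)                 # the five single-item symptom domains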
CFA based on the conceptual models described above was conducted using the mean- and variance-adjusted weighted least squares estimation method (as recommended for ordinal data) 20 in Mplus Version 6. Correlations amongst the latent variables were not constrained, while correlations between error terms were fixed to 0. The fit of the model to the data was assessed using the following indices and their corresponding widely accepted guidelines indicating good model fit: 21 chi-squared statistic/degrees of freedom (less than 2); comparative fit index (>0.95); Tucker-Lewis index (>0.95); root mean square error of approximation (<0.05). If model fit was poor on any one of the measures, then factor loadings and residual correlations (those >0.1 considered noteworthy) 22 were examined in order to determine alterations to the model that improved fit. Modification indices were also examined to determine what other parameters might be estimated. The model was modified and retested until a model was obtained that was conceptually meaningful and also adequately fitted the data.
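For readers who want to sanity-check reported guideline values, the three approximate fit indices above can be recomputed directly from the model and baseline chi-square statistics using their standard formulas. The Python sketch below is illustrative only: the degrees of freedom and sample size in the example are hypothetical, not values reported in this analysis.

```python
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """Standard CFA fit indices from model (m) and baseline/null (0) chi-squares.

    n is the sample size. Returns chi2/df, CFI, TLI, and RMSEA.
    """
    d_m = max(chi2_m - df_m, 0.0)  # model non-centrality
    d_0 = max(chi2_0 - df_0, 0.0)  # baseline non-centrality
    cfi = 1.0 - d_m / max(d_0, d_m, 1e-12)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return {"chi2/df": chi2_m / df_m, "CFI": cfi, "TLI": tli, "RMSEA": rmsea}

# Hypothetical example: a model chi-square of 427 on 283 df against a baseline
# chi-square of 15000 on 325 df, with n = 500 respondents.
print(fit_indices(427.0, 283, 15000.0, 325, 500))
```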
Item assessment using Rasch analysis
Young et al 10 used a variety of techniques to select or reject items for the HSCS. These methods use Rasch analysis within dimensions identified by EFA. To address the aims of this paper, we conducted the Rasch analyses separately for the factor solutions obtained from EFA and CFA to further explore the consequences of these two approaches when applying Young et al's method to the QLQ-C30. These techniques are described in detail by Young et al 10 and interested readers are referred to step 2 of their guidance for deriving a MAUI. They are summarized briefly below.
In Rasch analysis, observed responses to items are assumed to reflect an underlying latent variable, such that the probability of endorsing an item is a monotonic increasing function of the underlying latent variable. Items that met the criteria described below were deemed to conform to the Rasch model 23 and were therefore retained for consideration in the HSCS.
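As an illustration of this monotonic relation, the polytomous (partial credit) form of the Rasch model, appropriate for multi-category items such as those of the QLQ-C30, can be written out directly; the threshold values in the Python sketch below are hypothetical.

```python
import math

def pcm_probs(theta, thresholds):
    """Partial credit model category probabilities for one item.

    theta: person location on the latent variable.
    thresholds: item response thresholds delta_1..delta_m between successive
    categories. Returns P(X = 0), ..., P(X = m).
    """
    # Cumulative sums of (theta - delta_k); the empty sum for category 0 is 0.
    cum = [0.0]
    for d in thresholds:
        cum.append(cum[-1] + (theta - d))
    weights = [math.exp(c) for c in cum]
    total = sum(weights)
    return [w / total for w in weights]

# Higher theta shifts probability mass toward higher categories (monotonicity):
for theta in (-2.0, 0.0, 2.0):
    print(theta, [round(p, 3) for p in pcm_probs(theta, [-1.0, 0.0, 1.0])])
```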
All Rasch analyses were conducted using RUMM 2030 24 and were performed separately for the dimensions identified using EFA and CFA. All procedures and guidelines were consistent with those recommended by Pallant and Tennant. 25 The initial stage of Rasch analysis was conducted with the aim of determining whether any of the items exhibited problems with fit to the model, item response threshold ordering, or differential item functioning. 25 Local dependence was also assessed. Any items that exhibited such problems were considered for exclusion from the HSCS. See the Supplementary materials for further details regarding these criteria.

Exploratory approach
Table 2 provides a summary of the results from the primary EFA (PAF extraction and oblimin rotation) and related Rasch analyses. The three factors identified for subsequent Rasch analysis were as follows:
• EFA Factor 1. Items 1-7, 9, 10, 19, and 27 (encompassing the physical and role functioning domains, the two pain items, one of the three fatigue items, and one of the two social functioning items);
• EFA Factor 2. Items 11, 20-24, and 26 (encompassing the emotional functioning domain, the insomnia item, one of the two cognitive functioning items, and one of the two social functioning items); and
• EFA Factor 3. Items 12-15 and 18 (encompassing two of the three fatigue items, the appetite loss item, and the two nausea/vomiting items). The two cross-loading items (fatigue items 12 and 18) were assigned to this factor because they are symptoms that are more closely related to the items on this factor than to those on Factor 2.
The results of EFA differed slightly depending on the extraction and rotation method used. Using all 15 combinations of methods: items 1-7, 9, and 19 loaded on Factor 1; items 11, 21-24, and 26 loaded on Factor 2; items 13-15 loaded on Factor 3; and items 8 and 16 exhibited weak loadings on all factors. There were a few noteworthy differences. Items 17 (diarrhea, Factor 3) and 25 (memory, Factor 2/Factor 3) had stronger loadings for PCA than for PAF and maximum likelihood, to the extent that, using a loading cutoff of 0.3, they would have been comfortably included in the PCA solution, but not in the PAF or maximum likelihood solutions. For items 12 (weak) and 18 (tired), loadings were strongest for Factors 2 and 3 under all extraction methods except when quartimax rotation was used; in that case, Factor 1 exhibited the dominant loadings. For items 10 (rest) and 27 (interfered with social activities), Factor 1 exhibited the dominant loading but the strength of cross-loadings differed between extraction/rotation combinations; the same held for item 20 (concentration), except that Factor 2 dominated. Results are available from the authors on request.
Confirmatory approach
The factor loadings obtained from CFA are presented in Table 3. The loadings of all items on their respective factors were relatively strong and all statistically significant (P<0.001). Model fit was adequate (χ2/df = 2.79, comparative fit index = 0.964, Tucker-Lewis index = 0.953, root mean square error of approximation = 0.075). Residual correlations and modification indices suggested additional relations between items 4 and 10, and between items 2 and 3. Items 4 and 10 cover similar content (needing to rest), as do items 2 and 3 (trouble taking a long walk and a short walk). Because items 4 and 10 were posited to load on different factors (Physical Functioning and Fatigue, respectively), cross-loadings were introduced for these items and domains, whereas because items 2 and 3 were posited to load on the same factor (Physical Functioning), the covariance between their error terms was estimated. Estimation of these cross-loadings and covariance resulted in improved model fit (χ2/df = 1.51, comparative fit index = 0.990, Tucker-Lewis index = 0.987, root mean square error of approximation = 0.040).
The correlations between the eight factors are displayed in Table 4. Most noteworthy was the very high (0.86) correlation between role and physical functioning, suggesting that the items in these two factors may reflect a single factor.
Although the hypothesized eight-factor structure of the QLQ-C30 was generally supported, it was decided, based on the results above, that the physical functioning domain (items 1-5) be combined with the role functioning domain (items 6 and 7), as well as item 10, for the purpose of Rasch analysis. Item 10 was not included in the fatigue domain (with items 12 and 18) for Rasch analysis. The other domains were subjected to Rasch analysis without any change from the factors specified a priori.
Rasch analysis
Based on EFA
The factor-level results of the Rasch analysis for the factors derived using EFA are shown in the left panel of Table 2. This table illustrates that Factors 1 and 2 required the removal of items to achieve adequate fit to the Rasch model. High residual correlations were observed between items 2 (long walk) and 3 (short walk), items 4 (stay in bed) and 10 (need to rest), items 6 (daily activities) and 7 (leisure activities), items 12 (weak) and 18 (tired), and items 14 (nausea) and 15 (vomiting). The correlations between items 6 and 7, items 12 and 18, and items 14 and 15 were unsurprising, as the traditional QLQ-C30 domain structure treats these as separate domains (role functioning, fatigue, and nausea/vomiting, respectively). The other two pairs of residual correlations are also unsurprising, given the content of the items. No individual items exhibited misfit or disordered thresholds. Items 1, 6, 14, 21, and 22 exhibited differential item functioning (Table 2).

[Table 3 notes: The results are for the refined model, in which the loadings of items 4 and 10 on both physical functioning and fatigue and the covariance between items 2 and 3 were estimated. Rasch statistics are those obtained from the final analyses, ie, those with misfitting items removed. a Grouping variables exhibiting differential item functioning for the item are listed in this column; b values in this column represent numbers of items with which the item has a residual correlation following Rasch analysis; c cancer sites included prostate, breast, lung, and other; d estimate of loading on the non-a priori factor, ie, fatigue for item 4, physical functioning for item 10. Abbreviation: CFA, confirmatory factor analysis.]

Based on CFA
Table 3 provides a summary of the results from the CFA and related Rasch analyses, and the factor-level results are shown in the right panel of Table 3. Only Factor 2 required the removal of items to achieve adequate fit to the Rasch model (see Table 5 for factor-level Rasch analysis statistics). High residual correlations were observed between items 2 (long walk) and 3 (short walk), items 4 (stay in bed) and 10 (need to rest), and items 6 (daily activities) and 7 (leisure activities). No individual items exhibited misfit or disordered thresholds. Items 1, 6, 12, 14, 15, 21, 22, and 27 exhibited differential item functioning (see Table 3).
Discussion
The factor structures obtained from EFA and CFA followed by Rasch analysis were similar; however, CFA produced more readily interpretable solutions than EFA. Many of the discrepancies between the hypothesized factor structure in CFA and the clusters of items that emerged from EFA were eliminated when the factors obtained from EFA were subjected to Rasch analysis. For example, EFA Factor 2 originally comprised items 11, 20-24, and 26, but following Rasch analysis, this dimension was reduced to the emotional functioning domain of the QLQ-C30 (items 21-24). Item 23 was then further found to misfit and removed. The key point is that the confirmatory approach arrived at this solution more efficiently than the exploratory approach. Furthermore, the two adjustments to the measurement model tested in CFA that were required (namely, the estimation of the relations between items 4 and 10 and items 2 and 3) were readily identified and accommodated in the model.
The EFA results were found to differ somewhat depending on the method of extraction and rotation employed. Although these differences were not large, they may have had some impact on the item selection process. For example, the inclusion or exclusion of item 17 (diarrhea) and different decisions about which domain should include the fatigue items (12 and 18) may affect the composition of the HSCS.
Some aspects of the EFA solution were difficult to interpret. For example, the social functioning items loaded on different factors; specifically, item 26 (interfered with family life) loaded with physical/role functioning items and item 27 (interfered with social activities) loaded with emotional functioning items. Similarly, fatigue items loaded with nausea, vomiting, and lack of appetite. Although post hoc explanations of these relations are possible, and may well be causal (as discussed below), it is difficult to justify the inclusion of such items in the same domain for the purpose of selecting items for a utility instrument. For example, whether respondents experience interference with social activities is arguably a substantively different issue to whether respondents feel tense, and it seems inappropriate for these two items to be competing candidates for inclusion to represent the same factor in the HSCS. This means that judgment must be applied when using EFA, as the factor analysis will establish "factors", and clinical input and interpretation is required to derive the "dimensions" from these factors. In contrast, in the CFA approach this guidance is provided at the outset to inform the factor analysis, meaning that the results directly represent the dimensionality of the measure. It is worth noting that three of the four items with weak EFA loadings (items 8, 16, and 17) were also three of the five items (along with items 11 and 13) that were excluded from the measurement model a priori.
EFA produced a solution that combined the physical (items 1-5) and role functioning domains (items 6 and 7) of the QLQ-C30. In the CFA, model fit was adequate with these two domains kept separate, although the two domains were very highly correlated. Residual PCA, as part of the Rasch analysis, confirmed that these are in fact two separate domains. One possible reason for this is that items 6 and 7 differ from items 1-5 in their "item difficulty", a phenomenon that would be more readily identified by Rasch analysis than factor analysis. An alternative explanation is that there exists a higher order factor that encompasses both physical and role functioning, or that there is some causal relation between these two factors. These latter possibilities are addressed further below, but are in any case more readily addressed using a confirmatory than an exploratory approach.
The confirmatory approach employed in the present analysis provided a structured role for clinical considerations and an explicitly articulated relation to the statistical and psychometric criteria used in the item selection process, whereas in the previously employed exploratory approach, clinical considerations were less formally specified and less explicitly integrated with the statistical analysis.

[Table 5 residue - factor-level Rasch fit statistics (initial analyses): Factor 4 (items 20, 25): item fit = 0.44 (good), person fit = 1.08 (good); Factor 5 (items 9, 19): item fit = 0.83 (good), person fit = 0.97 (good); Factor 6 (items 12, 18): item fit = 0.32 (good), person fit = 0.99 (good); Factor 7 (items 14, 15): item fit = 0.36 (good), person fit = 0.85 (good). Notes: Item fit for both item and person represents the fit residual standard deviation, where a value greater than 1.5 is considered poor. a Although item fit was poor, no individual item exhibited misfit.]
Rowen et al 9, in the derivation of the EORTC 8D, employed the input of a clinician to ensure the statistical results made sense clinically. In the present analysis, we have developed the structured integration of clinical considerations further, into the predefined set of judgment criteria. Furthermore, identifying certain items as of interest a priori allows a structured approach to the selection of items that are of clinical relevance but may not perform adequately in the statistical analysis. For example, although few respondents in this data set reported problems with diarrhea (item 17), the a priori inclusion of this item in the conceptual model allowed clinical considerations to override the statistical criteria. The importance of this is illustrated by the ALTTO trial, in which diarrhea was a critical side effect distinguishing trastuzumab from lapatinib. 19 The omission of diarrhea on statistical grounds, in this case, would result in the loss of potentially important information from the HSCS. This is not to say that the exploratory approach has little value in establishing the domain structure for a HSCS, particularly in cases where an instrument does not have a well-established domain structure.
Limitations
Our analysis was conducted on a sample of patients who were either Norwegian or Swedish, with two-thirds having primary cancer sites that were either breast or prostate and all having recurrent/metastatic cancer. Different results may be obtained from samples of patients with different profiles. Indeed, the EFA solution we obtained differed from that of Rowen et al, 9 who analyzed data from newly diagnosed multiple myeloma patients. Their factor solution may also have differed from ours for reasons related to analysis details, eg, use of parallel analysis to select the number of factors in the present case versus eigenvalues and variance explained. The conclusions drawn from the present analysis would be strengthened by replication using data from patients with a variety of cancer sites, stages, and treatments, and from various countries, using identical statistical techniques.
Conclusion
A confirmatory approach to determining dimensionality for the construction of a HSCS was found to be more efficient and to produce a more readily interpretable domain structure for the QLQ-C30. The confirmatory aspect of this prototype analysis will now be applied on a much larger scale as part of the Multi-Attribute Utility in Cancer (MAUCa) project, involving the pooling of a large number of international data sets covering a range of countries, cancer sites, and stages. Based on the results, a definitive HSCS will be determined. The results of the present analysis will guide this large-scale analysis only inasmuch as they support the use of the particular method -the specific composition of dimensions and psychometric properties of dimensions and items obtained will be assessed independently of the results of the present analysis. This will pave the way for valuation surveys that will provide country-specific utility weights for this HSCS, and thereby complete the provision of a preference-based measure derived from the QLQ-C30.
Supplementary material
Rasch analysis criteria
Poor item fit
The overall fit of the Rasch model was examined using the item-trait interaction χ2 statistic. Good model fit was indicated by a nonsignificant chi-squared statistic. A Bonferroni correction was applied to the criterion of significance, with the alpha value (0.05) divided by the number of items. The presence of misfitting items or persons was indicated by a fit residual standard deviation value of 1.5 or above. Items with individual fit residual values exceeding 2.5 were removed from the Rasch analysis. Persons with fit residuals that exceeded 2.5 were removed only if they appeared to contribute to item misfit. This process was repeated until only well-fitting items remained and the overall goodness of fit of the model was nonsignificant. Any items excluded due to misfit were set aside and assessed according to other criteria, including descriptive statistics and clinical considerations (described below).
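A minimal sketch of these two rules in Python, assuming fit residuals have already been estimated by the Rasch software (eg, exported from RUMM); the item names in the example are hypothetical.

```python
def bonferroni_alpha(alpha, n_items):
    """Bonferroni-corrected significance criterion for the item-trait chi-square."""
    return alpha / n_items

def remove_misfitting(items, fit_residual, cutoff=2.5):
    """Drop any item whose absolute fit residual exceeds the cutoff.

    fit_residual maps item name -> individual fit residual.
    """
    kept = [i for i in items if abs(fit_residual[i]) <= cutoff]
    removed = [i for i in items if abs(fit_residual[i]) > cutoff]
    return kept, removed

print(bonferroni_alpha(0.05, 30))  # 30 items -> adjusted alpha of about 0.0017
print(remove_misfitting(["q21", "q22", "q23"],
                        {"q21": 1.1, "q22": -0.4, "q23": 3.2}))
```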
Assessment of response format
An appropriately functioning item requires a response format that respondents use in a consistent manner. Examining response thresholds - the points at which each consecutive response category for an item is equally likely to be endorsed - allows the assessment of the response format in this regard. For an appropriately functioning item, the response thresholds between successive categories should be ordered, such that the threshold between categories 1 and 2 falls below the threshold between categories 2 and 3, and so on. A disordered response threshold indicates that respondents are not selecting response categories in the manner expected according to their overall scale score.
Invariance of item functioning across different groups
For an item to be included in the HSCS, the probability of selecting a certain response category for a given value of the latent trait should be invariant across groups. If it is not, the item exhibits differential item functioning (DIF). DIF is a form of bias in which systematic differences in patterns of responding to an item are observed between individuals with different characteristics, despite their having the same level of the latent variable. If two or more groups show a consistent difference in item responses across the range of values for the latent variable, this is known as "uniform DIF". "Nonuniform DIF" occurs when the differences between groups vary over the range of values of the latent variable. In RUMM 2030, DIF is assessed using two-way analysis of variance, with predicted score compared across the different levels of the grouping variable and across different levels of the latent trait (where individuals are grouped into a number of "class intervals" based on their latent trait score [35]). The data were examined for DIF across sex and cancer site. (DIF across country is an important issue but has been examined previously.) Because cross-population comparisons using the HSCS are desirable, any items exhibiting DIF were excluded from the HSCS.
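The two-way ANOVA screen for DIF can be sketched as follows, assuming a data frame of one item's residuals with person-level grouping variables and latent-trait class intervals; the column names are assumptions, and statsmodels is used here in place of RUMM's built-in routine.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def dif_anova(df):
    """Two-way ANOVA screen for DIF on one item.

    df is assumed to have columns 'resid' (the item's standardized residuals),
    'group' (eg, sex or cancer site), and 'class_interval' (bins of the latent
    trait estimate). A significant main effect of group suggests uniform DIF;
    a significant group x class-interval interaction suggests nonuniform DIF.
    """
    model = smf.ols("resid ~ C(group) * C(class_interval)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)
```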
Local dependence
Local dependence among items, indicating an association above and beyond that shared by the underlying trait, was assessed by inspection of the residual correlation matrix for values exceeding 0.3. | 2018-04-03T04:46:35.072Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "a421b6fd67c061a1b9cfddcc261397e6b5375840",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=22314",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "019d3d47ae465d818e4ab505118f1883f4ff4121",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
266176668 | pes2o/s2orc | v3-fos-license | Noise-Robust Semi-Supervised Learning for Distantly Supervised Relation Extraction
Introduction
Relation extraction (RE) is a fundamental process for constructing knowledge graphs, as it aims to predict the relationship between entities based on their context. However, most supervised RE techniques require extensive labeled training data, which can be difficult to obtain manually. To address this issue, Distant Supervision (DS) was proposed (Mintz et al., 2009) to automatically generate a labeled text corpus by aligning plain texts with knowledge bases (KB). For instance, if a sentence contains both the subject (s) and object (o) of a triple ⟨s, r, o⟩ (⟨subject, relation, object⟩), then the DS method considers ⟨s, r, o⟩ as a valid sample for that sentence. Conversely, if no relational triples apply, then the sentence is labeled as "NA".

[Figure 1 example - bag of <Obama, United States>: "Obama was born in the United States." / "Obama is the 44th president of the United States." / "Obama is a household name in the United States."]
Distantly supervised datasets usually face high label noise in the training data due to the annotation process. To mitigate the impact of noisy labels caused by distant supervision, contemporary techniques (Lin et al., 2016; Alt et al., 2019; Chen et al., 2021b; Li et al., 2022; Dai et al., 2022) usually employ the multi-instance learning (MIL) framework or modify MIL to train the relation extraction model.
Although MIL-based techniques can identify bag relation labels, they are not proficient in precisely mapping each sentence in a bag to an explicit sentence label (Feng et al., 2018; Jia et al., 2019; Ma et al., 2021). Several studies have focused on improving sentence-level DSRE and have empirically demonstrated the inadequacy of bag-level methods on sentence-level evaluation. (Feng et al., 2018; Qin et al., 2018) apply reinforcement learning to train a sample selector. (Jia et al., 2019) identify confident samples by frequent patterns. (Ma et al., 2021) utilizes negative training to separate noisy data from the training data.
However, these methods have two main issues: (1) These works simply discard all noisy samples and train a relation extraction model with the selected samples. However, filtering out noisy samples directly results in the loss of useful information. (Gao et al., 2021a) notes that the DSRE dataset has a noise rate exceeding 50%; discarding all these samples would result in a significant loss of information.
(2) The confident sample selection procedure is not impeccable, and there may still exist a small amount of noise among the chosen confident samples. Directly training a classifier in the presence of label noise is known to result in noise memorization.
To address these two issues, this work proposes a novel semi-supervised learning framework for sentence-level DSRE. First, we construct a K-NN graph for all samples using their hidden features. Then, we identify confident samples from the graph structure and consider the remaining samples as noisy. For issue (1): our method discards only the noisy labels and treats the corresponding samples as unlabeled data. We then utilize this unlabeled data through pseudo labeling within our robust semi-supervised learning framework to learn a better feature representation for relations. For issue (2): despite our initial selection of confident samples, there may still be noise in the labeled dataset. We have therefore developed a noise-robust semi-supervised learning framework that leverages mixup supervised contrastive learning to learn from the labeled dataset and curriculum pseudo labeling to learn from the unlabeled dataset.
To summarize the contributions of this work:
• We propose a noise-robust Semi-Supervised-Learning framework, SSLRE, for the DSRE task, which effectively mitigates the impact of noisy labels.
• We propose to use graph structure information (weighted K-NN) to identify the confident samples and to effectively convert noisy samples into useful training data by utilizing pseudo labeling.
• The proposed framework achieves significant improvements over previous methods in terms of both sentence-level and bag-level relation extraction performance.
2 Related work
Distantly Supervised Relation Extraction
Relation extraction (RE) is a fundamental process for constructing knowledge graphs (Zhang et al., 2023a; Xia et al., 2022; Zhang et al., 2023b). To generate large-scale auto-labeled data without human effort, (Mintz et al., 2009) first used Distant Supervision to label sentences mentioning two entities with their relations in KGs, which inevitably brings wrongly labeled instances. To tackle this noise, most existing studies on DSRE are founded on the multi-instance learning framework. This approach is leveraged to handle noisy sentences in each bag and to train models by capitalizing on constructed trustworthy bag-level representations. Usually, these methods employ attention mechanisms to assign lower weights to the probably noisy sentences in the bag (Lin et al., 2016; Han et al., 2018b; Alt et al., 2019; Shen et al., 2019; Ye and Ling, 2019; Chen et al., 2021a; Li et al., 2022), or apply adversarial training or reinforcement learning to remove the noisy sentences from the bag (Zeng et al., 2015; Qin et al., 2018; Shang and Wei, 2019; Chen et al., 2020; Hao et al., 2021). However, several studies (Feng et al., 2018; Jia et al., 2019; Ma et al., 2021; Gao et al., 2021a) indicate that bag-level DSRE methods are ineffective for sentence-level relation extraction. This work focuses on extracting relations at the sentence level. (Feng et al., 2018) applied reinforcement learning to identify confident instances based on a reward from the noisy labels. (Jia et al., 2019) build initial reliable sentences based on several manually defined frequent relation patterns. (Ma et al., 2021) assign complementary labels that cooperate with negative training to filter out noisy instances. Unlike previous studies, our method discards only the noisy labels and keeps the unlabeled samples. We use pseudo labeling to effectively utilize the unlabeled samples, which helps to learn better representations.
Semi-Supervised-Learning
In SSL, a portion of the training dataset is labeled and the remaining portion is unlabeled. SSL has seen great progress in recent years. Since (Bachman et al., 2014) proposed a consistency-regularization-based method, many approaches have migrated it into the semi-supervised learning field. MixMatch (Berthelot et al., 2019) proposes to combine consistency regularization with entropy minimization.
Mean Teacher (Tarvainen and Valpola, 2017) and Dual Student (Ke et al., 2019) are also based on consistency learning, aiming for the same outputs from different networks. Recently, FixMatch (Sohn et al., 2020) provides a simple yet effective weak-to-strong consistency regularization framework. FlexMatch (Zhang et al., 2021) provides curriculum pseudo labeling to combat the imbalance of pseudo labels.
Our SSLRE framework differs from these frameworks in two main ways. Firstly, our labeled dataset still contains a small amount of noise, because confident sample identification cannot achieve perfection; therefore, we utilize mixup supervised contrastive learning to combat this noise. Secondly, current SSL methods generate and utilize pseudo labels with the same head, which causes error accumulation during the training stage. To address this issue, we propose utilizing a pseudo classifier head, which decouples the generation and utilization of pseudo labels through two parameter-independent heads to avoid error accumulation.
Learning with Noisy Data
In both computer vision and natural language processing, learning with noisy data is a widely discussed problem. Existing approaches include, but are not limited to, estimating the noise transition matrix (Chen and Gupta, 2015; Goldberger and Ben-Reuven, 2016), leveraging a robust loss function (Lyu and Tsang, 2019; Ghosh et al., 2017; Liu and Guo, 2019), introducing regularization (Liu et al., 2020; Iscen et al., 2022), selecting noisy samples by multi-network or multi-round learning (Han et al., 2018a; Wu et al., 2020), re-weighting examples (Liu and Tao, 2014), generating pseudo labels (Li et al., 2020; Han et al., 2019), and so on. In addition, some advanced state-of-the-art methods combine several techniques, e.g., DivideMix (Li et al., 2020) and ELR+ (Liu et al., 2020).
In this paper, we address the issue of noisy labels in distant relation extraction. Our approach first constructs a K-NN graph to identify confident samples based on their graph structure, and then uses noise-robust mixup supervised contrastive learning to train with the labeled samples.
Methodology
To achieve sentence-level relation extraction in DSRE, we propose a framework called SSLRE, which consists of two main steps. Firstly, we select confident samples from the distantly supervised dataset using a weighted K-NN graph built from all sample representations. We use the selected samples as labeled data and the remaining samples as unlabeled data (as detailed in Section 3.1). Secondly, we employ our robust semi-supervised learning framework to learn from the resulting semi-supervised datasets (as described in Section 3.2). Appendix A delineates the full algorithm.
Specifically, we denote the original noisy dataset in this task as D = {(s_i, ỹ_i)}, where ỹ_i is the distant label of sentence s_i. The labeled dataset (identified confident samples) is denoted as X and the unlabeled dataset as U.
Confident Samples Identification with Weighted K-NN
Our semi-supervised learning module requires dividing the noisy dataset into a labeled dataset and an unlabeled dataset. Inspired by (Lee et al., 2019; Bahri et al., 2020), we utilize neighborhood information in the hidden feature space to identify confident samples. We employ supervised contrastive learning to warm up our model and obtain the representations of all instances. It is noteworthy that deep neural networks tend to initially fit the training data with clean labels during an early training phase, prior to ultimately memorizing the examples with false labels (Arpit et al., 2017; Liu et al., 2020). Consequently, we only warm up our model for a single epoch. Given two sentences s_i and s_j, we can obtain their low-dimensional representations as z_i = θ(s_i) and z_j = θ(s_j), where θ is the sentence encoder. We then calculate their representation similarity using the cosine distance. Then, we build a weighted K-NN graph for all samples based on the cosine distance. To quantify the agreement between s_i and ỹ_i, we first use the label distribution in the K-neighborhood to approximate clean posterior probabilities q̂_c(s_i), where N_i represents the set of K closest neighbors of s_i. We then use the cross-entropy loss ℓ to calculate the disagreement between q̂(s_i) and ỹ_i. Denoting the set of confident examples belonging to the c-th class as X_c, we take X_c to comprise the class-c samples whose disagreement ℓ(q̂(s_i), ỹ_i) falls below a threshold γ_c (Eq. (3)), where γ_c is dynamically defined to ensure a class-balanced set of identified confident examples. To achieve this goal, we use the α fractile of per-class agreements between the original label ỹ_i and max_c q̂_c(s_i) across all classes to determine how many examples should be selected for each class.
Finally, we obtain the labeled set X as the union of the per-class confident sets X_c, and the unlabeled set U as the remaining samples (Eq. (4)).
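A minimal numpy sketch of this selection step, assuming L2-normalised warm-up embeddings; the similarity weighting of neighbourhood labels and the per-class quantile rule below are simplifications of the paper's Eq. (2)-(4), whose exact forms are not reproduced here.

```python
import numpy as np

def select_confident(z, y_noisy, n_classes, k=10, alpha=0.8):
    """Split the noisy dataset into labeled (confident) and unlabeled indices.

    z: (N, d) L2-normalised warm-up embeddings; y_noisy: (N,) distant labels.
    """
    sim = z @ z.T                            # cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self from the neighbourhood
    nbrs = np.argsort(-sim, axis=1)[:, :k]

    # Similarity-weighted label distribution in each K-neighbourhood.
    q = np.zeros((len(z), n_classes))
    for i, nb in enumerate(nbrs):
        for j in nb:
            q[i, y_noisy[j]] += max(sim[i, j], 1e-8)
    q /= q.sum(axis=1, keepdims=True)

    # Disagreement: cross-entropy between neighbourhood posterior and the label.
    ce = -np.log(q[np.arange(len(z)), y_noisy] + 1e-12)

    labeled = []
    for c in range(n_classes):
        idx = np.where(y_noisy == c)[0]
        if idx.size == 0:
            continue
        gamma_c = np.quantile(ce[idx], alpha)        # per-class threshold
        labeled.extend(idx[ce[idx] <= gamma_c].tolist())
    labeled = np.array(sorted(labeled))
    unlabeled = np.setdiff1d(np.arange(len(z)), labeled)
    return labeled, unlabeled
```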
Noise-Robust Semi-Supervised Learning
Despite selecting confident samples from the distantly supervised dataset, a small amount of noise still remains in the labeled dataset. Naively training a classifier in the presence of label noise leads to noise memorization (Liu et al., 2020), which degrades performance. We propose a noise-robust semi-supervised learning framework to mitigate the influence of the remaining noise.
Data Augmentation with Dropout
Inspired by SimCSE (Gao et al., 2021b), we augment training samples through embedding processing.
In particular, we obtain different embeddings of a sentence by applying dropout during the forward pass. Additionally, we propose using a high dropout rate for strong augmentation and a low dropout rate for weak augmentation. The sentence encoder is denoted as θ, with forward propagation using a high dropout rate denoted as θ_s and forward propagation using a low dropout rate denoted as θ_w.
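A sketch of the two-view forward pass, assuming a PyTorch encoder whose dropout modules can be reconfigured in place; the 0.2/0.4 rates follow the settings reported in Appendix D.

```python
import torch

def two_view_encode(encoder, batch, weak_p=0.2, strong_p=0.4):
    """Weak and strong views of a batch via two forward passes with different
    dropout rates. `encoder` is any nn.Module (eg, a BERT wrapper)."""
    def set_dropout(p):
        for m in encoder.modules():
            if isinstance(m, torch.nn.Dropout):
                m.p = p

    encoder.train()          # keep dropout active for both passes
    set_dropout(weak_p)
    z_weak = encoder(batch)
    set_dropout(strong_p)
    z_strong = encoder(batch)
    return z_weak, z_strong
```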
Unsupervised Learning with Pseudo Labeling
In this part, we propose two modules to learn from the unlabeled dataset: (1) to generate and utilize pseudo labels independently, we propose a pseudo classifier head; (2) we utilize curriculum pseudo labeling to perform consistency learning while combating the imbalance of the generated pseudo labels.
Pseudo classifier head: Pseudo labeling is one of the prevalent techniques in semi-supervised learning. Existing approaches generate and utilize pseudo labels with the same head. However, this may cause training bias, ultimately amplifying the model's errors as self-training continues (Wang et al., 2022). To reduce this bias when using pseudo labels, we propose utilizing a two-classifier model consisting of an encoder θ with both a classifier head ϕ and a pseudo classifier head ψ. We optimize the classifier head ϕ using only labeled samples, without any unreliable pseudo labels from unlabeled samples. Unlabeled samples are used solely for updating the encoder θ and the pseudo classifier head ψ. In particular, the classifier head ϕ generates pseudo labels (ϕ ∘ θ_w)(u_b) for unlabeled samples (with no gradient), and the loss on unlabeled samples is calculated as ℓ((ψ ∘ θ_s)(u_b), (ϕ ∘ θ_w)(u_b)), where ℓ denotes the cross-entropy loss. This decouples the generation and utilization of pseudo labels through two parameter-independent heads to mitigate error accumulation.
Curriculum Pseudo Labeling: Due to the highly unbalanced dataset, using a constant cut-off τ for all classes in pseudo labeling results in almost all selected samples (those with confidence greater than the cut-off) being labeled as 'NA', which is the dominant class.
Inspired by FlexMatch (Zhang et al., 2021), we use Curriculum Pseudo Labeling (CPL) to combat unbalanced pseudo labels. We first generate pseudo labels for iteration t from the weakly augmented view; these labels are then used as the targets of the strongly augmented data. The unsupervised loss term L_u,t (Eq. (6)) is the cross-entropy between the retained pseudo labels and the predictions on the strongly augmented view, where a pseudo label is retained only if its confidence exceeds a flexible per-class threshold obtained by scaling the global cut-off τ with the normalized learning status σ_t(c)/max_c′ σ_t(c′), and σ_t(c) represents the number of samples whose predictions fall into class c with confidence above the threshold.
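A sketch of the flexible thresholding, assuming FlexMatch-style normalisation of σ_t(c); FlexMatch accumulates σ_t(c) over the whole unlabeled set, so the per-batch count here is a simplification.

```python
import torch

def cpl_select(probs_weak, tau=0.95):
    """Per-batch curriculum pseudo labeling.

    probs_weak: (B, C) class probabilities from the weak view (no gradient).
    Returns the pseudo labels and a boolean mask of retained samples.
    """
    conf, pseudo = probs_weak.max(dim=1)
    n_classes = probs_weak.size(1)
    # sigma_t(c): confident predictions per class (per-batch simplification).
    sigma = torch.zeros(n_classes, device=probs_weak.device)
    for c in range(n_classes):
        sigma[c] = ((pseudo == c) & (conf >= tau)).sum()
    beta = sigma / sigma.max().clamp(min=1)   # normalised learning status
    flex_tau = beta * tau                     # flexible per-class threshold
    mask = conf >= flex_tau[pseudo]
    return pseudo, mask
```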
Mixup Supervised Contrastive Learning
We target learning robust relation representations in the presence of remaining label noise. In particular, we adopt the contrastive learning approach: we randomly sample N sentences and encode each sentence twice with the same dropout rate to obtain two views, then normalize the embeddings by L2 normalization. The resulting minibatch {z_i, y_i}_{i=1}^{2N} consists of 2N normalized sentence embeddings and their corresponding labels. We perform supervised contrastive learning on the labeled samples, with a per-anchor loss L_i of the standard supervised contrastive form (Eq. (9)). To make the representation learning robust, we add Mixup (Berthelot et al., 2019) to supervised contrastive learning. Mixup strategies have demonstrated excellent performance in classification frameworks and have further shown promising results in preventing label noise memorization. Inspired by this success, we propose mixup supervised contrastive learning, a novel adaptation of mixup data augmentation for supervised contrastive learning.
Mixup performs convex combinations of pairs of samples, x̃_i = λ x_a + (1 − λ) x_b, where λ ∈ [0, 1] ∼ Beta(α_m, α_m) and x̃_i denotes the training example that combines the two mini-batch examples x_a and x_b. A linear relation in the contrastive loss is imposed as L̃_i = λ L_a + (1 − λ) L_b (Eq. (11)), where L_a and L_b have the same form as L_i in Eq. (9). The supervised loss L_s is the sum of Eq. (11) over the mixed instances (Eq. (12)). Mixup supervised contrastive learning helps to learn a robust representation for relations, but it cannot map the representation to a class. To learn the mapping from the learned representation to the relation class, classification learning using the cross-entropy loss is also employed (Eq. (13)).
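A PyTorch sketch, assuming the standard supervised contrastive form for L_i and representation-level mixing (mixing raw token sequences is not meaningful, so the inputs h_a, h_b here are encoder outputs); the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Standard supervised contrastive loss over L2-normalised embeddings z."""
    sim = z @ z.T / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    masked = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    return -(masked.sum(1) / pos.sum(1).clamp(min=1)).mean()

def mixup_supcon(h_a, y_a, h_b, y_b, alpha_m=1.0, tau=0.1):
    """Mixed-pair loss: lam * L(y_a) + (1 - lam) * L(y_b)."""
    lam = torch.distributions.Beta(alpha_m, alpha_m).sample().item()
    z = F.normalize(lam * h_a + (1 - lam) * h_b, dim=1)  # convex combination
    return lam * supcon_loss(z, y_a, tau) + (1 - lam) * supcon_loss(z, y_b, tau)
```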
Training Objective
Combining the above analyses, the total objective is the sum of the mixup supervised contrastive loss, the classification loss, and the unsupervised loss, with the latter two weighted by λ_c and λ_u, respectively (Eq. (14)).

4 Experiments
Evaluation and Parameter Settings
To guarantee fairness of evaluation, we adopt both sentence-level evaluation and bag-level evaluation in our experiments. Further details of the evaluation methods are available in Appendix C. To achieve bag-level evaluation under sentence-level training, we use the at-least-one (ONE) aggregation strategy (Zeng et al., 2015), which first predicts relation scores for each sentence in the bag and then takes the highest score for each relation. The details of the hyper-parameters are available in Appendix D.
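The ONE aggregation reduces to a per-relation maximum over the sentences of a bag, as in this small sketch:

```python
import numpy as np

def bag_scores_one(sentence_scores):
    """ONE aggregation: sentence-level relation scores for one bag
    (n_sentences x n_relations) -> bag-level score per relation."""
    return np.asarray(sentence_scores).max(axis=0)

# Example: two sentences, three relations.
print(bag_scores_one([[0.1, 0.7, 0.2], [0.6, 0.2, 0.2]]))  # [0.6 0.7 0.2]
```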
Baselines
In order to prove the effectiveness of SSLRE, we compare our method with state-of-the-art sentence-level DSRE frameworks and bag-level DSRE frameworks.

For bag-level baselines, RESIDE (Vashishth et al., 2018) exploits entity type and relation alias information to add a soft limitation for DSRE. DISTRE (Alt et al., 2019) combines selective attention with its Transformer-based model. CIL (Chen et al., 2021a) utilizes contrastive instance learning under the MIL framework. HiCLRE (Li et al., 2022) proposes a hierarchical contrastive learning framework. PARE (Rathore et al., 2022) proposes concatenating all sentences in the bag so that attention can be computed over every token in the bag. Besides, we combine BERT with different aggregation strategies: ONE, which is mentioned in Section 4.2; AVG, which averages the representations of all the sentences in the bag; and ATT (Lin et al., 2016), which produces a bag-level representation as a weighted average over sentence embeddings, with weights determined by attention scores between sentences and relations.

For sentence-level baselines, RL-DSRE (Feng et al., 2018) applies reinforcement learning to train a sample selector with feedback from a manually designed reward function. ARNOR (Jia et al., 2019) selects reliable instances based on the reward of attention scores on selected patterns. SENT (Ma et al., 2021) filters noisy instances based on negative training.
Results
We first evaluate our SSLRE framework on the NYT10m and WIKI20m datasets. Table 1 shows the overall performance in terms of sentence-level evaluation. From the results, we can observe that: (1) Our SSLRE framework demonstrates superior performance on both datasets, surpassing all strong baseline models significantly in terms of F1 score. In comparison to the strongest baseline models on the two distantly supervised datasets, SSLRE displays a significant enhancement in performance (i.e., +6.3% F1 and +3.4% F1). (2) The current sentence-level DSRE models (SENT, ARNOR) fail to outperform the state-of-the-art MIL-based techniques in terms of F1 score on these datasets. This could be attributed to the loss of information resulting from the elimination of samples: unlike the MIL-based methods that employ all samples for training, these models only utilize selected samples, and the selection procedure may not always be reliable. (3) The performance of state-of-the-art MIL-based methods is not substantially superior to that of the BERT baseline. This suggests that the MIL modules, which are specifically crafted for this task, do not exhibit significant effectiveness when evaluated at the sentence level.

Table 2 shows the overall performance in terms of bag-level evaluation. Although trained without the MIL framework, our SSLRE framework achieves state-of-the-art performance on bag-level relation extraction. This finding suggests that sentence-level training can also yield excellent results on bag-label prediction. This observation is also consistent with (Gao et al., 2021a; Amin et al., 2020). On the wiki20m dataset, we note a consistent improvement as well, although it is not as evident as in the case of NYT10m. We surmise that this could be attributed to the fact that the wiki20m dataset is relatively less noisy compared to NYT10m.

We also compared our framework to several strong baselines using held-out evaluation on the NYT10 dataset, which is detailed in Appendix B.
Ablation Study
We conducted ablation experiments on the NYT10m dataset to assess the effectiveness of the different modules in our SSLRE framework, removing each of the argued contributions one by one. For the unsupervised learning part, we remove the pseudo classifier head and CPL one at a time. For the supervised learning part, we switch from mixup supervised contrastive learning to supervised contrastive learning and to cross entropy as the objective. In terms of confident sample identification methods, we alternate between random selection and NLI-based selection instead of our K-NN method. The NLI method performs zero-shot relation extraction through Natural Language Inference (NLI) (Sainz et al., 2021) and then identifies confident samples based on the level of agreement between the distant label and the NLI soft label. Table 3 shows the ablation study results. We conclude that: (1) The unsupervised learning part effectively utilizes the unlabeled samples; removing the pseudo classifier head and CPL leads to decreases of 1.3% and 5.1% in micro-F1, respectively.
(2) When dealing with noisy labeled data in supervised learning, mixup supervised contrastive learning proves to be more robust than both supervised contrastive learning (-2.9%) and cross entropy (-4.2%). (3) Our K-NN-based confident sample identification method outperforms the random method by 6.5% and the NLI method by 3.3%. This indicates that our K-NN method can effectively select confident samples.
Analysis on KNN
We conducted an evaluation of the performance of the weighted k-nearest neighbors (kNN) method in terms of its ability to select confident samples. To elaborate, we intentionally corrupted the labels of instances in the nyt10m test set with random probabilities of 20%, 40%, and 60%. Our objective was to assess whether our weighted kNN method could effectively identify the uncorrupted (confident) instances. We trained our model on the corrupted nyt10m test set for 10 epochs, considering its relatively smaller size compared to the training set, which required more epochs to converge. To evaluate the ability of the weighted kNN to identify confident samples, we report the recall and precision metrics; the results are shown in Table 4. It is worth noting that precision is the more important metric, because our goal is to make the labeled set as clean as possible.
t-SNE analysis
To demonstrate that preserving unlabeled samples facilitates learning a better representation than discarding them, we used the sentence representations obtained from θ as input for dimension reduction via t-SNE to acquire two-dimensional representations. We focused on four primary relation classes: "/location/location/contains", "/business/person/company", "/location/administrative_division/country", and "/people/person/nationality". As depicted in Figure 4, leveraging unlabeled samples via pseudo labeling enhances the clustering of data points with identical relations and effectively separates distinct classes from one another.
Appendix F shows the t-SNE results of all classes.
Conclusion
In this paper, we propose SSLRE, a novel sentence-level framework grounded in semi-supervised learning for the DSRE task. Our SSLRE framework employs mixup supervised contrastive learning to tackle the noise present in the selected samples, and it leverages unlabeled samples through pseudo labeling, which effectively utilizes the information contained within noisy samples.
Experimental results demonstrate that SSLRE outperforms strong baseline methods in both sentence-level and bag-level relation extraction.
Limitations
To augment textual instances, we leverage dropout during forward propagation. This necessitates propagating each instance twice to generate the augmented sentence embeddings, so the demand for GPU resources is higher compared to previous methods. Furthermore, we adjust the dropout rate to regulate the augmentation intensity for semi-supervised learning and show its effectiveness through the performance results. Nonetheless, we have not conducted explicit experiments to investigate the interpretability of this choice, which needs further investigation.
A Algorithm
Algorithm 1 provides the pseudo-code of the overall framework.
B Held-out evaluation
C Evaluation Settings
Sentence-level evaluation: Different from the bag-level evaluation used by MIL-based models, a sentence-level (or instance-level) evaluation assesses model performance directly on all of the individual instances in the dataset, which requires the model to accurately predict the relation for each sentence. Following (Jia et al., 2019; Ma et al., 2021; Liu et al., 2022), we report micro-Precision (µPrec.), micro-Recall (µRec.), and micro-F1 (µF1) for sentence-level evaluation.
Bag-level evaluation: Bag evaluation assesses the performance of bag relation label extraction. Since the manually annotated data are at the sentence level, following (Gao et al., 2021a), we construct bag-level annotations in the following way: for each bag, if one sentence in the bag has a human-labeled relation, the bag is labeled with this relation; if no sentence in the bag is annotated with any relation, the bag is labeled as "NA".
D Parameter Settings
The underlying sentence encoder is implemented with BERT-base (Devlin et al., 2019), which generates a 768-dimensional context-aware representation for each token. During the training stage, we set the learning rate of the model to 2 × 10−5 and the batch size to 64, determined through a grid search over batch sizes in {16, 32, 64} and learning rates in {1e-5, 2e-5, 5e-5}. We train the model for 5 epochs and use Adam (Kingma and Ba, 2014) as the optimizer. The mixup parameter α_m is set to 1, the classifier loss weight λ_c to 0.2, the fractile α to 0.8, the unsupervised loss weight λ_u to 1, and the CPL threshold τ to 0.95. We set the dropout rate for weak augmentation to 0.2 and the dropout rate for strong augmentation to 0.4. Further analysis of the strong-augmentation dropout rate is presented in Section 4.7.
E PR-curve
We report the P-R curve on the NYT10m dataset in Figure 5.
F Additional t-SNE analysis
Figure 6 shows the t-SNE results on all classes of the sentence representation.
Algorithm 1: SSLRE Algorithm
Input: Noisy Dataset D
Output:
1 Warm up θ and ϕ for one epoch using supervised contrastive learning and get θ′.
Figure 1: Bag-level RE maps a bag of sentences to bag labels. Sentence-level RE maps each sentence to a specific relation.
Figure 2: An overview of the proposed framework, SSLRE. D, X, U denote the original noisy dataset, labeled dataset, and unlabeled dataset. θ indicates the encoder. θ_w and θ_s denote forward passes with lower and higher dropout rates, respectively. ϕ and ψ are the classifier head and pseudo classifier head. L_s is the mixup supervised contrastive loss defined in Eq. (12), and L_u,t is the unsupervised loss defined in Eq. (6).
Figure 3: Strong augmentation with different dropout rates.
Figure 6: t-SNE visualization of representations with pseudo labeling (SSL) and without (Sup). SSL achieves better clustering results compared with Sup, especially for the green and light purple classes.
Table 1: Sentence-level evaluation results on NYT10m and wiki20m. Bold and underline indicate the best and the second best scores.
Table 2: Bag-level evaluation results on nyt10m and wiki20m. SSLRE-ONE represents SSLRE with the ONE aggregation strategy.
Table 3: Ablation study of SSLRE on NYT10m.
Table 4: The effect of KNN.
Mengqi Zhang, Yuwei Xia, Qiang Liu, Shu Wu, and Liang Wang. 2023a. Learning latent relations for temporal knowledge graph reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12617-12631, Toronto, Canada. Association for Computational Linguistics.
Mengqi Zhang, Yuwei Xia, Qiang Liu, Shu Wu, and Liang Wang. 2023b. Learning long- and short-term representations for temporal knowledge graph reasoning. In Proceedings of the ACM Web Conference 2023, WWW '23, page 2412-2422, New York, NY, USA. Association for Computing Machinery.
Table 5 :
Held-out evaluation on NYT10 | 2023-12-13T14:10:15.578Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "e32f79d7db7e80fc9ae0213606a18f92cef3c854",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.findings-emnlp.876.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8970ae3f65410b912c3901e1ae04c50d94fa033e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
255290883 | pes2o/s2orc | v3-fos-license | Inequality in Drug Utilization among Chronic Myeloid Leukaemia Patients in Malaysia: A Cost-Utility Analysis
Background: The burden of chronic myeloid leukaemia (CML) is increasing due to longer patient survival, better life expectancy of the general population, and increasing drug prices. Funding is one of the main concerns in the choice of CML medication used worldwide; thus, patient assistance programmes were introduced to ensure accessibility to affordable treatment. In this study, we evaluated CML drug distribution inequality in Malaysia through patient assistance programmes, using pharmaco-economics methods to evaluate CML treatment from the care provider’s perspective. Methods: Patients with CML were recruited from outpatient haematological clinics at the national centre of intervention and referral for haematological conditions and a public teaching hospital. The health-related quality of life or utility scores were derived using the EuroQol EQ-5D-5L questionnaire. Costing data were obtained from the Ministry of Health Malaysia Casemix MalaysianDRG. Imatinib and nilotinib drug costs were obtained from the administration of the participating hospitals and pharmaceutical company. Results: Of the 221 respondents in this study, 68.8% were imatinib users. The total care provider cost for CML treatment was USD23,014.40 for imatinib and USD43,442.69 for nilotinib. The governmental financial assistance programme reduced the total care provider cost to USD13,693.51 for imatinib and USD19,193.45 for nilotinib. The quality-adjusted life years (QALYs) were 17.87 and 20.91 per imatinib and nilotinib user, respectively. Nilotinib had a higher drug cost than imatinib, yet its users had better life expectancy, utility score, and QALYs. Imatinib yielded the lowest cost per QALYs at USD766.29. Conclusion: Overall, imatinib is more cost-effective than nilotinib for treating CML in Malaysia from the care provider’s perspective. The findings demonstrate the importance of cancer drug funding assistance for ensuring that the appropriate treatments are accessible and affordable and that patients with cancer use and benefit from such patient assistance programmes. To establish effective health expenditure, drug distribution inequality should be addressed.
Introduction
Medicines represent one of the most frequently used health technology components for disease prevention and treatment. More than 24% of global total health expenditure is on drug purchasing (Milani and Scholten, 2011). In developing countries, a significant amount of pharmaceutical spending is paid out-of-pocket by individuals (Du et al., 2019; Kong et al., 2010; Lu et al., 2011), which imposes a substantial financial burden on patients and presents an increased challenge for care providers, particularly for cancer drugs.
To ensure universal healthcare coverage, many countries introduced patient assistance programmes to ensure the availability of affordable treatment in sufficient quantities and to improve access to medicines likely to
have a high budget influence either due to high treatment cost per patient or large volumes of use (Lu et al., 2015). Nonetheless, such programmes are typically of limited duration. Thereafter, some countries shifted the financial burden to their governments, whereas a few opted for self-paying patients to pay for the medicines, which may lead to catastrophic health expenditure (CHE). Such a financial burden would affect the patient's quality of life in addition to the other effects of the economic burden of health conditions, such as multiple hospital admissions, increased length of hospital stay, hospital investigations, and medical procedures.
Chronic myeloid leukaemia (CML) is a chronic debilitating health condition that exemplifies a substantial burden, requiring lifelong treatment, recurrent hospital appointments, and costly medication. CML accounts for 15% of all leukaemias, and its incidence varies globally from 0.4 to 1.5 per 100,000 population (Au et al., 2009; Besa et al., 2014; Kim et al., 2010). Nevertheless, the CML disease burden is increasing due to longer patient survival, better life expectancy of the general population, and increasing drug prices. A group of targeted therapy drugs, tyrosine kinase inhibitors (TKI), substantially improve the survival and life of patients with CML, ensuring that they achieve the life expectancy of the normal population (Dalziel et al., 2004; Druker et al., 2006; Reed et al., 2004; Rochau, Kluibenschaedl, et al., 2015). Unfortunately, TKI are among the most expensive outpatient cancer drugs available. When it became available in 2001, the first-generation TKI imatinib cost approximately USD30,000 per year of treatment (Chhatwal et al., 2015). By 2012, the price had tripled to USD92,000. The second- and third-generation TKI cost approximately USD100,000 or more annually in 2010 (Chhatwal et al., 2015).
In Malaysia, to date, only imatinib and nilotinib, the first- and second-generation TKI, respectively, are approved for treating CML under the MOH medicine formulary. Nonetheless, the TKI quantity provided under this scheme is limited and there is a long waiting list for patients with CML before they can access the medication.
Accordingly, the study objective was to evaluate CML drug distribution inequality through patient assistance programmes in Malaysia by using pharmaco-economics methods to evaluate the treatment cost for patients with CML from the care provider's perspective. This study highlights the significance of effective and efficient health expenditure of CML treatment and aims to act as a potential reference for government assistance programmes for cancer drugs.
Design and Setting
This was a cross-sectional study conducted at two health centres in Malaysia between November 2019 and March 2020. Ampang Hospital, Kuala Lumpur, is the national intervention centre for all haematological conditions, and its haematology expertise has led to the MOH naming it the National Reference Centre (Ministry of Health Malaysia, 2017). This tertiary publicly funded specialist hospital caters to more than 800 patients with CML from at least three neighbouring states, thus serving approximately half of the patients with CML in the country. Hospital Canselor Tuanku Muhriz, Kuala Lumpur, is one of the four public teaching hospitals in Malaysia. It has more than 1,000 beds and manages approximately 100 patients with CML (Malaysian Medical Resources, 2020).
For this study, patients with CML were recruited via random sampling from the haematological outpatient clinics of Hospital Ampang and Hospital Canselor Tuanku Muhriz. Participation was voluntary, and written informed consent was obtained from the patients or the parents/legal caretakers of patients aged <18 years. The inclusion criteria were patients with CML receiving treatment at the aforementioned health centres and at least 1 month of treatment with either imatinib or nilotinib, but not both. All eligible patients were informed of the study purpose, and those who consented to participate were provided with a validated questionnaire in English or Malay based on their preference. We obtained further information from the hospital information system and the patients' medical notes. The study was approved by the ethics committees of the participating health centres and the MOH Medical Research and Ethics Committee (MREC) (NMRR-19-1090-47137 IIR).
Cost Analysis
The total cost of managing CML was calculated from the care provider's perspective. Costing data were obtained from the MOH Casemix MalaysianDRG. The MOH Casemix System was developed as a patient classification tool that groups patients with relatively homogeneous resources and clinical characteristics for each group using MalaysianDRG V2 2016 (Pharmacoeconomic guidelines for Malaysia, 2018). The main outputs retrievable from the Executive Information System module include Major Diagnostic Categories, Diagnosis-Related Group (DRG), treatment cost per DRG, Severity of Illness, and Casemix Index (hospital efficiency index) (Pharmacoeconomic guidelines for Malaysia, 2018). To date, a total of 102 government hospitals and medical institutions in all 14 Malaysian states use the casemix system and MalaysianDRG. The casemix system uses the top-down costing approach and consists of two types of costing data: direct and indirect costs. Medication cost was calculated based on the 2018 unit price of each drug, obtained from the hospital administrations and the pharmaceutical company to acquire the assisted and unassisted drug costs, respectively. The unassisted or listing cost is the recommended retail price at which the manufacturer recommends that the retailer sell the product. The costs were converted into 2020 US dollars (RM4.20 = USD1.00) (Central Bank of Malaysia, 2020).
Then, both the casemix costing data and medication cost were added to yield the total care provider's cost of CML treatment in Malaysia. To prevent double counting, the drug supply component within the casemix costing data was subtracted prior to determining the total provider treatment cost.
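As a minimal sketch of this costing arithmetic in Python (all figures and names below are hypothetical illustrations, not values from the MalaysianDRG casemix data; only the conversion rate is taken from the text):

RM_PER_USD = 4.20  # 2020 conversion rate used in this study

def provider_cost_usd(casemix_cost_rm, drug_supply_rm, medication_cost_rm):
    """Total care provider cost per patient, in 2020 US dollars.

    The drug supply component embedded in the casemix cost is subtracted
    first to avoid double counting, then the separately priced medication
    cost is added back.
    """
    total_rm = (casemix_cost_rm - drug_supply_rm) + medication_cost_rm
    return total_rm / RM_PER_USD

# Hypothetical example: RM12,000 casemix cost containing a RM2,000 drug
# supply component, plus RM90,000 in annual TKI medication cost.
print(f"USD {provider_cost_usd(12_000, 2_000, 90_000):,.2f}")  # USD 23,809.52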
Next, the differences between the cost and outcomes of imatinib and nilotinib treatment for patients with CML were compared using the incremental cost-effectiveness ratio (ICER). The ICER is the ratio of the cost difference between the two treatments to the difference between the two outcomes of this study. The gross domestic product (GDP) per capita was used as the cost-effectiveness threshold (CET), as recommended by the World Health Organization (WHO) Commission on Macroeconomics and Health (World Health Organization, 2001). An intervention with an ICER of less than one GDP per capita is considered the most cost-effective CML treatment choice (Adam, 2003; Grosse, 2008). In this study, the CET was the 2019 Malaysian GDP per capita of RM46,450 (USD11,059.52) (Department of Statistics Malaysia, 2019).
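A minimal sketch of the ICER calculation and threshold comparison; the QALY figures below are the study's reported group totals, while the cost inputs are hypothetical placeholders:

CET_USD = 11_059.52  # 2019 Malaysian GDP per capita

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# QALYs (20.91 vs. 17.87) are the study's reported group totals; the cost
# inputs here are hypothetical placeholders.
ratio = icer(cost_new=250_000, cost_old=220_000, qaly_new=20.91, qaly_old=17.87)
print(f"ICER: USD {ratio:,.0f} per QALY gained; below CET: {ratio < CET_USD}")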
Sensitivity Analysis
Sensitivity analysis was performed to resolve uncertainties behind the input parameters and to evaluate the robustness of the outcomes towards variations in the final decision model. A best-, base-, and worst-case scenario analysis was performed. Multiway sensitivity analysis was conducted to reflect the best- and worst-case scenarios by applying several circumstances for cost and outcomes, such as drug price variation, outcome variation, and the presence of governmental financial assistance (i.e. assisted vs. unassisted). Statistical analysis was performed using the Statistical Package for the Social Sciences 22 (SPSS Inc.). Cost analysis was performed using Microsoft Excel 365 (Microsoft Corp.).
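As an illustration of the multiway scenario logic, a sketch only: the incremental costs and QALYs below are invented placeholders, not the study's actual drug-price or outcome variations.

CET_USD = 11_059.52  # cost-effectiveness threshold (2019 GDP per capita)

# (incremental cost in USD, incremental QALYs) per scenario -- placeholders.
scenarios = {
    "best": (25_000, 3.5),   # e.g. assisted drug price, favourable outcomes
    "base": (30_000, 3.0),
    "worst": (60_000, 2.5),  # e.g. unassisted list price, poorer outcomes
}

for name, (d_cost, d_qaly) in scenarios.items():
    ratio = d_cost / d_qaly
    verdict = "below CET" if ratio < CET_USD else "above CET"
    print(f"{name:>5}: ICER = USD {ratio:,.0f}/QALY ({verdict})")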
Results
A total of 221 respondents (response rate: 99.5%) consented to participate in this study. Table 1 lists the respondents' demographic characteristics. More than half were male (56.6%) and 53.4% were Malay. The CML stage at diagnosis was predominantly chronic (89.6%), followed by the accelerated stage (8.6%) and blast stage (1.8%). Approximately two-thirds of the participants were imatinib users (68.8%) and 31.2% were nilotinib users. The mean age at diagnosis (n = 221) was 40.87 ± 16.45 years (range: 13-81 years). The mean duration since diagnosis was 6.38 ± 4.65 years, the mean time to starting TKI medication was 94.24 ± 401.27 days, and the mean utility score was 0.889 ± 0.140. Table 2 illustrates the utility score and life expectancy for each CML treatment. The imatinib users had 17.87 QALYs and the nilotinib users had 20.91 QALYs. There were no significant differences between the two treatments in either utility score or life expectancy (p = 0.508 and p = 0.275, respectively). Figure 1 illustrates the incremental cost-effectiveness plane of the scenario analysis with 50% of GDP per capita as the CET. All three scenarios demonstrated positive incremental costs and incremental QALYs; nonetheless, only the best- and base-case scenarios were below the CET.
Measure of Effectiveness
The study effectiveness or outcome measure was quality-adjusted life years (QALYs). QALYs are the recommended measure of outcome for health technology assessment as they encapsulate the effect of a treatment on a patient's length of life and their health-related quality of life (HRQoL) (National Institute for Health and Clinical Excellence, 2013). QALYs are calculated by multiplying the duration of time spent in a health state by the HRQoL weight (i.e. utility score) associated with that health state. Therefore, the two key elements, HRQoL and survival, are incorporated.
The HRQoL or utility scores were obtained with the EuroQol EQ-5D-5L questionnaire, a standardised and recommended health status measurement tool for yielding a measure of health and quality of life in clinical and economic appraisals (National Institute for Health and Clinical Excellence, 2013; EuroQol Research Foundation, 2019). The EQ-5D-5L measures five dimensions (mobility, self-care, usual activities, pain/discomfort, anxiety/depression) (Gudex, 2005). Each health state can be denoted using a five-digit number, and the answers for the five domains are transformed to generate a summary score, which indicates the overall utility score. The questionnaire has been validated in Malay (Shafie et al., 2011; Chen et al., 2010) with a Cronbach's alpha of 0.58, and the measurement has also been validated for the Malaysian population (Shafie et al., 2019a, 2019b). Prior permission was obtained from EuroQol for the usage of the EQ-5D forms in this study (registration ID: 30756). Life expectancy data were obtained via literature review (Botteman et al., 2011; Ovanfors et al., 2011; Snedecor et al., 2012), as there is a lack of published data on local survival analysis for CML. The QALYs are the product of the HRQoL (i.e. utility score) and survival (i.e. life expectancy).
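A minimal sketch of the QALY arithmetic with hypothetical inputs (a real analysis would use each patient's EQ-5D-5L utility score and the literature-derived life expectancy):

def qalys(utility, life_years):
    """QALYs = health-state utility weight x years spent in that state."""
    return utility * life_years

# Hypothetical patient: utility 0.89 sustained over 20 remaining life-years.
print(f"{qalys(0.89, 20.0):.2f} QALYs")  # 17.80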
Discussion
Good health cannot be achieved without access to pharmaceutical products. Lack of access to medicines is a health inequality of global concern, especially in developing countries. The WHO recommends ensuring that all people and communities receive the quality services they need and are protected from health threats without financial hardship (World Health Organization, 2017). This includes the full range of critical affordable health services, from health promotion, prevention, and diagnosis to recovery and palliative care. Regrettably, approximately 100 million people worldwide are pushed into "extreme poverty" by out-of-pocket healthcare costs (World Health Organization, 2017). Nonetheless, health access inequality is both preventable and solvable. With improved healthcare access driven by progress targeted at poor and rural populations and by fairer, more equitable financing, health inequalities have decreased over the past 20 years (Victora et al., 2017).
CML is a health condition that requires lifelong commitment, which is often physically and mentally draining, time-consuming, and financially burdensome. TKI treatment was instrumental in changing the status of CML from a lethal to a chronic disease. Prior to the availability of TKI treatment, the average survival of patients with CML was only 5 years (National Comprehensive Cancer Network, 2018). TKIs offer potential long-term disease control or even a cure for patients with CML who adhere to the therapy (Dalziel et al., 2004; Druker et al., 2006; Reed et al., 2004; Rochau, Kluibenschaedl, et al., 2015). Unfortunately, TKIs are among the most expensive outpatient drugs available and their price is increasing every year (Chhatwal et al., 2015; Sandmann et al., 2013). Funding issues are the main concern in the choice of medication used for CML worldwide, including in Malaysia. Without financial assistance, very few patients can afford any type of TKI; hence, low-income countries still use other types of medication, such as interferon, which yield a much poorer outcome for treating CML (Au et al., 2009).
The present study is the first economic evaluation of the cost-effectiveness of CML treatment in Malaysia. We determined that nilotinib has a higher drug price than imatinib but a better utility score and life expectancy; therefore, it has better QALYs. Nilotinib is a second-generation TKI that many studies have reported to be superior to imatinib, causing fewer adverse effects and producing an earlier and deeper molecular response (Kantarjian et al., 2011; Larson et al., 2012; Wang et al., 2015). However, nilotinib is costlier. Moreover, despite the better utility score, nilotinib did not demonstrate a statistically significant difference compared to imatinib. Furthermore, the life expectancy between the two medications was not statistically significantly different. It is presumed that spending more on nilotinib is not worthwhile when the incremental outcome is not significant.
Previous studies on the cost-effectiveness of CML treatment have yielded mixed results. Some studies concluded that imatinib is more cost-effective than nilotinib (Rochau et al., 2015a, 2015b; Li et al., 2018). Imatinib would become even more cost-effective relative to nilotinib once generic imatinib becomes available and its price declines (Padula et al., 2016). Nonetheless, others concluded that nilotinib is highly cost-effective compared to imatinib (Romero et al., 2014). Mildred et al. (2012) reported that nilotinib improved survival and QALYs compared to standard first-line imatinib treatment and is likely to be a cost-effective use of care provider resources.
An issue in the present study is the unbalanced proportion of imatinib and nilotinib users. Approximately two-thirds (68.8%) of patients with CML were imatinib users, compared to 31.2% who were nilotinib users. This may be because MyPAP allocates more of the cheaper drug so that more patients can access the medication. This drug distribution inequality in the patient assistance programme therefore caused less allocation of the more effective nilotinib. If all allocations were assigned to the most effective drug, it would yield better and higher outcome values, which would result in nilotinib becoming more cost-effective compared to imatinib, as determined by the best-case scenario analysis in our study. Implementing national insurance coverage may overcome the issue of drug distribution inequality, as it enables resource pooling to subsidise healthcare costs, thus providing the best medical care without financial strain. Countries such as South Korea, Taiwan, and Hong Kong implement national insurance schemes that cover, or at least offer the option to cover, cancer drugs (Au et al., 2009). Since TKIs were introduced, insurance plans have implemented many policies to control prescription drug costs, including raising co-payments and increasing the use of co-insurance. Countries such as Singapore instead operate government-run savings and medical insurance programmes alongside private health insurance schemes; all of these schemes share the same aim of providing high-quality healthcare at a low cost.
Nonetheless, there remains a need to reduce the overall cost of managing CML. MyPAP helps the MOH, as the care provider, to reduce the cost of managing CML by almost half. Nonetheless, MyPAP is not ideal and its sustainability is doubtful. Spending more than USD20,000,000 annually to manage CML nationwide imposes a substantial financial burden on the country, and an enormous portion of the expenditure comes from drug purchasing. Unfortunately, the presence of a patient assistance programme prevents the purchase of alternative drugs, such as generic imatinib, that demonstrate the same efficacy but at a much lower price (Abou Dalle et al., 2019; Entasoltan et al., 2017; Lejniece et al., 2017). Additionally, collaboration between various ministries and private agencies may reduce this financial burden by providing appropriate and affordable services specifically to patients with CML. To increase awareness of more affordable cancer drug options and to help patients with cancer understand their drug needs and financially viable treatment options, more patient support activities should be organised, such as patient congresses, educational patient camps, patient education workshops, patient newsletters, community events, and health exhibitions (Ching, 2011).
Another issue is that the drug allocation quantities should be improved. Under the patient assistance programme, the number of drug allocations by the pharmaceutical company is limited. Indeed, we found that the list of company-sponsored patients is limited and that patients remain on waiting lists for up to several months before they can access the medication. This drug distribution inequality is unacceptable, as patients are forced to wait to access the medication deemed effective for treating their condition and prolonging their life.
This study has several limitations. First, the respondents were recruited from a limited number of healthcare facilities with limited coverage of specific Malaysian regions. Hence, the data may not be representative of all CML cases in Malaysia. Nevertheless, the data provide meaningful insight into drug purchasing for managing CML from the perspective of the MOH as the care provider. Second, the COVID-19 pandemic disrupted the data collection flow: patients with CML are considered a high-risk community, which led to the closure of outpatient clinics. It would be beneficial to perform a more extensive economic evaluation from the societal perspective, which would be more relevant and provide a better understanding of the financial burden of treatment among patients with CML. In addition, a budget impact analysis that estimates the financial consequences of adopting nilotinib as a new treatment for patients with CML would be advantageous.
Ultimately, the question of how medication access for patients with CML can be improved should be highlighted. The pandemic era has inevitably affected the economic status and finances of the public. Therefore, the availability and continuation of patient assistance programmes, such as MyPAP, are necessary to ensure that the appropriate treatments are accessible and affordable. Achieving universal health coverage requires enormous effort, and all societal levels must cooperate to implement it. A more transparent and sustainable system for the healthcare sector, such as a National Social Health Insurance (NSHI) or Voluntary Health Insurance (VHI) scheme, should be considered.
In conclusion, nilotinib has a higher drug price than imatinib, yet yields better life expectancy, utility score, and QALYs. Overall, imatinib is more cost-effective than nilotinib for treating CML in Malaysia from the care provider's perspective. Nonetheless, the ICER of nilotinib versus imatinib suggested that nilotinib is cost-effective when assessed against a threshold of the national GDP per capita. This study underlines the importance of funding assistance for patients with CML, particularly in ensuring that the appropriate treatments are accessible and affordable. Nonetheless, the drug distribution inequality resulting from the patient assistance programme should be addressed to establish effective, efficient, sustainable, and equitable health expenditure for treating CML in Malaysia.
Author Contributions Statement
SEWP: Study inception, study design, and liaising; critical revision of the manuscript for important intellectual content; supervisory role of study progress; administrative and material support. EMS: Conception and design; data acquisition, analysis, and interpretation; drafting the manuscript and statistical analysis. ANA: Study progress monitoring; manuscript revision for important intellectual content. NRT: Data acquisition; manuscript revision for important intellectual content. JS: Data acquisition; manuscript revision for important intellectual content.
"year": 2022,
"sha1": "94bf6aa3a05c521dbeb680b00972ea34d5377854",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d0b9137e7f08fd6879f90dc9bcab82acc0289033",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
DNA Methylation Signatures of Response to Conventional Synthetic and Biologic Disease-Modifying Antirheumatic Drugs (DMARDs) in Rheumatoid Arthritis
Rheumatoid arthritis (RA) is a complex condition that displays heterogeneity in disease severity and response to standard treatments between patients. Failure rates for conventional, targeted synthetic, and biologic disease-modifying antirheumatic drugs (DMARDs) are significant. Although there are models for predicting patient response, they have limited accuracy, require replication/validation, or require samples to be obtained through a synovial biopsy. Thus, currently, there are no prediction methods approved for routine clinical use. Previous research has shown that genetics and environmental factors alone cannot explain the differences in response between patients. Recent studies have demonstrated that deoxyribonucleic acid (DNA) methylation plays an important role in the pathogenesis and disease progression of RA. Importantly, specific DNA methylation profiles associated with response to conventional, targeted synthetic, and biologic DMARDs have been found in the blood of RA patients and could potentially function as predictive biomarkers. This review will summarize and evaluate the evidence for DNA methylation signatures of treatment response, mainly in blood, and will also draw on the progress made in diseased tissue in cancer in comparison to RA and other autoimmune diseases. We will discuss the benefits and challenges of using DNA methylation signatures as predictive markers and the potential for future progress in this area.
Introduction
Rheumatoid arthritis (RA) affects around 0.5-1% of the global population. It is characterised by synovial joint inflammation that leads to pain, swelling and reduced mobility and, if poorly responsive to therapy, irreversible damage to cartilage and bone. The aetiology of RA is unknown, but good progress has been made in elucidating the pathogenetic mechanisms. Monozygotic twin studies show a concordance rate of ~12-15% [1,2]. Strong genetic predisposition is associated with HLA-haplotypes, specifically the HLA-DRB1 alleles DR4 and DR1 [3]. Over the last 15 years, genome-wide association studies (GWAS) have also discovered over 100 different loci of RA susceptibility [4]. The overall heritability of the disease is now estimated to be around 66% [5]. Several specific environmental triggers have been associated with the development of RA, including exposure to tobacco smoke and organic dust (e.g., silica), obesity, and low vitamin D levels [6]. However, contradictory associations have been reported in the literature for some of these risk factors.
In the pathogenesis of RA, the combination of environmental triggers and underlying genetic susceptibility causes the activation of an autoimmune response in the synovium, which becomes inflamed, resulting in synovitis. This inflammation is characterized by the activation of immune cells, including T-cells, B-cells, and macrophages, and the release of pro-inflammatory cytokines.
Genetic and Transcriptomic Biomarkers of RA Treatment Response
Initial research into markers of MTX response focused on genetic studies with mixed results. Senapati et al., conducted the first genome-wide analysis of MTX treatment response in RA patients. From a cohort of 457 patients, they found 10 new risk loci for poor response using a significance cut-off of p ≤ 5 × 10⁻⁵ [25]. In a later genome-wide study of response to MTX in 1424 early RA patients of European ancestry, no single nucleotide polymorphism (SNP) reached genome-wide statistical significance (p = 10⁻⁸) for any outcome measure [26]. The strongest association in this study was rs168201 in NRG3 (p = 10⁻⁷), and this study did not replicate any findings from Senapati et al. Few genetic studies have been conducted specifically to analyse response to sulfasalazine, hydroxychloroquine, or leflunomide. For leflunomide, the IL-6 −174G/C polymorphism was found to be associated with response: the GG genotype confers a higher risk of therapeutic failure than GC or CC [27].
More recently, transcriptomic signatures associated with prognosis and treatment response to csDMARDs have been reported. For example, Humby et al., demonstrated that specific gene expression patterns in the synovial tissue of treatment naïve RA patients at baseline can predict the extent of radiographic joint damage 12 months after csDMARD treatment. Ongoing joint damage is considered an indicator of treatment non-response. The prediction model incorporates rheumatoid factor titre and the expression levels of seven genes, measured using NanoString probes applied to synovial biopsies: SDC1 (encodes the plasma cell marker CD138), CSF2 (stimulates growth and differentiation of multiple immune cell lines including granulocytes and macrophages), DENND1C (activates RAB35, which is involved in actin polymerization), CD180 (mediates innate immune responses and activates NF-kappa-B), UBASH3A (induces apoptosis in T-cells), CXCL1 (a chemoattractant for neutrophils), and MMP10 (a matrix metalloproteinase). The predictive algorithm had an area under the curve (AUC) of 0.88 [28], indicating high specificity and sensitivity.
In a separate analysis of the same cohort of treatment naïve RA patients, Lewis et al., used RNA-Seq to show that expression levels of specific groups of associated genes (gene modules) in the synovial tissue were associated with response to csDMARDs [29]. Specifically, increased expression of the monocyte and chemokine, dendritic cell/antigen-presenting cell, B-cell, and type I IFN signature modules at baseline was associated with larger reductions in DAS-28 CRP score 6 months after csDMARD treatment. Gene expression patterns also changed over time and were associated with response. Modules for CD8+ T-cells, mast cells, and TLR signalling had significantly higher expression in EULAR moderate and good responders at 6 months compared to non-responders. Conversely, the CD55+ type 1 fibroblast module had lower expression in responders [29]. These studies indicate that differences in the synovial gene expression profile of treatment naïve RA patients are associated with variations in treatment response. Lliso Ribera et al., identified in the same cohort a baseline gene set that could predict which patients would require biologic therapy at 12 months [30].
However, csDMARD treatment in the above studies varied significantly. Patients were treated according to the British Society for Rheumatology's guidelines and received a diverse range of csDMARD treatments/combinations according to physician and patient preferences. Although these results are promising, we are still far from developing a clinically useful test for predicting individual response to specific csDMARDs.
Research into biomarkers of bDMARD response also initially focused on genetics. Multiple loci and SNPs associated with response to TNFi in RA patients have been identified in recent years, but these findings have not been consistently replicated in different populations. For example, in the PTPRC gene, encoding receptor tyrosine-protein phosphatase C (CD45), the rs10919563 G>A polymorphism has been associated with reduced efficacy of the TNFi adalimumab, etanercept, and infliximab [31]. However, while this was replicated in an independent study [32], a third study failed to find the same association [33]. Another example relates to the TNF promoter SNP G308A (rs1800629): in one study of 1040 Caucasian RA patients, the TNF-308AA genotype was significantly associated with poorer response to etanercept [34], whereas the GA genotype correlated with a better response to adalimumab [35]. Polymorphisms in the IL-6 promoter region have been associated with improved response to TNFi in the Spanish population [36,37]. Additionally, polymorphisms in the steroid hormone-related genes CYP2C9 and CYP3A4 have been associated with response [38]. Analysis of three large-scale GWAS studies found 12 loci associated with TNFi response [39][40][41], but this could not be replicated in a similar large-scale study of 755 RA patients [42]. These results demonstrate that the relationship between genotype and treatment response is very complicated, and it is possible that small variations in concert may produce differences in response. Additionally, genotype varies significantly between racial groups, which means there is unlikely to be one specific set of polymorphisms that affects treatment response across the entire human population.
Transcriptomic signatures associated with bDMARD response have been found [43,44]. In a large-scale, synovial biopsy-driven, randomised controlled trial of rituximab vs. tocilizumab, patients were classified as B-cell rich or B-cell poor by the expression levels of a specific group of B-cell-related genes. B-cell-poor patients showed a lower CDAI50 response rate to rituximab compared to tocilizumab, whereas B-cell-rich patients had the same response rate to rituximab and tocilizumab [43]. Conversely, classifying patients as B-cell-rich or poor solely on the basis of histology did not show any statistically significant difference in CDAI50 response rate between rituximab and tocilizumab. In a separate analysis of the same cohort of patients, 6625 differentially expressed genes (DEGs) were found between rituximab responders and non-responders, and 85 DEGs between tocilizumab responders and non-responders [44]. Genes upregulated in rituximab responders included leukocyte-related genes, macrophage, chemokine and cytokine-related genes, and members of the immunoglobulin (Ig) superfamily. Lymphocyte and Ig genes were also upregulated in the synovial tissue of tocilizumab responders. Downregulated genes in rituximab responders and tocilizumab responders were predominately fibroblast-related genes, Hox genes, and complement genes. As this was a cross-over trial, it was possible to identify a group of double non-responder individuals who failed both rituximab and tocilizumab during the study. Furthermore, since all individuals who entered the study were anti-TNF non-responders, this group of individuals truly represents a multi-drug resistant or "refractory" group who failed to respond to three biologics. Notably, these patients had significant upregulation of fibroblast and extracellular matrix-encoding genes such as fibroblast growth factor (FGF), homeobox (HOX) and NOTCH family genes, together with multiple cell adhesion molecule and collagen-encoding genes. This suggests that multidrug-resistant RA or persistent non-response is associated with a specific transcriptomic profile which is dominated by fibroblast-related genes rather than classic adaptive immune system-related genes.
Multiple gene sets associated with response to TNFi, including infliximab [45,46] and adalimumab [47], have been identified in whole blood, and validation of these gene sets in a separate cohort of patients showed a sensitivity of 71% and a specificity of 61% in predicting response to TNFi [48].
JAK inhibitors (JAKI) are a class of targeted synthetic DMARD first introduced for widespread clinical use with the FDA approval of tofacitinib in 2012. Research into JAKI has mainly focused on their efficacy, mechanisms of action, and safety profile. Investigations into biomarkers of response to JAK inhibitors lag behind those for other DMARDs because of the relatively shorter period they have been in use. Valli et al., found that tofacitinib treatment reduced both the DAS28 score and the levels of certain circulating pro-inflammatory markers, the most pronounced reduction being in IL-6, C-X-C motif chemokine ligand 1, and matrix metalloproteinase-1. Additionally, higher baseline circulating levels of IL-6 and lower levels of C-C motif chemokine 11 were associated with DAS reduction post-treatment [49]. However, reduction in cytokine levels is a known biological effect of JAKI, and the study did not determine whether the same reduction may also occur in non-responders. Ciechomska et al., found that circulating levels of miRNA-19b-3p are associated with baricitinib response in RA. The levels of this miRNA are higher in RA patients compared to healthy controls and there is a statistically significant reduction in levels 3 months after treatment, which corresponded with a significant DAS28 score reduction [50]. Again, this study does not compare the differences between responders and non-responders. Both studies recruited small cohorts of patients (54 and 44 RA patients, respectively), and their results have not been replicated.
These results show that there are distinct synovial transcriptomic profiles of both response and non-response to csDMARDs and specific bDMARDs. However, genetic and transcriptomic studies only provide insight into one aspect of the biological mechanisms underlying response. Genetic variation affects epigenetic modifications including DNA methylation, which in turn affects gene transcription. Therefore, DNA methylation studies should help to further elucidate the biological mechanisms of response. Many recent studies have focused on DNA methylation as a biomarker of RA treatment response with a strong, growing body of evidence that DNA methylation plays an important role in both pathogenesis of RA and treatment response.
An Overview of DNA Methylation
Epigenomics refers to modulations that influence gene expression but do not change the underlying genetic code. These include DNA methylation, histone modification, and microRNA modulation [51]. This review will primarily focus on DNA methylation as it is the most widely studied epigenetic modification.
As schematically illustrated in Figure 1, DNA methylation is the process whereby a methyl group is added to a cytosine-guanine (C-G) dinucleotide (CpG) by DNA methyltransferase enzymes. CpG sites exist across the entire genome in both coding and non-coding regions, but they also cluster in groups called CpG islands. Around 70% of islands are found in the promoter regions of genes [52], and 50% of human genes initiate transcription from a CpG site [53]. Methylation in the promoter region appears to interfere with transcription factor interaction with the underlying DNA. Hypermethylation in the promoter region is usually associated with decreased expression of the gene or even transcriptional silencing [54]. Hypomethylation, in contrast, is associated with active transcription and increased gene expression [51]. Gene body and intergenic methylation is not associated with transcriptional silencing and has a more context-dependent relationship that differs between genes [55]. Methylation of CpG sites in the gene body is thought to prevent spurious transcription factor binding and to regulate alternative splicing [56].
DNA methylation patterns are highly tissue and cell-type-specific, reflecting the essential role of methylation in normal development. DNA methylation enables selective temporal activation of lineage-specific genes and suppression of pluripotency genes in the early embryonic stages, which ensures proper establishment of gene expression patterns for specific tissue and cell type development [57]. DNA methylation is vital in controlling the expression of imprinted genes, enabling allele-specific expression of gene clusters, which are essential for normal development. Parent-specific methylation patterns are introduced during gamete differentiation and maintained throughout life [58]. DNA methylation plays an important role in sex development through inactivation of the second X chromosome in females and the development of sex-specific tissues/cells. Epigenetic differences may also contribute to differences in disease susceptibility between sexes [59]. Throughout life, DNA methylation remains an essential mechanism for dynamic gene expression regulation in response to external and internal stimuli [60]. Diversity in methylation patterns between individuals is affected by a combination of genetic and environmental factors. Underlying genetic variation, including sequence-specific allelic variation, influences methylation patterns [61], and many environmental factors, including age, early life poverty, stress, diet, smoking, obesity, and diseases, affect DNA methylation patterns [61,62].
DNA Methylation in RA
There has been significant progress in recent years in DNA methylation research in RA, showing that the methylome of patients displays distinct differences compared with healthy controls. Liu et al., performed the largest epigenome-wide association study to date on the whole blood samples of 354 ACPA-positive RA cases and 337 controls [63]. Their results identified genome-wide statistically significant differential methylation at nine CpG locations within the MHC region. Additionally, by analysing the interaction between genotype and differential methylation, it emerged that methylation at these susceptibility loci is likely to mediate the genetic risk for the development of RA. An independent, smaller study of 62 RA patients also found significant methylation differences between RA patients and healthy controls, with methylation of peripheral blood monocytes directly linked to DAS28 score through the action of inflammatory cytokines [64].
Recent studies into the methylation pattern of specific immune cell types in RA, including fibroblast-like synoviocytes (FLS), monocytes and lymphocytes, have also revealed interesting insights into RA pathogenesis. For example, peripheral blood mononuclear cells (PBMCs) of RA patients were found to be globally hypomethylated compared to healthy controls [65], while an epigenome-wide association study found, again in PBMCs, 1046 different DNA methylation sites linked to disease pathogenesis [66]. Furthermore, distinct DNA methylation signatures in peripheral blood B-cells and T-cells are already seen in early RA patients compared with healthy controls [67]. In established RA, important genes involved in RA pathogenesis that were found to be hypomethylated in B-cells and distinguish patients from healthy individuals include BARX2 (encodes a transcription factor that influences cell processes involved in cell adhesion and migration), ASB1 (mediates degradation of proteins including JAK2, involved in RA inflammation), ADAMTS17 (a metalloprotease), and MGMT (a methyltransferase) [68].
DNA methylation patterns in the cluster of differentiation 4 positive (CD4+) T-cells of RA patients are different compared to healthy controls. In RA patients, JUN, STAT1, PTEN, and CD44 genes exhibit hypermethylation, while KRAS and ALB show hypomethylation. Gene ontology (GO) enrichment studies indicate that the differentially methylated genes in RA are connected to T-cell biological processes, suggesting that DNA methylation plays a role in regulating CD4+ T-cell function in RA [69]. Comparisons between memory CD4+ T-cells and naive CD4+ T-cells in RA patients reveal an increased number of differentially methylated positions (DMPs) in memory cells. Most of these DMPs exhibit increased DNA methylation in RA patients with active disease. Specifically, differential hypomethylation of UBASH3A, a gene involved in antigen presentation to T-cells was found [70]. T-cells exhibit cellular heterogeneity and can differentiate into subsets. In RA, T-helper (Th) 1 and Th17 subsets contribute to inflammation, while Th2 cells can inhibit Th1 and Th17 cell function and dampen inflammation. DNA methylation sites in CD4+ naïve T-cells and memory CD4+ T-cells of RA patients indicate a shift towards Th17 cell development and there is differential hypomethylation of IFN-related genes increasing gene expression, which serves to perpetuate chronic inflammation [71,72]. Aberrant DNA methylation in T-cells of RA patients has also been reported to cause T regulatory (Treg) and Th17 imbalances leading to the amplification of inflammation, which is resolved by MTX treatment [73]. Cribbs et al., found hypermethylation in the NFAT binding site of the CTLA-4 promoter region leading to reduced production of CTLA-4, which was associated with compromised Treg activity in RA [74].
Studies of FLS in RA patients showed significant differential methylation compared with osteoarthritis (OA) patients [75]. Nakano et al., found 1859 DMPs between RA and OA patients, with hypomethylation of CpG sites located on key genes in RA pathogenesis including CASP1 (encodes caspase 1, which induces cell apoptosis), MAP3K5 (a MAP kinase involved in the innate immune system), STAT3 (a transcription activator, activated in response to cytokine signalling), and MEFV (pyrin, which generates inflammation in response to interferon-gamma signalling). Hypomethylated CpG sites were primarily located on gene pathways involved in cell adhesion/migration and extracellular matrix interactions [76], a finding also supported by two additional studies of FLS [77,78]. Interestingly, the latter study also found that global DNA methylation patterns were joint-specific in RA and OA FLS, providing a plausible explanation as to why RA has a propensity to attack certain joints [78]. The DMPs discovered in FLS of RA patients also showed some overlap (~20%) with DMPs found in CD4+ naïve T-cells in the peripheral blood of RA patients [79], suggesting a possible common DNA methylation change linked to RA pathogenesis. Subsequent research found that the global hypomethylation in RA-FLS is likely caused by the downregulation of DNMT1 (DNA methyltransferase 1) and DNMT3A, important enzymes involved in DNA methylation, driven by the inflammatory environment [80]. DNA methylation of the PTEN gene promoter region has also been shown to activate FLS in RA pathogenesis [81]. These studies show that the methylome of RA patients differs significantly from healthy controls and likely reflects the pathogenesis of RA.
Analysis of immune cells from the blood and synovium of RA and OA patients showed the largest differences in methylation were between different tissues, rather than between disease states [82]. This suggests the methylome of immune cells is different in the synovium versus the blood and it is likely that the synovium methylome more accurately reflects the biological mechanisms of pathogenesis in RA.
Single-cell RNA sequencing analysis of cells derived from synovial tissue has found distinct subpopulations of inflammatory cells and fibroblasts that are not present in the blood [83]. Within the CD4+ T helper cell population, a distinct subset marked by high expression levels of MAF (transcription factor), CXCL13 (B lymphocyte chemoattractant), and PDCD1 (immune-inhibitory checkpoint receptor PD-1) was detected, which had not been identified in previous single-cell RNA sequencing studies of human PBMCs. Within Natural Killer cells, a subpopulation expressing high levels of cytokines XCL1 (lymphotactin) and XCL2 was discovered. These cytokines regulate fibroblast production of matrix metalloproteinases and direct lymphocyte migration in synovial tissue. Analysis of fibroblasts found two transcriptomically distinct fibroblast subsets that have distinct anatomic locations within the synovium. This study shows that the synovium has a unique, diverse range of previously unknown cell subpopulations with distinct transcriptomic signatures, which likely contribute to the pathogenesis of RA and can affect response to treatment.
Histological studies of immune cells in the synovial tissue of treatment naïve early RA patients found three distinct pathotypes: a lympho-myeloid type dominated by the presence of B-cells and myeloid cells, a diffuse-myeloid type dominated by myeloid cells but with very few B-cells, and a pauci-immune type characterised by scanty immune cells and prevalent stromal cells [29]. The lympho-myeloid and myeloid pathotypes are associated with higher disease activity and acute phase reactants, but a better overall response to conventional RA treatment [28]. Each synovial pathotype had a distinct transcriptomic profile, and specific gene expression signatures are associated with treatment response [29].
DNA methylation analysis of synovial tissue is still in its infancy but, as the methylation process is highly cell-specific, the methylation status of synovial tissue will naturally reflect its diverse cellular composition as per the above-described pathotypes. Consequently, each pathotype is likely to have a distinct DNA methylation profile and it would be interesting to establish whether specific synovial DNA methylation signatures will enhance our future ability to predict prognosis and treatment response in RA.
DNA Methylation Biomarkers in Other Autoimmune Diseases
Research into DNA methylation in other autoimmune inflammatory diseases is still in the early stages, but there have been some promising results. A systematic review of methylation in inflammatory bowel disease identified consistent differential methylation at 256 DMPs in the peripheral blood of IBD patients compared to healthy controls [84]. DNA methylation in whole blood can also be used to differentiate Crohn's disease from intestinal tuberculosis, which is difficult to achieve clinically [85]. These studies suggest DNA methylation could provide biomarkers for future non-invasive diagnosis. In systemic lupus erythematosus (SLE), DNA methylation has been used to identify disease subtypes [86], and methylation signatures related to prognosis and treatment response have been found [87], suggesting a future role for DNA methylation in diagnosis and treatment allocation.
DNA Methylation as a Biomarker
DNA methylation has great potential as a biomarker because it is dynamic and constantly modified in response to stimuli. It is more stable than gene expression at transcript and protein levels and is inherited between cell divisions [54]. The most successful use of DNA methylation signatures as biomarkers is in the diagnosis and treatment of cancer. Tumour cells have highly aberrant and unique DNA methylation patterns that differentiate them from normal cells. Diagnostic DNA methylation tests for early detection of cancers have been developed and show some promise. The PanSeer panel detects circulating tumour methylated DNA (ctDNA) that matches 595 specific high-risk locations on the genome. The panel has high sensitivity (88%) and specificity (96%) in detecting five common cancer types from peripheral blood samples of asymptomatic patients up to four years before conventional diagnosis [88]. It is in the advanced stages of development into a clinical test, though it has not yet been approved by the Food and Drug Administration (FDA) or the National Health Service (NHS) for clinical use. Two DNA methylation-based diagnostic biomarkers have been approved by the FDA for clinical use in the diagnosis of colorectal cancer [89], and baseline DNA methylation has been shown to be predictive of response to therapy in colorectal cancer [89,90]. DNA methylation patterns have also been shown to predict response to neoadjuvant therapy in specific types of breast cancer [91,92].
However, using DNA methylation as a biomarker also poses certain challenges. Age, sex, smoking, alcohol consumption, diet, stress, and exposure to environmental chemicals can all induce changes in DNA methylation patterns [62]. These confounding factors must be controlled for when analysing DNA methylation biomarkers in diverse patient population groups. DNA methylation is highly cell-type-specific, and making direct comparisons between studies investigating different cell types or tissues can be very difficult. This prevents high-quality meta-analyses and limits statistical power. In studies of heterogeneous samples such as whole blood, the methylation signature is a summation of all the cell types. This is challenging to correct in post hoc analysis despite the existence of targeted algorithms [93]. Therefore, DNA methylation signatures can be very useful biomarkers, but their discovery requires overcoming the unique challenges posed by their biology.
DNA Methylation and Response to csDMARDs
As previously discussed, genetic and transcriptomic signatures linked with treatment response in RA have been found. As gene expression is regulated by DNA methylation, it is possible that DNA methylation signatures linked to csDMARD response also exist and might provide a more accurate biomarker as DNA methylation is more stable than gene expression levels.
So far, all DNA methylation studies exploring associations/predictability of response have been performed on whole blood or cells isolated from blood, as this is the most accessible tissue for investigation. Most studies investigating the relationship between DNA methylation and csDMARD response centre on MTX as a monotherapy or in combination with steroids and other csDMARDs (Table 1). Recent studies have focused on how MTX changes the methylome both globally and in specific cell types, and whether differences in baseline DNA methylation patterns are related to response. One such study found global DNA hypomethylation in the T-cells and monocytes of treatment naïve RA patients compared with healthy controls [94]. Quantitative PCR showed a corresponding decrease in the expression of DNA methyltransferase 1 (DNMT1), the enzyme that maintains DNA methylation patterns, in both cell types. Increased expression of enzymes involved in demethylation was also found in monocytes. In MTX-treated patients, global DNA methylation levels were the same as in healthy controls, indicating that treatment reversed the global hypomethylation [94]. The treated patients included in this study had a mean DAS28 score of 1.6, which is considered disease remission, suggesting that reversal of global hypomethylation in T-cells and monocytes may be a good indicator of disease control. These findings are supported by Liebold et al., who showed global hypomethylation in peripheral blood mononuclear cells (PBMCs) of treatment naïve RA patients compared to controls [65]. Guderud et al., compared MTX-treated patients in remission with healthy controls and found 80% of DMPs in MTX-treated patients were hypermethylated in CD4+ memory T-cells, but the proportion of hyper- versus hypomethylated sites was equivalent in CD4+ naïve T-cells [70]. A second study carried out by the same team compared DNA methylation patterns before and three or six months after initiating MTX treatment in the same group of patients. Treatment was associated with 226 significant DMPs in CD4+ naïve T-cells, of which 63% were hypomethylated post-treatment, and 188 DMPs in CD4+ memory T-cells, with 59% displaying hypomethylation [100]. The discrepancies between the above studies, especially in relation to the degree of hypomethylation, are likely due to the different cell types investigated, methodological differences, small sample sizes, and differences in treatment regime (MTX monotherapy versus combination therapy). These data indicate that MTX treatment generally reverses global hypomethylation in immune cells, but the exact changes are specific to the cell type and likely reflect the diverse mechanisms of action attributed to MTX. Global methylation patterns have also been shown to correlate with clinical response. Gosselt et al., investigated the global methylation of leukocytes from 181 RA patients treated with MTX alone or MTX plus two other csDMARDs. They discovered that higher baseline global DNA methylation was associated with smaller decreases in DAS28-CRP from baseline and with MTX non-response after 3 months of treatment [96]. Liebold et al., analysed global DNA methylation patterns of PBMCs and lymphocytes in 45 RA patients before and three months after starting treatment with either MTX, sarilumab, tofacitinib, or baricitinib. The results demonstrated a strong negative correlation between methylation levels and DAS28-ESR scores at both time points [65]. Comparing methylation levels to individual components of the DAS score showed a strong correlation with swollen and tender joints, but there was no correlation with other parameters, including ESR, CRP, or VAS.
Although it is not possible to elucidate the individual effect of MTX from this study, it may indicate that global hypermethylation in inflammatory cells at baseline is associated with response to RA treatment.
DNA methylation at specific CpG sites has been linked to csDMARD response. Glossop et al., extracted peripheral blood samples from 46 csDMARD-naïve RA patients, who were subsequently treated with MTX, hydroxychloroquine, or sulfasalazine. Genome-wide DNA methylation profiling of T-cells found six statistically significant DMPs between responders and non-responders after FDR correction. Two specific CpG sites, located on the genes ADAMTSL2 (encodes a secreted glycoprotein that interacts with the extracellular matrix) and BTN3A2 (located in the MHC class I locus; the encoded protein inhibits the release of IFN-gamma from activated T-cells), were very strongly associated with treatment response. Increased methylation at these sites in combination was the biggest predictor of response (80.0% sensitivity, 90.9% specificity) [97]. Nair et al., investigated DNA methylation in whole blood collected from 72 RA patients before and 4 weeks into MTX treatment but found no differential methylation in baseline samples between EULAR good and poor responders at 6 months [98]. However, two CpG sites showed significant methylation changes at 4 weeks associated with clinical response status by 6 months, though the significance cut-off used was a nominal p value of 1 × 10⁻⁶ instead of standard FDR correction. Four additional DMPs at the 4-week time point predicted an improvement in swollen joint count and CRP at 6 months. Gosselt et al., found no DMPs or differentially methylated regions (DMRs) at the genome-wide significance level in pre-treatment PBMCs between 68 responders and non-responders to MTX monotherapy or combination therapy [99]. The discrepancies between these studies are likely due to the different cell types investigated: Glossop et al., only analysed T-cells, whereas Nair et al., analysed whole blood, which is a heterogeneous mixture that may mask methylation changes in specific types of immune cells.
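To make the analysis pattern behind these epigenome-wide comparisons concrete, a minimal Python sketch of a per-CpG test with Benjamini-Hochberg FDR correction is shown below; the beta values are simulated, and real studies use covariate-adjusted models on array data rather than plain t-tests:

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_cpgs, n_per_group = 5_000, 30

# Simulated methylation beta values (rows: CpG sites, columns: patients).
responders = rng.beta(5, 5, size=(n_cpgs, n_per_group))
non_responders = rng.beta(5, 5, size=(n_cpgs, n_per_group))
non_responders[:10] = np.clip(non_responders[:10] + 0.10, 0.0, 1.0)  # 10 true DMPs

# One two-sample t-test per CpG site, then FDR correction across all sites.
_, pvals = stats.ttest_ind(responders, non_responders, axis=1)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} CpG sites pass FDR < 0.05")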
Although the results from these studies are interesting, there is no consensus on DNA methylation patterns of csDMARD response and more research needs to be conducted to find a reliable methylation biomarker of response. Additionally, most studies look at a combination of different csDMARDs so it can be difficult to understand the DNA methylation patterns associated with response to specific agents. Monotherapy studies primarily focus on MTX and to date, there are no specific DNA methylation studies of response to monotherapy with sulfasalazine, leflunomide, or hydroxychloroquine, most likely because there are relatively few patients on monotherapy with these agents.
DNA methylation analysis of synovial tissue is a more promising future avenue of investigation compared to blood. Transcriptomic signatures of response have been found in the synovial tissue [29], and therefore DNA methylation signatures of response may also be found.
DNA Methylation and Response to Biologic and Targeted Synthetic (JAK Inhibitors, JAKI) DMARDs
Research into DNA methylation patterns associated with biologic response is still at a relatively early stage, with only a handful of published studies, all confined to peripheral blood (Table 2). To date, no specific studies investigating DNA methylation biomarkers of JAKI response have been carried out. Liebold et al., investigated global hypomethylation in the PBMCs of a mixture of patients treated with either MTX, sarilumab, tofacitinib, or baricitinib, so it is impossible to separate the results to ascertain the effect of methylation on response to the JAKI alone [65]. Specific DNA methylation signatures of response have been identified by Plant et al., in an epigenome-wide association study of pre-treatment whole blood samples from 72 etanercept-treated RA patients [101]. Five CpG sites were found to be significantly differentially methylated at baseline between responder groups with a false discovery rate of <5%. The top two DMPs map to exon 7 of the LRPAP1 gene on chromosome 4. This gene encodes a protein that interacts with the low-density lipoprotein (LDL) receptor-related protein and facilitates its proper folding and localization. It is not known to be involved in immune response or inflammation, and the exact role of methylation at this locus in RA is still unclear. Methylation quantitative trait loci analysis was carried out on the LRPAP1 loci: the A allele of the rs3468 SNP correlated with higher methylation levels at the top two DMPs and an increased risk of poor response. SNP rs3468 was analysed in an independent cohort of 1,204 TNFi-treated RA patients, and each stepwise increase in the A allele from GG to AA caused a 1.28-fold increased risk of being in the poor response group.
This suggests genotype affects DNA methylation at key positions, and the combination of these factors influences individual treatment response. However, a longitudinal analysis of RA patients treated with TNFi conducted by Julia et al., found no genome-wide significant DMPs at baseline between responders and non-responders [102]. This study found that TNFi significantly changed the whole blood methylome over 3 months in all patients. Methylation patterns in post-treatment samples more closely resembled those of healthy individuals. These changes were found equally in both responders and non-responders, suggesting that TNFi changes the methylome irrespective of response [102]. The discrepancy between these two studies could be due to the different DNA methylation arrays used. Although both studies used Illumina arrays, Julia et al., performed their study on the EPIC array, which has >850,000 probe sites, the majority in intergenic regions, whereas Plant et al., used the 450K array, which has around 450,000 probes situated mainly on promoter regions and the gene body. These two arrays have a significant overlap [104], and in theory results from these arrays are directly comparable. However, the increased number of probes on the EPIC array means that in statistical testing the p value threshold to pass FDR becomes more stringent, and therefore probes that were statistically significant on the 450K array may not be significant when tested with all the probes from the EPIC array.
Tao et al., investigated gene expression and DNA methylation patterns associated with etanercept and adalimumab response [103]. Differential gene expression in PBMCs was found between response groups for TNFi, but the differentially expressed genes (DEGs) showed very little overlap (<2%), indicating response is defined by distinct gene signatures for each medication. An epigenome-wide association study in PBMCs found nominally significant (p < 0.05) DMPs between response groups, but none at the genome-wide significance level. Globally, more hypermethylated DMPs were found in etanercept responders compared to adalimumab responders, suggesting there is also a distinct DNA methylation pattern of response for each drug. Using gene expression data from PBMCs, monocytes and T-cells and the DNA methylation data from PBMCs, the investigators built a predictive algorithm using machine learning. The model for predicting etanercept response using DNA methylation had an overall accuracy of 88%, which surpassed the accuracy of pure gene expression models (73% to 79%). The adalimumab DNA methylation model had a high overall accuracy of 84%, similar to the pure gene expression model using DEGs from PBMCs (85%). This suggests that DNA methylation patterns may be used to generate accurate predictive algorithms for TNFi response. Further research into integrating DNA methylation, transcriptomic, and genotype data into one predictive model may produce more accurate algorithms.
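As a rough sketch of this kind of machine-learning approach (simulated data; the original models, feature selection, and tuning are not reproduced here):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_cpgs = 120, 500

X = rng.beta(5, 5, size=(n_patients, n_cpgs))  # methylation beta values
y = rng.integers(0, 2, size=n_patients)        # 1 = responder (simulated label)
X[y == 1, :5] = np.clip(X[y == 1, :5] + 0.08, 0.0, 1.0)  # weak signal in 5 CpGs

clf = RandomForestClassifier(n_estimators=300, random_state=0)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")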
Discussion
DNA methylation signatures have shown great promise as biomarkers for diagnosis and for predicting treatment response in cancer. In contrast, DNA methylation research in RA is still in the early stages, and no reliable biomarkers of treatment response have been found. The major challenge facing this area of research is the relatively small methylation difference between treatment groups in RA. Methylation variations of <5% are the norm for noncancerous tissue [105], and large sample sizes are required to provide sufficient power to detect such subtle differential methylation. Mansell et al., estimate that, using the Illumina Human Methylation EPIC array, a minimum of 200 samples is required to provide 80% power to detect a 5% mean methylation difference at 80% of CpG sites, and that 1,000 samples are required to detect a mean methylation difference of 2% [105]. Therefore, nearly all DNA methylation studies in RA were statistically underpowered for detecting differences in DNA methylation of <5% between groups. This explains why many RA studies could not find statistically significant results in epigenome-wide association studies [99,102], and why significant findings in other studies have not been replicated. Meta-analyses of methylation studies of treatment response are very difficult to perform due to the lack of standardisation between studies. As shown in Table 1, the treatment regimens used in csDMARD studies vary significantly, with some studies using data from patients treated with two or more regimens [96,97,99]. Additionally, DNA methylation is highly cell-specific [52], and it is not possible to directly compare results from studies analysing different cell types. Methods for measuring DNA methylation also vary greatly, and studies using different methods cannot be directly compared. Global methylation assessments use either mass spectrometry or immunofluorescence-centred methods [65,94,96], whereas methylation microarrays allow for analysis of methylation levels at specific CpG sites across the genome [98,99].
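As a rough illustration of these sample size requirements, the sketch below reproduces the order of magnitude of the Mansell et al. figures with a standard two-sample power calculation; the per-CpG standard deviation of 0.05 and the Bonferroni-style genome-wide alpha are assumptions made for illustration, not parameters taken from that paper.

```python
from statsmodels.stats.power import TTestIndPower

sd = 0.05                   # assumed per-CpG SD of methylation beta values
alpha_gw = 0.05 / 850_000   # crude genome-wide threshold for an EPIC-sized array
power = TTestIndPower()

for diff in (0.05, 0.02):   # 5% and 2% mean methylation differences
    d = diff / sd           # Cohen's d under the assumed SD
    n_per_group = power.solve_power(effect_size=d, alpha=alpha_gw, power=0.8)
    print(f"{diff:.0%} difference: ~{2 * n_per_group:.0f} samples in total")
```

Under these assumptions the totals come out near 200 and 1,000 samples, which matches the scale of the published estimates.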
Further challenges facing methylation research in RA treatment response relate to the lack of samples from the diseased tissue, i.e., the synovium. All DNA methylation studies of treatment response to date have used peripheral blood samples but the blood methylome is affected by many different factors, including other causes of inflammation independent of RA. In contrast, the methylation patterns of the synovium and synovium-derived cells may more accurately reflect the biological mechanisms underlying RA-specific inflammation.
Future Perspectives
DNA methylation analysis of the whole synovial tissue and/or single cells derived from synovial tissue digestion is the logical next step and is more likely to provide specific biomarkers for RA treatment response in the future. A study comparing gene expression (RNA-seq) in the synovium versus the blood of RA patients showed much greater differential gene expression in the synovium compared to the blood [29]. In that study, three different RA subtypes linked to distinct gene signatures were described but these differences were not present in the blood [29]. Most importantly, synovial transcriptomic signatures of response to both csDMARDs and bDMARDs have already been reported [29,43,44]. Therefore, analysis of DNA methylation of synovial tissue has the potential to yield similarly useful biomarkers of treatment response.
Methods for generating DNA methylation data have progressed significantly. The Illumina BeadChip arrays (450K and EPIC) are relatively new but increasingly used for epigenome-wide association studies [106]. These arrays provide a high number of probe sites, 450,000 and 850,000, respectively. Although they do not include all the known dynamically regulated CpG sites [107] and do not cover the whole genome, they provide a relatively fast and cost-effective method to generate detailed DNA methylation data from large groups of samples. There is a significant overlap between the 450K and EPIC arrays [104], and methylation measurements correlate well between them [108]. This allows for a direct comparison of results from studies using either of the two arrays and easier replication of previous findings. In the future, as more research using these arrays is published, it will be possible to perform accurate meta-analyses with sufficient power to detect small methylation differences between treatment response groups.
Finally, DNA methylation should not be viewed in isolation as it is only one type of epigenetic modification. Histone modifications and microRNAs also contribute to the pathogenesis of RA, in part through their regulation of DNA methylation [109]. Further research is needed to elucidate the complex role epigenetics plays in RA as part of the multi-omic regulatory landscape in conjunction with genomics, transcriptomics, and proteomics. An integrated multi-omic approach is likely to better elucidate the biological mechanisms underlying pathogenesis and treatment response in this complex condition. This combined approach has already been used successfully in tumour profiling [110] and in predicting response to immunotherapy in certain cancers [111]. A similar integrated multi-omic method will likely be the most fruitful course of future research to identify specific biomarkers and build predictive models of treatment response in RA.
Conclusions
In conclusion, the emerging evidence from numerous studies highlights the potential of DNA methylation as a promising biomarker for predicting the response to treatment in patients with RA. DNA methylation alterations have been observed in key genes and regulatory regions associated with immune system dysregulation and inflammatory pathways in RA. These epigenetic modifications, particularly in genes involved in B-cell/T-cell differentiation and cytokine signalling, appear to play a crucial role in pathogenesis and treatment response.
Although substantial progress has been made, further research is required to validate the utility of DNA methylation as a reliable biomarker of treatment response in RA. Large-scale prospective studies, including diverse patient populations and different treatment regimens, are warranted to establish robust associations and refine predictive models. Additionally, investigating the dynamic nature of DNA methylation patterns throughout the course of treatment will provide valuable insights into the underlying mechanisms of therapeutic efficacy.
DNA methylation holds significant promise as a biomarker for predicting treatment response in RA. Its potential to provide valuable insights into disease mechanisms and guide personalized therapeutic approaches makes it an exciting area of research in the field of rheumatology. Future advancements in our understanding of DNA methylation dynamics and its functional consequences will undoubtedly contribute to improved patient outcomes and the development of precision medicine strategies for RA.
Author Contributions: All authors contributed to the primary manuscript. All authors have read and agreed to the published version of the manuscript. | 2023-07-16T15:04:31.827Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "826f6bb6bbc69ef63f2a5cccdb8f17924dfec0bb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/7/1987/pdf?version=1689326184",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff44a17fd8551758bae5c6cf33af8a895171ad99",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238902237 | pes2o/s2orc | v3-fos-license | Monitoring segment in modeling the level of financial security of a modern entrepreneurial entity
The article is devoted to the monitoring segment in modeling the level of financial security of a modern entrepreneurial entity and draws attention to the problems that motivate change. An economic and mathematical model, the "Monitoring and correcting segment", is proposed, together with the principles of its adaptation. The author's view of the stages of implementation of economic-mathematical models (EMM) is given, with the actions of the performers detailed. The components of a five-factor model of criteria for assessing the level of financial security of entrepreneurial entities are identified, and a coefficient of the monitoring and correcting segment is proposed, which supports the formation of an information flow for management decisions based on an updated database. To improve the quality of management decisions in the field of financial security, a scale for assessing the level of deviations according to the criterion of the monitoring and correcting segment has been developed for entrepreneurial entities. The expediency of introducing the proposed changes into the system for assessing the financial security of entrepreneurial entities is substantiated, and directions for further research are established.
Introduction
Current experience shows that entities in the entrepreneurial sector of the economy suffer from uncertainty when solving problems caused by the influence of negative factors on their activities. One such factor is the imperfection of the legislative framework that underpins their activity. At the same time, the issue of protecting the business sector from the quarantine measures generated by COVID-19 has not been resolved at the legislative level. In addition, the level of financial security of the business sector is reduced by an unregulated tariff policy, which significantly affects financial performance. Many such factors can be named, as they arise from existing global problems. Of course, the qualification level of entrepreneurial entities and their teams should not be overlooked; their adaptation to modern challenges in the country's economy is very slow. The time has come to reconsider not only the financial security system of business structures but, above all, to turn to scientific developments, methods of economic and mathematical modeling, and innovative approaches to management decisions, and to enable the implementation of the entities' own initiatives based on existing business experience. Taken together, such measures should change the situation for the better, but without effective monitoring of the implementation of the planned changes, they will not deliver a full return on investment. Only control over each stage of the financial security system will allow the entrepreneurial entity to adapt innovations in a timely manner and with the greatest utility for its own business.
This problem has remained relevant for quite some time, but existing proposals and developments need to be updated, especially when comparing the results of business structures with those of previous periods.
Recently, the attention of scientists has been focused on a comprehensive study of the problems of financial security of the modern entrepreneurial entity, which has suffered from the impact of negative factors on its development. Undoubtedly, small and medium-sized businesses suffered significant losses from the quarantine measures imposed because of COVID-19. At the same time, numerous scientific works by both domestic and foreign scholars reveal a number of other problematic aspects that affect the level of financial security of entrepreneurs.
It is fair to note that most scientific studies focus on the peculiarities of the country's financial security, while coverage of the elements of the financial security system of the business sector is rather limited. Among the scientists who have devoted monographic studies to the chosen subject are M.M. Yermolenko and K.S. Horyacheva [1], A.A. Mazaraki [2], O.S. Bilousova, V.I. Harkavenko, A.I. Danylenko and others [3], Z.B. Zhyvko [4], O.V. Moroz, N.P. Karachyna and A.A. Shyian [5], O.M. Liashenko [6], V.I. Franchuk [7], N.L. Pravdiuk, Т.О. Mulyk, Ya.I. Mulyk [8] and others. In their research, domestic scientists reveal various theoretical and methodological approaches to assessing the impact of different factors on the level of financial security of enterprises, but their coverage of the monitoring aspect of this system is limited, which underlines the need for an in-depth study of the problem.
It is worth noting that foreign scientists are also concerned with the problem chosen for research. We believe that the scientific works of such foreign scholars as James Stoner and Edwin Dolan [9], Eugene Brigham [10], Kieran Walt [11], Glen A. Welsh and Daniel G. Short [12] and others deserve attention. The value of these works lies in their applied aspect, revealing the financial and accounting levers of management in enterprises across various spheres of business.
Given the interest in the problem chosen for the study, its relevance is beyond doubt.
Currently, the problem is exacerbated by negative factors influencing business performance. The financial security system does not withstand them, management decisions lack the required level of rationality, and as a result, the number of business structures, especially small and medium-sized businesses, is decreasing. These factors alone indicate a negative situation in business structures that leads to losses on both sides: the entrepreneurial entity loses a significant share of revenue, and the country's budget does not receive tax payments, which leads to underfunding of pressing socio-economic issues that need to be addressed immediately. Experience has shown that entrepreneurial entities also do not pay enough attention to monitoring the planned levels of financial criteria and ultimately obtain negative results from their activities.
However, it is obvious that the control system, where it exists, no longer has the effectiveness that was inherent in the planned economy; i.e., there is a question of improving the quality of supervision and control, which together can change the situation for the better. In other words, without an effective monitoring program covering all parts of financial security, it is almost impossible to achieve acceptable development results. This raises the question of modernizing existing financial security systems on the basis of scientific developments with an innovative element. Given the above, the chosen direction of research is both relevant and timely.
The main objectives of this study are as follows: to determine and justify the feasibility of introducing a monitoring segment at all stages of assessing the financial security of the entrepreneurial entity; to develop the economic and mathematical model "Monitoring and correcting segment" and establish the principles of its adaptation; to include the coefficient of the monitoring and correcting segment among the criteria for assessing the level of financial security; to develop a scale for evaluating the proposed coefficient; and to substantiate the expediency of introducing the proposed changes into the system for assessing the financial security of entrepreneurial entities.
Materials and methods
There is an urgent need to change approaches to the development of financial security for entrepreneurial entities in Ukraine. This statement follows from the following observations: every year, domestic entrepreneurial entities lose a share of their business, which is accompanied by a significant reduction in jobs and a decrease in tax payments to the state budget; the number of negative factors affecting enterprise development in Ukraine is constantly increasing; the effectiveness of existing financial security systems for small and medium-sized businesses is low, and as a result, it is almost impossible to predict future developments for a given entrepreneur; and the adoption of scientific developments on innovative segments of business development in the field of entrepreneurship is almost absent due to the lack of own financial resources and the significant interest rates on borrowed resources. These are just the main characteristics outlining the situation in the country's business sector.
Undoubtedly, the scale of the decline in the number of small and medium-sized businesses in Ukraine over the past two years is striking. One of the most influential factors contributing to this situation in Ukraine, as in many countries around the world, was COVID-19. It must be acknowledged that the business sector of the economy was not ready for such challenges. Restrictions on business activities, a significant decline in demand for products, goods, and services, and the financial insolvency of the population, which also suffered from quarantine restrictions, led to a significant decline in the income of entrepreneurial entities. However, we believe there are positive aspects to the situation in the business sector. The negative effects of quarantine restrictions should encourage entrepreneurial entities to take more deliberate action on future development. The effectiveness of the financial security systems of entrepreneurial entities did not live up to expectations: at some point there was a failure, which highlighted their shortcomings, and entrepreneurial entities have thereby received information on how to improve their own financial security.
Given that Ukraine is actively implementing the main segments of the digital economy, whose feasibility has been demonstrated at all levels of business, it is clear that entrepreneurial entities must actively engage the available levers of management in this area. In particular, the domestic scientists V. Kotkovskyi, V. Zaluzhny, V. Kadala, O. Huzenko and others have revealed in their scientific works the role of the digital economy and its influence on the formation of the financial security system of entrepreneurship [13]. At the same time, V. Kadala and O. Huzenko [14] revealed in their works not only certain aspects of the financial security of enterprises but also provided substantial justification for treating it as a component of the economic security of enterprises in general. These scientists drew attention to existing problems in the development of the economic security of enterprises and justified directions for solving them, given the current state of the country's development.
In addition, as a result of research by V. Kadala, O. Huzenko and E. Pavlichenko [15], it was proposed to introduce the conceptual category of "monitoring the assessment of financial security." Its essence is as follows: a comprehensive segment of the enterprise's financial security management system that is able to assess the status of each element of the management cycle based on the principles of systematic observation and the timely processing of financial information for tactical management decisions.
Results
Returning to the chosen subject of research, we believe that the modern entrepreneur, regardless of the country where business is conducted, should reconsider approaches to the formation and development of their own financial security. We propose to adapt the economic-mathematical model "Monitoring and correcting segment" (hereinafter, the EMM "Monitoring and correcting segment"). We propose to base the EMM on a number of principles: prompt response to existing financial problems; periodicity of assessment and adjustment of changes in the values of the selected financial criteria; and market orientation of adapted corrective actions. Each of the proposed principles has its own meaning, which is reflected in the actions of the performers. Without a prompt response to existing financial problems, the entrepreneurial entity not only loses its own income but also minimizes its financial result, which negatively affects tax revenue for the state budget. In turn, executors should clearly define the frequency with which the behavior of the financial criteria is examined, given the current business situation, and adjust the values of the financial indicators in the interests of their own business. As the results of the research show, it is almost impossible to achieve high-quality management decisions without an updated information flow. The principle of market orientation of adapted corrective actions should occupy a significant place in the development of an effective financial security system of an entrepreneurial entity. We believe this principle should encourage the performers not only to adjust the values of the financial criteria but, above all, to establish the appropriateness of the selected changes in view of the situation shaped by the market regulator in the chosen field of activity.
We believe that the EMM "Monitoring and Correcting Segment" may have the following stages of adaptation, with the content of the actions of the direct performers disclosed: 1. The preparatory stage includes: decision-making by the entrepreneurial entity on the use of the economic and mathematical model; determining the composition of the specialists of the financial-analytical group that will have the authority, and be responsible, for the timely adaptation of the tools of the economic-mathematical model, and that on this basis will form the information base for management decisions; establishment of an information base for calculations (decisions on the choice of criteria that participate in the existing financial security system of the entrepreneurial entity); and establishment of the frequency of evaluation and correction. The criteria for assessing the level of financial security of the entrepreneurial entity are proposed to include: the coefficient of financial independence; the coefficient of financial dependence; return on equity; the coefficient of stability of economic growth; and the payback period of equity. We believe that, under conditions of limited own financial resources, these indicators can answer the question of the level of financial security of the business entity.
2. The evaluation and corrective stage includes: determining the actual values of the financial criteria based on the reporting results; development of a five-factor model for assessing the financial security of the business entity; comparison of the actual values of the financial criteria with the planned values set for the current period of the entity's activity; compilation of a comparative analytical table that takes into account the planned and actual financial criteria; and calculation of the coefficient of the monitoring and correcting segment, Mk(c), which is based on the comparison of the actual values of the financial criteria with the planned development targets of the entrepreneurial entity. In economic terms, the coefficient Mk(c) can be considered a delta that reveals the existing changes (a sketch of this computation is given after the list of stages). The algorithm for determining the coefficient Mk(c) can have a fairly extended structure depending on the types of financial criteria selected for assessing the strength of financial security.
3. The final stage includes: substantiation of the obtained actual values of the criteria for assessing the level of financial security of the entrepreneurial entity; and formulation of conclusions and proposals for increasing the strength of financial security and future business development.
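The source does not spell out the exact algorithm for Mk(c), so the sketch below reads it as the relative deviation of each actual financial criterion from its planned value; the five criteria follow the list given in the preparatory stage, while all numeric values are purely hypothetical placeholders.

```python
# Planned vs. actual values for the five criteria named above; the numbers are
# hypothetical placeholders, not figures from the study.
planned = {"financial_independence": 0.55, "financial_dependence": 0.45,
           "return_on_equity": 0.12, "economic_growth_stability": 0.08,
           "equity_payback_period_years": 8.0}
actual = {"financial_independence": 0.50, "financial_dependence": 0.50,
          "return_on_equity": 0.10, "economic_growth_stability": 0.06,
          "equity_payback_period_years": 9.5}

def mk_c(plan: float, fact: float) -> float:
    """Monitoring-and-correcting coefficient read as a relative delta."""
    return (fact - plan) / plan

for criterion, plan in planned.items():
    print(f"{criterion}: Mk(c) = {mk_c(plan, actual[criterion]):+.1%}")
```

A deviation scale such as the one proposed in Table 2 would then map each Mk(c) value to an action level for the financial-analytical group.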
The economic and mathematical component of the proposed model "Monitoring and correcting segment" is presented in Table 1. In order to assess the level of financial security of the entrepreneurial entity in more depth, we propose to expand the actions of the performers through the use of a five-factor model for assessing the level of financial security, M(FBo). With this model, the entrepreneurial entity receives an additional information flow for deciding whether to adapt to changes. The model transforms when the coefficient of the monitoring and correcting segment Mk(c) is adapted, yielding an expanded algorithm; the general form of the financial security model of the entrepreneurial entity, taking the coefficient into account, follows accordingly. We believe that the adaptation of the EMM "Monitoring and correcting segment" has a number of key advantages: the entrepreneurial entity receives an additional information flow for making management decisions in the field of financial security; it becomes possible to assess the factor impact on the level of financial security of the entity and, as a consequence, to avoid future risks and halt declines in the values of the financial criteria; and the establishment of an evaluation period and of the direct executors will make it possible to apply the rules of the monitoring segment, namely to exercise not only control but also constant supervision over changes in the financial security of the business.
To support management decisions in the field of financial security, we propose recommended limits for Mk(c) (Table 2).
Discussion
A number of conclusions can be drawn from the results of the research. First, the existing problems in the activities of entrepreneurial entities motivate them to change their approaches to the development of a financial security system, which should be based on the monitoring segment. Second, to improve the process of assessing the level of financial security of entrepreneurial entities, the economic and mathematical model "Monitoring and correcting segment" is proposed, based on a number of principles: prompt response to existing financial problems; periodicity of assessment and adjustment of changes in the values of the selected financial criteria; and market orientation of adapted corrective actions. Third, it is proposed to include among the criteria for assessing financial security the coefficient of the monitoring and correcting segment, which adjusts the planned values of the financial criteria in relation to the actual business situation at the time of the report. Fourth, a five-factor model of financial security assessment based on key criteria that determine the rational use of equity has been proposed for entrepreneurial entities. In addition, to improve the quality of management decision-making, a scale for assessing the level of deviations according to the criterion Mk(c) was developed. Fifth, the adaptation of the model will make it possible not only to reveal the negative factors affecting the level of financial security but, above all, to ensure rational and sound management decisions based on a realistic information flow. Prospects for further research lie in the applied aspect of the problem, in order to provide more substantive ways to solve it. | 2021-08-27T17:14:58.835Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ef14949428be585e50b29ede69d7bafb4edf3cef",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/60/e3sconf_tpacee2021_07023.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0b793de6e680061e024d4c009fc76de3aba7776a",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233741700 | pes2o/s2orc | v3-fos-license | Renal Replacement Therapy in People With and Without Diabetes in Germany, 2010–2016: An Analysis of More Than 25 Million Inhabitants
OBJECTIVE Epidemiological studies have shown contradictory results regarding the time trend of end-stage renal disease (ESRD) in people with diabetes. This study aims to analyze the incidence of ESRD, defined as chronic renal replacement therapy (RRT), to investigate time trends among people with and without diabetes in Germany and to examine whether these patterns differ by age and sex. RESEARCH DESIGN AND METHODS The data were sourced from nationwide data pooled from two German branches of statutory health insurances covering ∼25 million inhabitants. We estimated age- and sex-standardized incidence rates (IRs) for chronic RRT among people with and without diabetes in 2010–2016 and the corresponding relative risks. Time trends were analyzed using Poisson regression. RESULTS We identified 73,638 people with a first chronic RRT (male 60.0%, diabetes 60.6%, mean age 71.3 years). The IR of chronic RRT among people with diabetes (114.1 per 100,000 person-years [95% CI 110.0–117.2]) was almost six times higher than among people without diabetes (19.6 [19.4–19.8]). A consistent decline in IR was observed among people with diabetes (3% annual reduction, P < 0.0001) for both sexes and all age classes. In contrast, no consistent change of IR was identified in people without diabetes. Only among women aged <40 years (P = 0.0003) and people aged ≥80 years (P < 0.0001) did this IR decrease significantly. CONCLUSIONS Incidence of chronic RRT remained significantly higher among people with diabetes. The IR decreased significantly in people with diabetes independent of age and sex. Time trends were inconsistent in people without diabetes.
A substantial proportion of patients with ESRD have diabetes when starting chronic renal replacement therapy (RRT) (6)(7)(8). Nevertheless, data comparing the RRT incidence in people with diabetes with those without diabetes are limited (9) and show wide variation, with incidence rates (IRs) among people with diabetes ranging from 59 per 100,000 person-years (PY) (10) to 678 per 100,000 PY (11). The relative risk (RR) comparing IRs in people with and without diabetes ranged from 4 (12) to 8 (11). However, different methodological approaches among the studies reduce the comparability of results. In particular, a high number of studies referred the RRT incidence in people with diabetes to the entire population (6,(13)(14)(15). Other studies solely analyzed diabetes-associated ESRD with diabetic nephropathy as the primary reason for RRT, which is the cause of ESRD in only one-half of people with type 2 diabetes (7,10). The results of those population-based studies, which analyzed the time trend of RRT IRs among people with diabetes irrespective of the underlying reason for RRT in the diabetic population, were contradictory. A decrease in incidence was seen in Hong Kong (16), whereas a stable time trend was found in Italy (17) and, indeed, an increasing trend found in Australia when solely considering type 2 diabetes (18). Moreover, some studies reported significant age differences regarding the time trend of RRT incidence (18,19). A U.S. study observed an increased incidence of ESRD as a result of diabetic nephropathy among people aged 18-45 years since 2010 but a plateaued trend among people aged ≥45 years (19). These results demonstrate the relevance of age- and sex-specific analysis for a correct understanding and interpretation of the temporal development of RRT incidence in people with and without diabetes.
In another recent study, we analyzed the RRT incidence in people with and without diabetes in 2002-2016, using data from one regional German dialysis center (8). In that study, the IR did not change during the observation period in either the population with or the population without diabetes. The incidence was approximately 4.5 times higher among people with diabetes than among those without. However, the study population was too small to analyze age- and sex-specific time trends, and the generalizability of those results to a nationwide population was limited. The aim of the current study, therefore, was 1) to analyze the IR of RRT in Germany among people with and without diabetes as well as the corresponding RR and the risk attributable to diabetes, and 2) to investigate time trends for the period 2010-2016 and analyze whether these patterns differ by age class and sex.
Study Design, Study Population, and Data Assessment
We pooled anonymized nationwide data of people who were insured at the two German branches of statutory health insurance companies, Allgemeine Ortskrankenkasse (AOK) (87% of the study population) and Betriebskrankenkasse (13%), between 1 January 2009 and 31 December 2017. These data cover approximately 25 million inhabitants (i.e., 30% of the German population) who were continuously insured in this period (i.e., with a gap of no more than 90 days) for at least 1 year, a prerequisite for defining the insured person's diabetes status. In Germany, health insurance is mandatory. Approximately 90% of the population in Germany are insured by statutory health insurance funds, while the remaining 10% are privately insured. Although there are several differences between the statutory and private system, both provide full-coverage health insurance, and German citizens have the same access to medical services, such as RRT.
Using an established algorithm (20), all people included in the study were classified as having diabetes if at least one of the following criteria was met: 1) diagnosis of diabetes (ICD-10 codes E10-E14) in at least three of four consecutive quarters, 2) at least two prescriptions of antihyperglycemic medications (Anatomical Therapeutic Chemical code A10) within 1 year, or 3) at least one diagnosis of diabetes and prescription of an antihyperglycemic medication and one measurement of blood glucose or HbA1c in the same quarter (to avoid false-positive cases as a result of data errors). We also included people with new-onset diabetes. These people were classified as having diabetes from the first quarter in which the diabetes criterion was fulfilled and retained their status throughout the study.
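An illustrative re-implementation of this classification rule is sketched below; the quarter-level data layout and field names are assumptions made for the example, not the study's actual code, and quarter granularity means criterion 2 is approximated by counting prescriptions across a four-quarter window.

```python
def has_diabetes(quarters: dict) -> bool:
    """quarters maps a consecutive quarter index to per-quarter flags:
    dx  -- ICD-10 E10-E14 diagnosis documented (bool)
    rx  -- number of ATC A10 prescriptions filled (int)
    lab -- blood glucose or HbA1c measured (bool)"""
    idx = sorted(quarters)
    # 1) diagnosis in at least three of four consecutive quarters
    dx_quarters = [q for q in idx if quarters[q]["dx"]]
    if any(sum(1 for q in dx_quarters if s <= q < s + 4) >= 3
           for s in dx_quarters):
        return True
    # 2) at least two antihyperglycemic prescriptions within one year
    if any(sum(quarters[q]["rx"] for q in idx if s <= q < s + 4) >= 2
           for s in idx):
        return True
    # 3) diagnosis, prescription, and glucose/HbA1c in the same quarter
    return any(quarters[q]["dx"] and quarters[q]["rx"] and quarters[q]["lab"]
               for q in idx)

# Example: two prescriptions two quarters apart fulfil criterion 2.
print(has_diabetes({1: {"dx": False, "rx": 1, "lab": False},
                    3: {"dx": False, "rx": 1, "lab": False}}))  # True
```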
We identified all people with a first RRT between 1 January 2010 and 31 December 2016. Data from the years 2009 and 2017 were used only for the definition of RRT and diabetes (see below).
All cases of RRT among people with and without diabetes were recorded independently of the underlying reason for RRT. Chronic RRT was defined as chronic dialysis or preemptive kidney transplantation as indicators for treated ESRD. In line with a previous study (5), occurrence of dialysis was defined as at least one relevant physician service (i.e., hemodialysis, hemofiltration, peritoneal dialysis, hemodiafiltration) or related consultation fee arising from hospital or outpatient treatment. Dialysis was recorded as chronic dialysis if one of the following criteria was fulfilled: 1) dialysis claims were documented at least once per week over a period of 12 consecutive weeks; 2) dialysis < 12 weeks was documented before a person died with an ESRD-relevant diagnosis in at least three subsequent quarters.
People with a "condition after transplantation" diagnosis were excluded if there had been no documented kidney transplantation during the observation period. Furthermore, we excluded people with RRT in 2009 or in the first year of their insurance period since only incident RRT in 2010-2016 was assessed.
Statistical Analysis
We conducted all analyses for the entire population as well as stratified by sex and age class using the age strata 0-39, 40-49, 50-59, 60-69, 70-79, and ≥80 years. IRs for chronic RRT were estimated by taking the number of first chronic RRT per person for each year of the observation period as the numerator and dividing by the cumulative PYs at risk from all insurance quarters of all insured people in the respective year minus those with a prevalent chronic RRT as the denominator. Stratum-specific and age- and sex-standardized IRs of chronic RRT were calculated with a 95% CI in the population with and without diabetes for each calendar year, using the German population of 2013 as a standard population with the aforementioned age strata. Furthermore, the standardized IRs of the populations with and without diabetes were divided to calculate the IR ratios (IRRs) for each calendar year. We also calculated the attributable risk among the population with diabetes and the population-attributable risk as a result of diabetes for each year to determine the percentage of people in whom RRT could theoretically be avoided if there were no exposure (i.e., diabetes).
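A compact numerical sketch of this direct standardization is given below; the stratum counts, person-years, and standard-population weights are hypothetical placeholders rather than figures from the study or the official German 2013 population.

```python
import numpy as np

# Hypothetical stratum-level inputs for one calendar year (age strata only,
# for brevity; the study additionally stratified by sex).
cases_diab   = np.array([30, 90, 260, 520, 700, 400])
py_diab      = np.array([1e5, 3e5, 7e5, 1.1e6, 1.0e6, 5e5])
cases_nodiab = np.array([250, 300, 450, 500, 450, 250])
py_nodiab    = np.array([9e6, 3e6, 3.5e6, 2.5e6, 1.5e6, 8e5])
std_weights  = np.array([0.42, 0.16, 0.16, 0.12, 0.09, 0.05])  # placeholders

def std_ir(cases, py, weights, per=100_000):
    # Direct standardization: weighted average of stratum-specific rates.
    return per * np.average(cases / py, weights=weights)

ir_d = std_ir(cases_diab, py_diab, std_weights)
ir_n = std_ir(cases_nodiab, py_nodiab, std_weights)
print(f"standardized IR: diabetes {ir_d:.1f}, no diabetes {ir_n:.1f} per 100,000 PY")
print(f"IRR = {ir_d / ir_n:.1f}; attributable risk among exposed = "
      f"{100 * (ir_d - ir_n) / ir_d:.0f}%")
```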
To examine time trends, Poisson regression models were fitted with the IR of chronic RRT as the dependent variable for people with and without diabetes, both in the population as a whole and in the age and sex strata.
Year of RRT (difference from 2010) was used as an independent variable to estimate the effect of calendar time. All models that were not stratified for age and/or sex were adjusted for these variables using the youngest age-group (<40 years) and female sex as a reference group. Furthermore, analogous Poisson models were fitted for the entire population, including a variable presence of diabetes (yes vs. no) and an interaction term "diabetes * year of RRT" to ascertain whether time trends differed significantly between the populations with and without diabetes.
To account for overdispersion of the outcome, we adjusted all models using the deviance scale on the basis of data cumulated over the covariate strata. We performed all analyses using SAS 9.4 TS1M1 for Windows (SAS Institute, Cary, NC).
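A Python analogue of these models is sketched below using simulated stratum-level counts; it mirrors the described specification (log person-years offset, year effect, diabetes-by-year interaction, deviance-based scale for overdispersion) but is an illustrative reconstruction, not the authors' SAS code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for year in range(7):                  # 2010..2016 coded as 0..6
    for diabetes in (0, 1):
        for age_class, base in (("<40", 5), ("40-69", 30), (">=70", 80)):
            for sex in ("female", "male"):
                py = 2e5 if diabetes else 1e6          # hypothetical person-years
                rate = base * (6 if diabetes else 1) / 1e5
                rate *= 0.97 ** year if diabetes else 1.0   # built-in 3% decline
                rows.append(dict(year=year, diabetes=diabetes,
                                 age_class=age_class, sex=sex, py=py,
                                 cases=rng.poisson(rate * py)))
df = pd.DataFrame(rows)

model = smf.glm(
    "cases ~ year * diabetes + C(age_class, Treatment(reference='<40'))"
    " + C(sex, Treatment(reference='female'))",
    data=df, family=sm.families.Poisson(), offset=np.log(df["py"]),
).fit(scale="dev")                      # deviance scale absorbs overdispersion

# exp(year) is the annual IR change without diabetes; adding the interaction
# coefficient gives the annual change with diabetes, and the interaction term
# tests whether the two time trends differ.
print(np.exp(model.params[["year", "year:diabetes"]]))
```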
Ethics
Neither individual written consent by patients nor ethical approval was required because the data were anonymous and no link to primary data was intended (21).
Study Population
The description of all insured people is presented in Table 1. Diabetes prevalence remained nearly constant (12.6% in 2010, 12.4% in 2016), with a somewhat higher proportion in women (13.2% vs. 12.5%).
We identified 73,638 people (44,196 men, 29,442 women) with a first chronic RRT in the period 2010-2016. About three-fifths of these people (men 59.3%, women 62.6%) had diabetes at the start of first chronic RRT, with proportions remaining stable throughout the study period.
The mean age at the start of chronic RRT was 71.3 years. People with diabetes were markedly older at the start of chronic RRT (73.0 years) than those without diabetes (68.6 years). The age at the start of chronic RRT increased slightly in people with diabetes from 72.6 to 73.5 years between 2010 and 2016 but decreased in people without diabetes from 69.2 to 67.6 years.
IR, RR, and Attributable Risk
Age- and sex-standardized IRs, IRRs, and attributable risks are shown for each calendar year in Table 2. The IRR comparing the RRT incidence between people with and without diabetes was 6.3 (95% CI 5.8-6.9) in 2010, 6.0 (5.5-6.5) in 2013, and 5.4 (5.0-5.8) in 2016. More than four-fifths of the incidence of chronic RRT in people with diabetes was attributable to diabetes. In the total population, almost one-half of chronic RRT incidence was attributable to diabetes. The IR was twice as high in men as in women, with greater differences in the subpopulation without diabetes than in that with diabetes. In contrast, the RR and attributable risk were higher in women. However, all observed trends were quite similar in both sexes.
Analysis of Time Trend in the Entire Population and Stratified by Age and Sex
The results of the incidence time trend from the fully adjusted Poisson models are shown in Table 3. The effect of calendar year in the populations with and without diabetes is shown in models 1 and 2. We found a significant decrease in chronic RRT incidence among people with diabetes (3% annual reduction, P < 0.0001), whereas no consistent change was observed among people without diabetes.
CONCLUSIONS
Main Findings
This study is one of only a few large population-based studies to analyze the time trend of chronic RRT incidence in the populations with and without diabetes, using the population with diabetes as the population at risk and recording all cases of chronic RRT irrespective of the underlying reason. IRs among people with diabetes were almost six times higher than in people without diabetes. A significant decrease in the incidence of chronic RRT was found during the observation period in people with diabetes, independent of age and sex. However, no consistent time trend was observed in people without diabetes with regard to age and sex.
Comparison With National Studies
In the current study, the IR in the populations with and without diabetes is in line with, and somewhat higher, respectively, than the findings of another German study that analyzed claims data of one small insurance company in 2005/2006-2008 (5). Compared with our recent regional study analyzing data from one regional German dialysis center (8), where no change in time trend was found, the IR of RRT in the current study is approximately 1.3 times higher in both the populations with and without diabetes, and thus the RRs were very comparable. One possible explanation for the different findings between the two studies could be disparities between German health insurance funds and German regions with regard to insurant structures, health behaviors, and the prevalence of diseases such as cardiovascular disease and diabetes (22)(23)(24). In particular, the increased RRT incidence in our study compared with the two previous German studies is due to the large proportion of study participants insured by the AOK. AOK is known to have a high proportion of insured persons with cardiovascular diseases and diabetes, a high number of people with migratory backgrounds, and a high number of people who smoke (24), all of which are known risk factors for the development of kidney disease and ESRD.
Comparison With International Studies
An international comparison with other studies is difficult since different methodological approaches were used among the studies. We found only a few epidemiological population-based studies with a comparable study design regarding 1) outcome (all cases of RRT and not only diabetic nephropathy as the primary reason for RRT) and 2) denominator (the population with diabetes as the population at risk [i.e., diabetes prevalence known or at least estimable]). The age- and sex-standardized IR of chronic RRT in people with diabetes in our study (114.1 per 100,000 PY) was fairly comparable with findings from Australia (93 per 100,000 PY) (18) and with results from Italy (17). Likewise, the age-adjusted RR comparing people with and without diabetes in our study was well in line with that observed in Italy (17).
The crude IR of chronic RRT among people with diabetes in our study was considerably lower (i.e., three to seven times) than the rates estimated in studies from Taiwan (11,25) and Hong Kong (16). However, comparability is limited by the different definitions of outcomes and study populations. Interestingly, unlike our study, neither of the Asian studies identified a sex difference in the incidence of chronic RRT (11,25).
Three studies reported time trends, with contradictory results. The decrease in chronic RRT incidence in people with diabetes of 4% per year observed in the Hong Kong study between 2000 and 2012 was very comparable with our findings (16). In contrast, the 2004-2013 Italian study observed stable IRs in both the populations with and without diabetes (17). Likewise, a study from Australia covering the years 2002-2013 reported an increasing trend among people with type 2 diabetes (18). Despite only having analyzed diabetic nephropathy as a reason for chronic RRT among people with diabetes, the findings of a large U.S. study are worthy of mention (7). Although diabetic nephropathy is the reason for chronic RRT in only 50% of people with type 2 diabetes, the age- and sex-adjusted IRs in the U.S. study were considerably higher than in the current study (260.2 per 100,000 PY in 2000, 173.9 per 100,000 PY in 2014). The reported decrease in IRs during the study period (2.8% reduction per year) was well in line with the declining incidence in the population with diabetes found in our study. It is remarkable that another U.S. study analyzing time trends in the incidence of hospitalizations for ESRD found an increasing trend among young people with diabetes aged 18-45 years, while the incidence has been plateauing in the age-groups >45 years since 2010 (19). An Australian study analyzing the age-specific time trend of chronic RRT incidence found a strong annual increase of 4.2% in the <50 and >80 age-groups among nonindigenous people, with no consistent time trend observed in the intermediate age-groups (18). In contrast, the decrease in incidence among people with diabetes identified in our study was more prominent in the younger age-groups, with the steepest decrease in women <40 years (12% reduction per year) and men aged 40-49 years (8% reduction per year). The observed decrease in chronic RRT incidence among people with diabetes in all age classes might partially be attributed to improvements in diabetes care: better control of blood glucose, including therapy with sodium-glucose cotransporter 2 inhibitors, as well as early, adequate, and consistent therapy of hypertension with renin-angiotensin-aldosterone system blockers, and diagnosis and treatment of kidney disease at an early stage in people with diabetes. This suggestion is supported by the increasing age at the start of RRT during the study period (72.6 years in 2010, 73.5 years in 2016). Moreover, the increased number of people with diabetes participating in the disease management programs for type 1 and type 2 diabetes, which aim to prevent complications of diabetes, including ESRD, could also contribute to this favorable trend. A further explanation of this decline could be that more people with diabetes were detected at earlier stages of disease and thus had a lower risk of late complications of diabetes, including ESRD. In contrast, the time trend of incidence in the population without diabetes was age and sex dependent, with a significant decrease only in women aged <40 years and in men and women aged >80 years. The decrease observed among younger women could be a result of better compliance and regulation of blood pressure than in young men. The inconsistent time trend of RRT incidence in the middle age-groups might be explained by late and insufficient treatment of hypertension, which leads to deterioration of renal function and, as a consequence, to vascular nephropathy. The declining rates among the elderly population could be explained by improvements in medical care for nephrological disorders.
Another explanation could be that older patients with ESRD, who are often multimorbid, are treated without dialysis. Indirect support for the latter could be the decreasing age of patients without diabetes at the start of dialysis from 69.2 to 67.6 years during the study period.
Limitations and Strengths
Our study has some limitations. First, the claims data used potentially did not clearly distinguish between acute and chronic dialysis, particularly among people who died within 3 months of starting dialysis. To account for this, we developed an algorithm using a combination of physician services data for dialysis and clinical diagnoses relevant to chronic terminal renal disease. Second, we were unable to analyze important clinical variables, such as diabetes duration; clinical markers, such as glomerular filtration rate and blood pressure; and lifestyle factors, such as smoking. These variables are known prognostic factors for the development of ESRD among people with diabetes. Because of the highly sensitive nature of these personal data and current data protection legislation, physicians are not permitted to transfer such data to insurance companies. However, this data source does offer the advantage of providing a large number of cases, which allows for a population-based approach. Moreover, the investigation of potentially explanatory factors was not the main objective of this study. Third, we were unable to distinguish between type 1 and type 2 diabetes with the current data set. However, since the majority of people with diabetes starting chronic RRT can be assumed to be people with type 2 diabetes, our findings are primarily true for a population with type 2 diabetes. Fourth, our study population is confined to two large statutory health insurance branches constituting approximately 30% of the German population. Because of sociodemographic and health-related differences between health insurance companies, the insured people included in the study may differ from those of other public and private health insurance companies (22)(23)(24). Therefore, the results can only be partially generalized to the entire German population. However, the estimated IRs in the populations both with and without diabetes and the corresponding RRs were comparable with those of both previous German studies (5,8). Finally, we analyzed the incidence of chronic RRT, which only counts cases of treated ESRD. It cannot be ruled out that some patients did not receive dialysis because of severe comorbidities, such as a threat of heart failure, or because of decisions against dialysis for religious or other personal reasons. However, all patients in Germany with an existing medical indication for dialysis have a statutory entitlement to dialysis. A number of strengths should also be considered. First, our study is the first nationwide population-based study in Germany, covering almost one-third of the German population, to analyze the time trend of chronic RRT incidence among people with and without diabetes. Second, we were able to record all cases of chronic RRT in people with diabetes independently of their primary cause (i.e., not only diabetic nephropathy as a primary cause for chronic RRT). This is of note because, especially in patients with type 2 diabetes, it is not always easy to distinguish between diabetic nephropathy as the main reason for ESRD and diabetes as a comorbidity when people with diabetes have coexisting diseases (e.g., hypertension or renal disease with nondiabetic pathogenesis) (26,27). Finally, we were able to estimate diabetes prevalence in the study population using an established algorithm. This methodological approach considers the increasing prevalence of diabetes in the population at risk (in contrast to IRs calculated in the general population) and, thus, allows a correct interpretation of results concerning the time trend.
In conclusion, the IR of RRT was six times higher in people with diabetes than in those without diabetes during the study period. The incidence of chronic RRT significantly decreased during the observation period in people with diabetes in all age and sex classes. In contrast, no consistent time trend was seen in people without diabetes with divergent age-and sex-specific results. | 2021-05-06T06:16:11.604Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "3f01fe132a71685890e40c148354112bd7411482",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "63910c996a34a84b7765eed15b1064429385695f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219967508 | pes2o/s2orc | v3-fos-license | Impact of BMI on the outcome of metastatic breast cancer patients treated with everolimus: a retrospective exploratory analysis of the BALLET study
Introduction: Reliable biomarkers of response to mTOR inhibition are yet to be identified. As mTOR is heavily implicated in cell metabolism, we investigated the relationship between BMI variation and outcomes in metastatic breast cancer (mBC) patients treated with everolimus. Results: We found a linear correlation between the duration of everolimus exposure and BMI/weight decrease. Patients exhibiting >2 kg weight loss or >3% BMI decrease from baseline at the end of treatment (EOT) had a statistically significant improvement in PFS. Interestingly, a similar BMI/weight decrease within the first 8 weeks of therapy identified patients at higher risk of progression. Patients and methods: We performed a retrospective analysis of patients enrolled in the BALLET trial who progressed during the study. The primary end-point was progression-free survival (PFS). The secondary end-point was the identification of other predictors of response. Conclusion: A >3% weight loss at EOT is associated with better outcome in mBC patients treated with everolimus. On the contrary, a significant early weight loss represents a predictor of poor survival and could therefore be used as an early negative prognostic marker. As PI3K inhibition also converges onto mTOR, these findings might extend to patients treated with selective PI3K inhibitors and warrant further investigation.
INTRODUCTION
Activation of phosphatidylinositol 3-kinase (PI3K)-Mammalian Target of Rapamycin (mTOR) signalling is a recognized mechanism of resistance to endocrine therapies [1,2], and targeting PI3K-mTOR reverses this resistance. The BOLERO-2 study showed the efficacy of the combination of the mTOR inhibitor everolimus plus exemestane in patients with ER-positive/HER2-negative advanced breast cancer (BC) resistant to non-steroidal aromatase inhibitors (NSAIs), with a significant improvement in progression-free survival (PFS) in comparison to exemestane monotherapy (7.8 months versus 3.2 months) [3]. These findings led to FDA approval of the dual blockade for the treatment of advanced or metastatic hormone receptor-positive BC which progressed after NSAIs.
Unfortunately, reliable biomarkers of response to mTOR targeted therapy are yet to be identified.
The BALLET study is a phase IIIb, expanded access, multicentre trial evaluating the safety of everolimus plus exemestane in patients with hormone receptor-positive, advanced or metastatic BC who progressed on prior NSAIs [20].
Here, we present the results of a retrospective, exploratory analysis evaluating the impact of BMI and weight variation on the outcome of a subgroup of patients participating in the BALLET trial whose disease progressed during the study.
RESULTS
As we wanted to investigate the correlation between BMI variation during treatment and the risk of progression, only the 687 patients who progressed during the trial were included in this analysis. Of these, 635 patients (92.43%) had at least a BMI screening/baseline measurement and a corresponding post-baseline or EOT measurement.
We observed a statistically significant decrease in BMI at EOT in comparison to baseline (median BMI 24.29 versus 25.31, respectively; Figure 1A).
There was no correlation between BMI at baseline and PFS (P = 0.38, Figure 1). When we further stratified the patients by BMI category, we observed an increased PFS in the group of women with the lowest BMI (Figure 1B). However, only 14 patients had a baseline BMI <18.5 kg/m², and this result should therefore only be interpreted as a trend.
Correlation between weight and exposure to everolimus
We found a linear correlation between everolimus exposure time and weight variation (Figure 2A and 2B). With increasing drug exposure time, there was a statistically significant increase in weight loss, both in absolute terms (kg; rho = 0.27, p < 0.001) and as a percentage (rho = 0.26, p < 0.001).
After patient stratification according to the "Cancer-Associated Weight Loss" classification (see Materials and Methods), we found an association between everolimus exposure duration and weight loss severity (Supplementary Figure 1). The difference in exposure time according to the grade of weight loss was statistically significant (Kruskal-Wallis test, p < 0.001). Median exposure time increased proportionally with weight loss severity grade from 0 to 4.
Correlation between BMI/weight changes and PFS
We found a positive correlation between a weight loss of >2 kg or >3.17% from baseline and outcome, with a median PFS of 70 days (95% CI 55-86, P = 0.009) versus an average of 57 days for a weight loss of <2 kg or weight gain (Figure 3). PFS at 6 months was also significantly higher in the two groups recording the greatest weight loss: 18.1% (95% CI 12.3-24.7) for patients who lost more than 6.90% and 13.4% for those who lost between 3.17% and 6.90% (Figure 3C). In particular, in a post-hoc analysis, the two groups which showed the most significant difference in terms of PFS were <−6.90% versus −3.17% and 0% (see Supplementary Tables 1 and 2).
This tendency was confirmed after further stratification of patients according to the "Cancer-Associated Weight Loss" classification (Supplementary material).
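For readers who want to reproduce this kind of comparison, the sketch below runs a Kaplan-Meier and log-rank analysis with the lifelines package; the column names, the tiny example dataset, and the two-group split at the 3.17% cut-off are assumptions for illustration and not the BALLET data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy dataset: all patients progressed, as in this analysis subset.
df = pd.DataFrame({
    "pfs_days": [55, 70, 120, 40, 90, 150, 60, 30, 200, 80],
    "progressed": [1] * 10,
    "weight_change_pct": [-8, -5, -7, -1, -4, -9, 0, 2, -6, -2],
})
df["group"] = pd.cut(df["weight_change_pct"], bins=[-100, -3.17, 100],
                     labels=["loss > 3.17%", "loss <= 3.17% or gain"])

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group", observed=True):
    kmf.fit(sub["pfs_days"], sub["progressed"], label=str(name))
    print(name, "median PFS:", kmf.median_survival_time_)

a, b = (df[df["group"] == g] for g in df["group"].cat.categories)
result = logrank_test(a["pfs_days"], b["pfs_days"],
                      a["progressed"], b["progressed"])
print("log-rank p =", result.p_value)
```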
Correlation between BMI variation at 4/8 weeks and PFS
To investigate the potential predictive value of BMI decrease in this patient cohort, we analysed "early" weight variation at 4 and 8 weeks of treatment. After exclusion of patients who progressed within 4 weeks (190 patients) or 8 weeks (304 patients), the numbers of patients analysed were 440 and 318, respectively.
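This exclusion step is essentially a landmark analysis; a minimal sketch is shown below, with an assumed per-patient table rather than the trial data.

```python
import pandas as pd

patients = pd.DataFrame({"patient_id": range(6),
                         "pfs_days": [20, 35, 90, 15, 120, 60]})

def landmark_cohort(data: pd.DataFrame, landmark_days: int) -> pd.DataFrame:
    # Drop patients who progressed before the landmark, then reset the clock
    # so follow-up is measured from the landmark onward.
    kept = data[data["pfs_days"] > landmark_days].copy()
    kept["pfs_from_landmark"] = kept["pfs_days"] - landmark_days
    return kept

print(landmark_cohort(patients, 28))   # 4-week landmark keeps 4 of 6 patients
```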
Patients who recorded a significant BMI variation at 4/8 weeks of treatment had a worse prognosis, with this tendency being clearer at the 4-week time point (Figure 4). On the contrary, patients who gained weight showed a statistically significant increase in median PFS (p = 0.02, log-rank test). In particular, in a post-hoc analysis, the two groups which showed the most significant difference in terms of PFS were <−3.17% and 0% versus >0% (see Supplementary Tables 3 and 4).
Furthermore, patients who recorded limited or no weight loss at 4 or 8 weeks but lost >3.17% of their initial weight by EOT showed a statistically significant increase in both median and 6-month PFS compared to the other three categories (median PFS 115 days, 95% CI 103-134, P < 0.001) (Figure 5). Conversely, patients recording a significant weight loss at 4/8 weeks but limited or no weight loss at EOT had the worst prognosis (median PFS 73.5 days, 95% CI 62-90).
DISCUSSION
Weight loss is amongst the most commonly reported side effects of treatment with PI3K and mTOR inhibitors: 26.8% of patients randomized to the experimental arm of the SOLAR-1 study showed some degree of weight loss, compared to only 2.1% of patients in the control group [4], with other studies reporting similar results [21,22]. In our patient population, all of whom progressed during the study, a much higher percentage of subjects treated with the drug experienced weight loss (76.85%), with 23.3% of our patients recording a loss of more than 6.90% of their baseline weight. This finding is in line with the results of another recent observational study investigating the predictive role of fasting glucose and BMI in breast cancer patients treated with the combination everolimus-exemestane [23].
Martin et al. developed a BMI-adjusted weight loss grading system for cancer patients, which also provides prognostic indications [24]. We used this classification as a tool to clearly identify significant differences in our population. However, while it helps define the degrees of weight loss that have clinical implications for cancer patients, the classification does not differentiate according to cancer type and stage, nor does it take into account causes of weight loss other than cancer, such as co-morbidities and cancer-related treatments. Instead, it assumes that the cause is irrelevant, as weight loss irremediably translates into metabolic dysregulation and worse prognosis, irrespective of the mechanisms behind it. This assumption is not confirmed in our analysis, where everolimus-related weight loss seems to correlate with better prognosis and a PFS benefit.
Once the relationship between drug exposure time and weight was established, even in the presence of many confounding factors, we explored the correlation between weight loss and progression-free survival. Women losing more than 3.17% of their initial BMI/weight during treatment with everolimus showed a statistically significant improvement in 6-month PFS compared to the others. Median PFS was also significantly higher in these patients compared to those who recorded limited or no weight loss (Figure 3).
While previous reports showed worse outcomes in patients with advanced cancer who developed significant weight loss [24,25], we postulate that the weight loss observed in our patient population may be an on-target toxicity of everolimus and mTOR inhibitors, rather than an expression of tumour-associated cachexia.
Of note, while cachexia and sarcopenia are extremely common in lung, gastrointestinal, prostate, and head and neck cancers, the percentage of metastatic breast cancer patients who develop cancer-related wasting syndromes is reportedly small [26,27]. However, a diagnosis of cachexia cannot be made in the absence of muscle wasting [28,29], and this is not normally investigated in the clinical setting; instead, anthropometric measures are used as surrogates for muscle mass. Furthermore, a role for mTOR inhibitors in preventing and/or reversing tumour-associated cachexia through restoration of autophagy or reduction of IL-6 levels has been shown previously [30][31][32]. To investigate the predictive value of BMI decrease during everolimus treatment, we explored the relationship between early-stage weight loss and outcomes. Interestingly, patients who recorded a weight gain at 4 or 8 weeks also showed an increased median PFS in comparison to all the others (Figure 4). This increase was more pronounced at 4 weeks but still visible after 8 weeks of treatment. On the contrary, patients who recorded a significant weight loss in the first 4 weeks of treatment showed the worst prognosis. Notably, the number of patients who recorded a weight loss >6.90% from baseline was extremely low at both time points, and the results for this group may therefore be less reliable. Nevertheless, these results were strengthened by further analyses. We used the weight loss cut-off previously identified to stratify patients according to weight loss at 4 or 8 weeks and at EOT: patients who lost a limited amount of weight (<3.17%) or gained weight by 4 or 8 weeks, but recorded a weight loss >3.17% by EOT, showed a statistically significant increase in median PFS in comparison to all other patients (Figure 5). Patients who recorded significant weight loss in the early stages and then limited weight loss or weight gain at EOT showed the worst prognosis.

Figure 5: Correlation between weight/BMI decrease at 4/8 weeks/EOT and PFS. On the basis of the weight loss distribution at 4 or 8 weeks and at the end of treatment, we identified 4 categories of patients: A. patients who lost more than 3.17% of their initial weight at 4 or 8 weeks as well as at the end of treatment; B. patients who lost less than 3.17% of their initial weight at 4 or 8 weeks, but more than 3.17% by EOT; C. patients who lost more than 3.17% of their initial weight at 4 or 8 weeks, but less than 3.17% at EOT; D. patients who lost less than 3.17% of their initial weight at 4 or 8 weeks as well as by EOT. We then correlated these 4 groups with the outcome expressed as PFS. (A) 4 weeks: EOT weight loss and PFS. (B) 8 weeks: EOT weight loss and PFS.
Everolimus reaches steady state by 7 days [33], and noticeable changes in markers of drug activity are detected only after at least 4 weeks of treatment [34]. Also, fast, unexplained weight loss is a hallmark of cancer cachexia, while weight variation from other causes is a multi-factorial metabolic response that requires time. It is therefore conceivable that any significant weight loss occurring between baseline and 4 weeks reflects cancer-associated weight loss rather than a drug effect, especially considering the patient population and progression risk. Indeed, in our analysis, the groups of patients who lost significant weight within 4 weeks of treatment (Figure 5A and 5C) showed the worst prognosis in terms of median PFS (54 days for group A and 39 days for group C versus 78 days for group B, P = 0.00118).
Between 4 and 8 weeks of treatment, it becomes more difficult to distinguish between cancer-related catabolism and drug effect, as the PFS curves and HRs tend to overlap (Figure 4B and 4D). This could be because a higher proportion of patients may be experiencing wasting syndrome symptoms. In fact, 47% of the patients who recorded a weight loss of >3.17% at EOT had already reported a quantitatively similar weight loss by week 8 (Supplementary Figure 3). These patients did not do as well as the others (median PFS of 115 days for group B versus 97 days for group A, P = 0.0000906) (Figure 5B).
A significant decrease in BMI/weight in the early stages of everolimus treatment is associated with a higher risk of progression and worse prognosis in our analysis, in accordance with previous reports [24,35]. On the other hand, everolimus-associated weight loss recorded at EOT identifies a patient population gaining clinical benefit from the mTOR inhibitor, which translates into a better outcome. As PI3K signalling converges on mTOR, similar effects can be hypothesized for PI3K-selective inhibitors, such as alpelisib.
Our study has some limitations. First, the retrospective nature of the data is prone to bias. Second, patients enrolled in the BALLET study were administered a combination of everolimus plus exemestane: metabolic effects of the aromatase inhibitor and pharmacological interactions cannot be excluded, as both drugs are metabolised in the liver. Also, the impact of tumour-associated weight loss on this analysis cannot be accurately quantified despite all our efforts, as it is difficult to distinguish amongst causes of weight loss. Furthermore, the length of treatment interruptions and the seriousness of adverse events varied across patients and could have affected the results. Finally, the impact of other potential confounding factors, such as other concurrent therapies or comorbidities, cannot be excluded. Strengths of our analysis include the applied methodology and the use of internationally recognised stratification tools that account for both weight and BMI variations. Also, a significant difference in survival outcomes is traditionally hard to demonstrate in heavily treated, advanced-stage patients.
Nevertheless, our study identified significant everolimus-associated weight loss as a positive prognostic factor predicting PFS benefit in patients with advanced hormone receptor-positive BC. On the other hand, early weight loss is associated with an increased risk of disease progression and could be used as an early negative prognostic marker.
To our knowledge, our study is the first to report these findings. Our results underline the utility of BMI and weight information in the management of cancer patients. Stratification on the basis of these factors may help monitor treatment response in the clinical setting.
Prospective studies are needed to confirm and validate our results.
MATERIALS AND METHODS
The BALLET study (EudraCT #2012-000073-23) recruited 2131 post-menopausal women with advanced or metastatic hormone receptor-positive breast cancer that had recurred or progressed on non-steroidal aromatase inhibitors (NSAIs). Patients were enrolled irrespective of the number of prior lines of chemotherapy or other targeted treatments, and NSAIs were not necessarily the last treatment these patients had received.
Everolimus was given in 28-day cycles at a dose of 10 mg/day in combination with exemestane (25 mg/day). Treatment was stopped in the case of disease progression, unacceptable toxicity, death, or local reimbursement of everolimus. The primary objective of the study was assessment of the safety of the combination of everolimus plus exemestane; secondary objectives included evaluation of the severity of grade 3 and 4 AEs.
As we were interested in the correlation between BMI variation and risk of disease progression, only patients who progressed during treatment were included in our analysis; EOT was therefore always synonymous with disease progression. 687 patients were evaluated. Weight measurements were recorded at baseline and at successive clinical assessments until the end or discontinuation of the study. BMI was calculated as weight in kilograms divided by the square of height in meters (kg/m²). A BMI between 18.5 and 24.9 was considered normal, and a BMI ≥25 defined "overweight" [36]. As height remains constant over time, we used weight and BMI interchangeably in our analyses.
We defined everolimus exposure time as the total number of days of administration, including restart after suspension. Weight variation was calculated as ∆W = (EOT weight − baseline weight)/baseline weight and expressed as a percentage of weight change.
To better classify weight loss severity in cancer patients, we used the classification of cancer-associated weight loss developed by Martin et al. [24]. Briefly, the grading system takes into account both weight loss percentage and BMI: weight loss is expressed as a function of the BMI measured at EOT, and 5 grades of increasing severity are assigned as weight loss increases and BMI decreases (Table 1).
Progression-free survival (PFS), defined as the time between the start of everolimus and progression or death, was calculated in relation to BMI variation from baseline.
Statistical analysis
Continuous variables were expressed as median and range (minimum-maximum), according to data distribution, after performing the Shapiro-Wilk test for normality. Categorical variables were expressed as absolute frequencies and percentages and compared with chi-squared analysis. Baseline and EOT measurements of continuous variables were compared with the Wilcoxon matched-pairs signed-rank test. The relationship between everolimus exposure time and delta (∆) weight loss was evaluated using the Spearman rho coefficient. The Kruskal-Wallis test was applied to analyse everolimus exposure time stratified into 5 categories of weight loss severity according to Martin et al. [24]. Post-hoc pair-wise comparisons using the Mann-Whitney test were conducted and corrected with the Holm method. PFS was estimated using the Kaplan-Meier approach, and comparisons between survival distributions were performed with the log-rank test. Since we were interested in the effect of weight/BMI loss on PFS, and this is a time-dependent covariate, we used landmark analysis as the method of choice to avoid immortal-time bias. Two separate analyses were performed at weeks 4 and 8 after the beginning of therapy; only patients who were alive and progression-free at the time of the landmarks were included, to avoid confounding. To evaluate the prognostic role of weight loss according to the "Classification of Cancer-Associated Weight Loss", we used a univariate Cox regression model with estimation of the hazard ratio (HR), after the proportional hazards assumption had been verified. All statistical analyses were performed using commercially available software (Stata/SE 14.1, StataCorp LP, USA; R version 3.5.0). All P values were calculated from 2-sided tests, with 0.05 as the significance level.
ACKNOWLEDGMENTS
We thank the patients who participated in the BALLET trial; the investigators, study nurses and clinical research associates from the individual trial centers who provided ongoing support.
CONFLICTS OF INTEREST
The authors have no conflicts of interest to declare.
FUNDING
This study was sponsored by Novartis Pharmaceuticals Corporation. No grant number is applicable.
"year": 2020,
"sha1": "ef7a51c7dbbd055cced82c7dcd8ef35de8d32f5c",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/27612/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2949af2fe522aa038940cc97413a72bccd92b146",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Enzymatic Modification of the 5' Cap with Photocleavable ONB-Derivatives Using GlaTgs V34A
Abstract The 5’ cap of mRNA plays a critical role in mRNA processing, quality control and turnover. Enzymatic availability of the 5’ cap governs translation and could be a tool to investigate cell fate decisions and protein functions or develop protein replacement therapies. We have previously reported on the chemical synthesis of 5’ cap analogues with photocleavable groups for this purpose. However, the synthesis is complex and post‐synthetic enzymatic installation may make the technique more applicable to biological researchers. Common 5’ cap analogues, like the cap 0, are commercially available and routinely used for in vitro transcription. Here, we report a facile enzymatic approach to attach photocleavable groups site‐specifically to the N2 position of m7G of the 5’ cap. By expanding the substrate scope of the methyltransferase variant GlaTgs V34A and using synthetic co‐substrate analogues, we could enzymatically photocage the 5’ cap and recover it after irradiation.
Introduction
The 5' cap is a hallmark of eukaryotic transcripts produced by RNA polymerase II, such as mRNAs and long non-coding RNAs (lncRNAs), but also short RNAs, like snRNAs and miRNAs. The 5'-5'-triphosphate linkage between a methylated guanosine and the first transcribed nucleotide is formed co-transcriptionally. [1] The 5' cap plays a critical role in processing of pre-mRNA, including splicing and polyadenylation, in nuclear export and also in protection from exoribonucleases. [2] One fundamental and highly conserved cap-dependent process is translation in eukaryotes, which requires direct interaction with the eukaryotic initiation factor 4E (eIF4E). [3] Modifications of the 5' cap that impact the binding affinity to eIF4E also affect translation efficiency. [4] These findings promoted the idea of introducing modifications to the 5' cap to modulate translation.
The regulation of translation plays a major role in cell differentiation and proliferation. [5] For external control, photocleavable protecting groups are of interest as they can be removed by light. Light is a bioorthogonal trigger and can be controlled to act only in a defined location at a specific time. Therefore, the ability to manipulate bioactive molecules by light provides access to spatiotemporal control of their activity. [6] The "photocaging" of nucleosides, nucleotides and oligonucleotides is an established method. [7] These photocaged bioactive molecules show the potential to improve drug delivery, bioavailability or reduction of side effects in biomedical applications. [8] Recent advances have enabled the use of mRNA as a new modality in medicine, e. g. as vaccine against infectious diseases, for personalized cancer treatment, and for treatment of autoimmune diseases. [9] However, methods to control gene expression at the mRNA level are scarce.
Recent advancements toward post-transcriptional control of gene expression have led to the development of optochemical tools. Many of these rely on short regulatory RNAs, like siRNAs, miRNAs, morpholinos, or aptamers. [10] Other strategies act directly on the mRNA itself by introducing multiple photocleavable groups into the backbone [11] or structural motifs into the 5' UTR. [12] While this approach is versatile and straightforward compared to the addition of other oligonucleotides, it still requires multiple photolabile groups or the introduction of structural motifs into the 5' UTR of the mRNA. [11,12] In a recent study, we introduced FlashCaps, i.e., 5' cap analogues that carry a photolabile group connected via a carbamate linker to the exocyclic N2 position of the m7G. This approach yielded promising results, showing efficient optical control of mRNA translation by introducing a single modification without altering the sequence. However, an elaborate setup is required to synthesize, purify, and analyze these FlashCaps. [13] Here we report an enzymatic route to synthesize 5' cap analogues bearing photolabile groups at the N2 position of m7G. Using a one-step synthesis procedure to produce S-adenosyl-L-methionine (AdoMet) analogues, we were able to transfer ortho-nitrobenzyl (ONB) derivatives with a methyltransferase to the N2 position of the 5' cap. The Giardia lamblia trimethylguanosine synthase variant V34A (GlaTgs V34A) efficiently transferred the ONB derivatives, and irradiation led to partial recovery of the native cap analogue in vitro. [14] The 5' cap analogues were successfully incorporated into mRNA during in vitro transcription. After transfection of mammalian cells with the modified reporter mRNA, a luciferase assay validated higher protein expression after irradiation.
Results and Discussion
We aimed to develop a fast and efficient method to enzymatically photocage the 5' cap as an alternative to the elaborate chemical synthesis and purification of light-activatable FlashCaps. To achieve this, the natural 5' cap analogue m7Gp3G (1) was modified with a methyltransferase variant (GlaTgs V34A) previously engineered for improved promiscuity. [14,15] We prepared a set of synthetic analogues of the co-substrate AdoMet bearing derivatives of the photocleavable ONB group at the sulfonium center. Specifically, AdoMet analogues with the ortho-nitrobenzyl (AdoONB, 2a), [16] 4-bromo-2-nitrobenzyl (AdoPBr, 2b), or 4,5-dimethoxy-2-nitrobenzyl group (AdoDMNB, 2c) (Supporting Information section 4) were synthesized and tested in enzymatic reactions with 1 (Scheme 1a). The steric demand of the photocleavable group on the resulting products 3a-c was anticipated to interfere with the availability of these modified 5' caps for the proteins responsible for further processing or binding of mRNA. The photocaged caps 3a-c should then become activated by irradiation with UV light (365 nm), reconstituting the native 5' cap (1) and making it available for its natural binding partners (Scheme 1b).
In a previous approach to photocaging the 5' cap, we had focused on the N7 position of the guanosine. [4b] However, at this position, photolysis of the ONB derivatives was not successful in the sense that the native 5' cap structure was not reconstituted. In contrast, enzymatic modification of the N6 position of adenosine with ONB and nitropiperonyl groups at internal sites of RNA was successful, and these groups could also be removed by light. [16a] In the current study, we chose the exocyclic N2 position of the m7G of the 5' cap as the modification site, reasoning that photolysis should proceed without extensive side-product formation and lead to reconstitution of the native 5' cap structure.
First, we tested whether GlaTgs V34A is able to transfer the sterically demanding groups, as it had previously only been reported to transfer small aliphatic or para-substituted benzylic residues. [4a] The enzymatic reactions were incubated and analyzed via HPLC to evaluate the formation of the modified 5' cap analogues 3a-c (Figure 1C). The HPLC analysis showed that transfer of aromatic rings is possible, even with substituents in the ortho position. Importantly, the ONB group from 2a and even the red-shifted DMNB group from 2c were transferred, demonstrating the applicability of GlaTgs V34A for transferring photocleavable groups to 5' cap analogues and generating 3a-c as well as 4. However, we observed that each additional substituent on the aromatic ring lowered the conversion (Table 1). Purification of 3a-c and 4 was performed via HPLC to remove residual reactants and degradation products, as the AdoMet analogues are unstable at neutral pH. [17] After HPLC purification of the photocaged 5' cap analogues, their absorbance spectra were measured, confirming the presence of the photocleavable group on the m7Gp3G (Figure 1B).

Scheme 1. General concept of the enzymatic approach for photocaging the 5' cap. a) Enzymatic transfer of a photocleavable group from different AdoMet analogues (2a-c) to the exocyclic N2 position of the m7G of cap 0 (1) using GlaTgs V34A as a promiscuous methyltransferase. b) Photo-uncaging of the N2-modified cap analogues by irradiation at 365 nm for 30 s to retrieve the unmodified 5' cap analogue (1). PG: photocleavable group.

Figure 1. Enzymatic 5' cap modification. a) Enzymatically modified 5' cap analogues made in this study. b) Absorbance spectra of 1 and the 5' cap analogues made in this study. c) HPLC chromatograms of the enzymatic modification reaction with GlaTgs V34A and the indicated AdoMet analogues. The reaction was stopped either immediately (0 h) or after 8 h at 37°C by incubation at 65°C for 10 min. Colored peaks correspond to the indicated compounds (as validated by LC/MS), and peak areas were used to calculate conversions. The chromatograms show one representative of n = 3 independent replicates.
To test whether irradiation of 3a-c with UV light leads to efficient photolysis and re-formation of 1, we irradiated the purified 5' cap analogues and analyzed the resulting mixtures via HPLC (Figure 2). After irradiation for 30 s, the photolysis reaction was complete, and no starting material remained detectable. At the same time, a new peak corresponding to the native 5' cap was observed (Figures 2, S5). As a control, compound 4, bearing a benzyl group instead of a photocleavable group, was tested but did not lead to formation of 1 or any other new product upon irradiation (Figure 2). These data show that the enzymatically produced 5' cap analogues 3a-c, carrying photolabile ONB derivatives at the N2 position of the m7G, are in principle suitable for releasing the native 5' cap (1) upon irradiation.
However, upon irradiation of 3a-c, in addition to the desired peak for the native 5' cap 0 (1), we observed several smaller peaks, indicating the formation of side products during the photolysis reaction (Figure 2). Although irradiation of the photocaged 5' cap analogues 3a-c did not yield quantitative conversion to 1, a significant fraction was successfully photo-deprotected (Table 2). This supports our hypothesis that photocaging an exocyclic position is favorable for native 5' cap reconstitution.
As the ONB-caged 5' cap 3a showed the most efficient recovery of 1 among the tested photolabile groups, we investigated whether this modified 5' cap could be incorporated into a reporter mRNA (Figure 3a). In vitro transcription using T7 polymerase was performed in the presence of purified 3a for transcriptional priming. The ONB-cap 3a was incorporated as efficiently as cap 0 during in vitro transcription, and similar amounts of capped mRNA were obtained after digestion of uncapped mRNA (Figure 3a). Irradiation of the mRNA at 365 nm for 30 s did not cause degradation of mRNA capped with 1 or 3a (Figure 3a). To evaluate the effect of modifications at the N2 position of m7G on translation, we subsequently transfected HeLa cells with cap-caged mRNA and cap 0-mRNA as control and measured the resulting luminescence (Figure 3b).
Compared to cap 0-mRNA, luciferase activity was reduced to 49% when the 5' cap was modified with the ONB group (Figure 3a), confirming that interaction with eIF4E and translation are impaired. Irradiation of the mRNA prior to transfection led to a 1.3-fold increase in luciferase activity compared to the sample without irradiation. These data show that the photocleavable group can also be removed from cap-caged mRNA and that the released cap 0-mRNA can be translated. The relatively low increase suggests that the small photocleavable group does not completely inhibit translation. In addition, we had already observed in vitro that the native cap structure is only partially restored (43%). These results show that enzymatic transfer of photocleavable groups to the 5' cap and subsequent photo-deprotection to the native 5' cap are possible. This work provides a first proof-of-concept that enzymatic 5' cap modification is an alternative route to obtain mRNAs whose translation can be activated by light.

Table 1. Conversion of m7Gp3G (1) with different photocleavable groups by enzymatic modification. Values result from integration of HPLC peaks and represent the mean and standard deviation of n = 3 independent replicates.

AdoMet analogue    Conversion [%]
AdoBenzyl          69 ± 7
2a                 49 ± 3
2b                 38 ± 6
2c                 40 ± 9

Figure 3. a) Analysis of mRNA (Table S1) capped with 1 or 3a, respectively, with or without irradiation before loading (365 nm, 30 s). b) In-cell translation studies based on luminescence measurements performed with non-irradiated and irradiated (365 nm, 30 s) GLuc-mRNA transfected into cells. Bars represent the average of n = 6 independent experiments; error bars represent the standard deviation. Statistical analysis: two-tailed t-test. p < 0.1: *, p < 0.05: **, p < 0.01: ***.
Conclusion
We expanded the substrate scope of a promiscuous methyltransferase to photocleavable groups. We showed that ONB-based photocleavable groups can be efficiently transferred to the 5' cap to yield photoactivatable bioactive compounds. This enzymatic approach offers advantages over the synthetic route, as it is fast and cost-efficient, starting from the commercially available 5' cap. Several photocleavable groups were tested and analyzed in terms of absorbance, enzymatic conversion, and recovery of the native 5' cap after irradiation. The irradiation experiments revealed that 33-43% of the native 5' cap is restored after irradiation, a considerable improvement compared to enzymatic N7 modification of G with ONB groups, which did not yield the native compound at all. [4b] This confirms that the electronic situation of the imidazolium ion upon N7G modification of the 5' cap plays a critical role in side-product formation with ONB-derived groups. However, compared to our recent publication, in which the photocleavable group was connected via a carbamate linker to the N2 position of the m7G of the 5' cap, the recovery is reduced. [13] This is likely due to the self-immolative property of the carbamate linker, which releases CO2, as well as the radical cleavage mechanism of ONB derivatives. [18] Possibly, the close proximity of the photocleavable group to the purine ring system promotes side-product formation, which would explain the observations made in this and other studies.
Additional cell experiments analyzing protein production support the partial recovery of the native 5' cap after irradiation. The amount of protein produced from the photocaged mRNA after irradiation reached 61% relative to cap 0-mRNA, an approximately 1.3-fold increase in translation when irradiated mRNA was transfected.
Taken together, enzymatic transfer of ONB derivatives to the N2 position of the 5' cap offers an alternative to the chemical synthesis of 5' cap analogues. The preparation is faster and potentially more widely applicable when the synthetic facilities for full cap synthesis are not available. This work also showed that the N2 position of m7G of the 5' cap is more suitable than the N7 position for gaining optical control over 5' cap interactions, as the native cap could be partially recovered after irradiation. Luciferase assays revealed enhanced protein production after irradiation of the N2-modified 5' cap, supporting the hypothesis that the native cap is recovered after irradiation. However, the amount of recovered 5' cap and the relatively low translation inhibition elicited by ONB derivatives result in a rather minor turn-on effect after irradiation. As such, this study provides a proof-of-concept that the exocyclic N2 position is a suitable target for modification with photolabile groups. The reported results form the basis for future approaches to gain optical control over mRNA via enzymatic modification. One limitation is the promiscuity of GlaTgs V34A. Other methyltransferases, or further protein engineering of GlaTgs, would enable the transfer of modern photolabile groups such as coumarins or BODIPYs, yielding efficient inhibition due to steric demand and a more effective turn-on, as other photolysis mechanisms produce fewer side products.
Experimental Section
Enzymatic modification of the m7GpppG cap analogue using GlaTgs V34A: For enzymatic modification, m7Gp3G (0.4 mM), AdoMet analogue (1.2 mM), MTAN (4 μM), and LuxS (4 μM) were incubated with GlaTgs V34A (50-70 μM) for 8 h at 23°C in a final volume of 25 μL. The reaction was stopped by heating to 65°C for 10 min. The denatured enzymes were removed by centrifugation for 10 min at 21,130 g and 4°C, and the reaction mixture was analyzed by analytical HPLC. The modified cap analogues were isolated by analytical HPLC, and the collected product fractions were lyophilized. The product was taken up in 10 μL of double-distilled water (ddH2O), and the resulting solution was used for in vitro transcription.
Irradiation of samples:
The respective cap analogues were dissolved in ddH2O to give a solution with a final concentration of 500 μM. The solution (10 μL) was transferred into a PCR tube and irradiated. LEDs (LED Engin) were used to irradiate mRNA and cap analogues. The UV-A LED (λmax = 365 nm) was operated at 5 V and 600 mA in a custom-made LED setup at 23°C. The mRNA and cap analogues were irradiated at 365 nm (142 mW/cm²) for 30 s, unless otherwise noted. Cap analogues were analyzed by HPLC. Irradiated mRNA was transfected into HeLa cells (Merck), and luminescence was measured on a Tecan Infinite M1000 PRO plate reader.
Absorbance spectra analysis: The analysis of the absorbance properties of the photocaged 5' cap analogues was performed using a quartz cuvette (Hellma) together with the FP-8500 fluorescence spectrometer (Jasco). The respective cap analogues were dissolved in water at a final concentration of 100 μM. For the absorbance measurement, 20 μL of the solution were further diluted in water to yield a final volume of 100 μL (20 μM), which was transferred into the cuvette and measured. The values were normalized to the highest measured value of each measurement.
In vitro transcription of mRNA: The DNA template required for in vitro transcription was synthesized by PCR, in which the DNA sequence coding for Gaussia luciferase (GLuc) was amplified from plasmids containing the respective sequence. After purification (NucleoSpin Gel and PCR Clean-up, Macherey-Nagel), the resulting linear dsDNA was used as template (200 ng). The concentration was measured at 260 nm with the Tecan Infinite M1000 PRO. In vitro transcription was performed in transcription buffer (40 mM Tris/HCl, 25 mM NaCl, 8 mM MgCl2, 2 mM spermidine (HCl)3) by adding an A/C/UTP (0.5 mM) mix, GTP (0.25 mM), the respective cap analogue (1 mM), T7 RNA polymerase (50 U, Thermo Scientific), and pyrophosphatase (0.1 U, Thermo Scientific) for 4 h at 37°C. After the reaction, the DNA template was digested in the presence of 2 U DNase I for 1 h at 37°C, and the mRNAs were purified using the RNA Clean & Concentrator-5 Kit (Zymo Research). To digest non-capped RNAs, 10 U of RNA 5'-polyphosphatase (Epicentre) and the supplied reaction buffer were added to the purified mRNAs. After an incubation period of 30 min at 37°C, 0.5 U of the 5'-3' exoribonuclease XRN-1 (NEB) and MgCl2 (5 mM) were added, and the reaction mixture was incubated for 60 min at 37°C. Subsequently, capped mRNAs were purified using the RNA Clean & Concentrator-5 Kit (Zymo Research).
In-cell luminescence assay: One day prior to transfection, HeLa cells were seeded in a 96-well plate (30,000/well) and cultured in minimal essential medium (MEM) with antibiotics. The cells were transfected with mRNA (100 ng) in Opti-MEM (10 μL) using Lipofectamine MessengerMAX Transfection Reagent (0.3 μL) in Opti-MEM (9.6 μL). The cells were incubated with the mRNA/Lipofectamine MessengerMAX mixture for 6 h at 37°C in a total volume of 100 μL and then incubated overnight at 37°C in media. At 24 h post transfection, the supernatant was collected. Luminescence was measured using the Gaussia-Juice Luciferase Assay Kit (PJK GmbH). The supernatant of the previously prepared samples was transferred to a 96-well plate (5 μL supernatant per well). Afterwards, 50 μL of a reaction mixture (Reconstruction buffer and coelenterazine) were added to the wells, and luminescence activity was measured using a Tecan Infinite M1000 PRO plate reader. The activity in relative light units (RLU) was determined with an integration time of 3 s. Differently capped mRNAs were used: Ap3G-capped mRNA represents cap-independent translation and was subtracted as background from the other samples. All values were normalized to m7Gp3G-capped mRNA.
"year": 2022,
"sha1": "5008b50894b70bfb80059af1af66c5592b753c6a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "109f4098b671046cdd53e76f73dfcebfff2d7939",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hematopoietic Differentiation from Embryonic Stem Cells
Since the development of human embryonic stem cells (hESCs) in 1998, the potential of stem cell-based manufacturing of tissues or organs as a form of regenerative medicine has drawn broad interest because hESCs are pluripotent and can proliferate indefinitely without losing their pluripotency (Thomson et al., 1998). More recently, induced pluripotent stem cells (iPSCs) were generated from human fibroblasts (Takahashi et al., 2007) as well as other cell sources (Stadtfeld and Hochedlinger, 2010), thus accelerating research efforts to realize regenerative medicine. Theoretically, any organ can be generated from ESCs, but the obstacles to manufacturing solid organs in vitro remain great. Solid organs such as kidney and liver require well-functioning three-dimensional structures consisting of different kinds of cells as well as formation of, and communication with, blood vessels. Considering this, hematopoietic progenitors or mature blood cells derived from ESCs may be among the most attractive applications, because blood cells can operate as single cells without forming a multicellular structure. Here, we describe methods of hematopoietic differentiation from ESCs, particularly focusing on hESCs, and some problems that need to be resolved before hESC-derived blood cells can be applied in the clinical setting.
Two major strategies have been used for hematopoietic differentiation of mESCs: those based on embryoid body (EB) formation and those that use coculture with feeder cells (Gutierrez-Ramos and Palacios, 1992). In addition, genetic manipulation has also been adopted for derivation of hematopoietic cells from mESCs (Kyba et al., 2002; Perlingeiro et al., 2001). Embryoid bodies are cystic structures obtained by culturing ESC colonies in floating conditions in liquid or semisolid media. Hematopoietic differentiation in EBs is induced effectively by appropriate cytokine stimulation (Johansson and Wiles, 1995; Nakayama et al., 2000). In the early reports, only erythroid cells were detected in EBs (Doetschman et al., 1985; Lindenbaum and Grosveld, 1990). In 1991, it was reported that macrophages, neutrophils, and mast cells were also differentiated by semisolid culture of EBs in the presence of interleukin (IL)-3 (Wiles and Keller, 1991). The same group reported that bone morphogenetic protein-4 (BMP-4) mediated formation of ventral mesoderm and hematopoietic precursors from EBs (Johansson and Wiles, 1995). A later study revealed that BMP-4 promoted generation of both myeloid and lymphoid precursors from EBs, and that this effect of BMP-4 was enhanced by addition of vascular endothelial growth factor (VEGF), although VEGF alone did not have a positive effect on hematopoietic differentiation from EBs (Nakayama et al., 2000). Although EB formation is a useful method for generating hematopoietic cells with myelopoietic and lymphopoietic potential, early methods using EB formation did not succeed in generating lymphoid progenitors. The first report of simultaneous generation of both myeloid and lymphoid lineages from mESCs adopted a coculture system: mESCs were cultured on OP9 feeder cells derived from the calvaria of newborn (C57BL/6×C3H)F2-op/op mice without addition of exogenous cytokines (Nakano et al., 1994). This simple culture system enabled mESCs to differentiate into hematopoietic progenitors that could give rise to cells of various myeloid lineages as well as the B lymphocyte lineage. Since then, this cell line has been the standard feeder for hematopoietic differentiation not only from mESCs but also from hESCs. Other cell lines, such as aorta-gonad-mesonephros (AGM) region-derived stromal cells (Weisel et al., 2006) or bone marrow-derived ST2 cells (Yamane et al., 2009), have also been used, but the OP9 cell-based method appears to be the most common. To advance the understanding of the regulation of hematopoietic differentiation from mESCs, genetic manipulation of mESCs was frequently used. To demonstrate strictly that a single mESC-derived hematopoietic stem cell (HSC) could produce all lineages of mature blood cells in vivo, clonal analysis was performed using a gene transfer protocol (Perlingeiro et al., 2001). The chronic myeloid leukemia-associated oncogene bcr/abl was transferred to EB-derived hematopoietic progenitors. The cells were further cultured on OP9 cells and thereafter cloned and transplanted into irradiated recipient mice. These cloned bcr/abl-expressing cells differentiated into multiple myeloid lineages as well as into T and B lymphocytes in vivo, indicating that definitive HSCs could be generated from EB-derived progenitors by transformation with bcr/abl. A homeotic selector gene, HoxB4, proved to be a key factor in transforming EB-derived hematopoietic progenitors into definitive HSCs (Kyba et al., 2002).
Like transformation by bcr/abl, ectopic expression of HoxB4 switched EB-derived primitive progenitors into definitive HSCs capable of long-term multilineage hematopoiesis. These approaches may further our understanding of the mechanisms of HSC emergence from ESCs, as well as from primordial cells during embryogenesis, although ESCs generated by genetic manipulation may face additional hurdles when clinical application is considered.
Lineage-specific differentiation of mature blood cells from murine ESCs
In addition to HSC generation, lineage-specific differentiation of mature blood cells from mESCs has also been an important issue. Homogeneous populations of mESC-derived mature cells can be used in functional analyses and could form the basis of future hESC-based transfusion medicine. Studying the process of differentiation from mESCs to mature cells can also lead to a better understanding of normal hematopoiesis. As with HSC induction, mature blood cells are usually generated by EB formation, coculture with feeder cells, or a combination of both. Lieber et al. described an effective three-step protocol for differentiating mESCs into mature neutrophils (Lieber et al., 2004). First, EBs were formed and cultured in Iscove modified Dulbecco medium (IMDM)-based, fetal calf serum (FCS)-containing medium for 8 days. Second, the EBs were disaggregated and replated onto OP9 cells for 3 days in IMDM containing fetal bovine serum (FBS) and horse serum supplemented with oncostatin M, basic fibroblast growth factor (bFGF), IL-6, IL-11, and leukemia inhibitory factor (LIF). Finally, the cells were terminally differentiated on OP9 cells in IMDM containing platelet-depleted serum, granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), and IL-6. During 7 to 14 days of terminal differentiation culture, 6×10^6 neutrophils were obtained from 8×10^4 mESCs, and the purity of the mature neutrophils during this period reached 75% to 96%. These mESC-derived mature neutrophils expressed neutrophil-specific markers (Gr-1 and others) and contained gelatinase- and lactoferrin-positive granules. In functional assays, mESC-derived neutrophils showed superoxide production and chemotaxis comparable to those of normal neutrophils harvested from murine bone marrow. Interestingly, neutrophils differentiated from MEKK1-deficient mESCs displayed impaired migratory ability, indicating that mESC-derived neutrophils can be used to study the genetic control of neutrophil differentiation and function. With regard to the application of hESC-derived mature blood cells to transfusion medicine, the successful treatment of anemic mice by transfusion of mESC-derived erythroid progenitors was of great impact. For the differentiation, mESCs were cultured on OP9 cells in IMDM-based medium with VEGF and insulin-like growth factor-II. On day 4, dexamethasone was added, and stem cell factor (SCF), erythropoietin (EPO), and IL-3 were substituted for the cytokines, although IL-3 could be omitted. After long-term culture in these conditions, immortalized erythroid cell lines were obtained. These mESC-derived erythroid progenitor lines expressed adult-type α- and β-globins but did not express embryonic-type globins, indicating that they were adult-type erythroid cells. The erythroid progenitors could be differentiated in vitro into mature, enucleated cells. They could also proliferate and differentiate in vivo: when transplanted into mice in which acute anemia had been induced by hemolysis, the anemia was ameliorated and the mice showed greater survival rates. Notably, no tumors were observed in the erythroid progenitor-transplanted mice for at least 6 months. These results are encouraging for future transfusion medicine using hESC-derived cells, although thorough investigation into the possibility of tumorigenesis is needed. Megakaryocytes and platelets were effectively produced by a combination of EB formation and coculture with OP9 cells.
After EB culture for 6 days, megakaryocyte progenitors expressing both c-kit and integrin αIIb were sorted and further cultured on OP9 cells with TPO. For terminal differentiation, a mixture of TPO, IL-6, and IL-11 was substituted for the cytokines after 3 days. Using this method, 2×10^5 mESCs produced 1×10^6 megakaryocyte progenitors on day 6 and 2.5×10^6 megakaryocytes on day 12. After 2 to 8 days of coculture with OP9 cells, the culture supernatants contained proplatelets and platelets. Electron microscopy revealed that they contained alpha and dense granules, as do platelets from adult mice. However, these mESC-derived platelets showed low levels of glycoprotein (GP) Ibα expression, and in an in vitro thrombus formation model they had an impaired ability to participate in thrombogenesis, which is triggered by binding of von Willebrand factor to GPIbα. Interestingly, shedding of GPIbα was prevented by addition of metalloproteinase inhibitors to the culture medium during differentiation, and this inhibition improved the thrombogenic ability of the mESC-derived platelets. The effect of inhibiting metalloproteinase activity was further examined using an in vivo model: mESC-derived platelets produced with or without metalloproteinase inhibition were transfused into irradiated mice with severe thrombocytopenia, and the metalloproteinase inhibitors increased the percentage of mESC-derived platelets in the peripheral blood of the transfused mice. A simple, well-established method is used for T cell differentiation from mESCs: coculture with OP9 cells ectopically expressing the Notch ligand Delta-like 1 (OP9-DL1) (Schmitt et al., 2004). On day 14 of the coculture, the mESC-derived cells contained CD4/CD8 double-negative (DN) T lymphocyte progenitors, and on day 20 they contained double-positive (DP) cells. When mESC-derived CD25+ DN progenitors were differentiated using deoxyguanosine-treated fetal thymic organ culture, they generated DP T cells and CD4 or CD8 single-positive (SP) T cells. Furthermore, when these thymic lobes containing mESC-derived T cells were implanted under the skin of sublethally irradiated Rag2-null mice, which are devoid of T and B lymphocytes, reconstitution with mESC-derived CD4 or CD8 SP cells was observed. Infection of these mice with lymphocytic choriomeningitis virus (LCMV) induced LCMV-specific cytotoxic T lymphocyte activity, indicating that mESC-derived mature T cells are capable of mounting an effective antigen-specific immune response. As for B cells, coculture with OP9 cells induced B lineage development (Nakano et al., 1994), and this was enhanced by addition of Flt-3 ligand (FL) from day 5 of the coculture (Cho et al., 1999). After 4 weeks, more than 90% of the cells were CD45R+ CD19+ B cells. In another report, knockdown of PU.1 by small interfering RNA in CD34+ cells produced by EB formation induced CD19+ CD43+ CD45+ progenitor B (pro-B) cells (Zou et al., 2005). These mESC-derived pro-B cells produced precursor B (pre-B) cell colonies after a week of culture in semisolid medium with IL-7 and IL-10, and a further 3-week culture enabled the pre-B cells to differentiate into mature B cells coexpressing immunoglobulin (Ig) M and CD19. The B cells produced by coculture with OP9 cells or by PU.1 knockdown in EB cells showed up-regulation of CD80 and secretion of IgM upon stimulation with lipopolysaccharide.
Further detailed functional analyses of mESC-derived B cells, such as immunoglobulin class switching, have so far not been performed.
Hematopoietic stem cells derived from human ESCs
Derivation of HSCs from hESCs, once successful, would have a great impact in both the clinical and basic research fields, given the wide range of potential applications. An unlimited supply of HSCs with various HLA and ABO blood types could be an ideal graft source for HSC transplantation, a starting material for the manufacture of mature blood cells for transfusion, a gene transfer target for both clinical and experimental purposes, and so forth.
As with mESCs, EB formation and coculture with feeder cells are the two major strategies used to produce hematopoietic cells from hESCs, although most of the protocols developed for mESCs cannot be applied to hESCs without significant modifications (Bhatia, 2007). For example, LIF is a critical factor for maintaining mESCs in an undifferentiated state, but hESC maintenance is not dependent on LIF; instead, bFGF is used for the maintenance of hESCs, whereas bFGF induces neural differentiation of mESCs (Ying et al., 2003). As for markers of the undifferentiated state, stage-specific embryonic antigen (SSEA)-3 and SSEA-4 are used for hESCs, while SSEA-1 is used for mESCs. For hematopoietic differentiation, longer cultures are needed for hESCs than for mESCs. Given that transplantation of hESC-derived cells into species-matched recipients is impossible, evaluation of hESC-derived HSCs largely depends on phenotypic assays, colony-forming assays, and in vivo transplantation models using animals such as immunodeficient mice; the NOD-SCID repopulating cell (SRC) assay is a standard in vivo measure of human HSC activity (Ueda et al., 2000). Kaufman et al. cultured hESCs on the murine bone marrow cell line S17 or the yolk sac endothelial cell line C166 in a medium containing FBS without any cytokines (Kaufman et al., 2001). This culture enabled hESCs to differentiate into progenitors capable of producing colonies with multiple hematopoietic lineages. As with somatic HSCs, these colony-forming cells were enriched in CD34+ cells. Vodyanik et al. demonstrated that, like mESCs, hESCs can be differentiated into CD34+ hematopoietic progenitors by coculture on OP9 cells (Vodyanik et al., 2005). When hESC-derived CD34+ cells were cultured on the murine bone marrow-derived cell line MS-5 in the presence of SCF, FL, IL-7, and IL-3, they generated both myeloid and lymphoid cells. Chadwick et al. formed EBs from hESCs and cultured them in the presence of SCF, FL, IL-3, IL-6, and G-CSF, with or without BMP-4 (Chadwick et al., 2003), and found that BMP-4 increased the number of hematopoietic progenitors from hESCs. The same group found that, in these culture conditions including BMP-4, primitive cells with the ability to differentiate into both hematopoietic and endothelial cells appear between day 7 and day 10 of the EB culture (Wang et al., 2004). These primitive cells expressed PECAM-1, Flk-1, and VE-cadherin, but not CD45 (CD45− PFV). In a later report, the group cultured CD45− PFV cells for 7 days in serum-containing medium supplemented with SCF, FL, G-CSF, IL-3, and IL-6 and differentiated them into CD45+ cells with SRC activity (Wang et al., 2005). These hESC-derived CD45+ cells were transplanted directly into the femurs of sublethally irradiated NOD-SCID mice. Even at 8 weeks after transplantation, hESC-derived SRCs were detected, indicating that HSCs with reconstituting ability had been obtained from the hESCs. However, these hESC-derived HSCs could not repopulate NOD-SCID mice when transplanted intravenously. Furthermore, hESC-derived HSCs showed lower levels of chimerism in the transplanted bone than somatic HSCs from human umbilical cord blood, and the same pattern was seen in the contralateral femur and other long bones. These results indicate that hESC-derived HSCs obtained using this method are distinct from somatic HSCs in terms of proliferative and migratory capacity.
Notably, the authors also mentioned that, unlike in mESCs, ectopic expression of HoxB4 in hESCs accelerated proliferation of hematopoietic progenitors but had no effect on the repopulating capacity of hESC-derived cells. Methods using coculture with feeder cells are also capable of generating hESC-derived HSCs. Tian et al. showed that hESCs cultured on S17 cells for 7 to 24 days differentiated into hematopoietic cells with SRC activity even when transplanted intravenously (Tian et al., 2006). They also performed secondary transplantation from the bone marrow of the primary recipients of hESC-derived HSCs into secondary recipient mice and confirmed the long-term repopulating ability of hESC-derived HSCs. As mentioned above, hESCs can differentiate into hematopoietic cells with SRC activity, but this activity remains low compared with that of somatic HSCs such as cord blood CD34+ cells. We can thus conclude that no bona fide methods have yet been established that reproducibly generate true HSCs from hESCs. Recently, derivation of HSCs with higher SRC activity using a cell line derived from the AGM region was reported (Ledran et al., 2008). In that report, cell lines from the AGM region or fetal liver, or primary cells from those organs, were used as feeders. All hESC-derived hematopoietic cells differentiated on these feeders were capable of repopulating immunodeficient mice when transplanted into the femurs of the recipients, and among the feeders the AGM-derived cell line AM20.1B4 was the best in terms of the SRC activity of the hESC-derived cells. When this cell line was used, the chimerism of the hESC-derived cells in the peripheral blood of the recipient mice reached 16%, which is higher than in previous reports. Considering these results, it may be important to place hESCs in an environment that closely mimics a hematopoietic niche in order to obtain true HSCs from them.
Neutrophils
Neutrophil transfusion can be beneficial for severely neutropenic patients with congenital diseases or after chemotherapy, if sufficient numbers of neutrophils are transfused at appropriate intervals. The current blood donation system, however, is incapable of providing sufficient amounts of neutrophils on schedule, given that the half-life of neutrophils ex vivo is less than 10 hours and multiple transfusions per day are therefore necessary to ensure effectiveness. Human ESC-derived neutrophils might provide a solution to these difficulties. They could also offer a new tool for drug discovery, drug toxicity monitoring, and so on. Recently, a method to obtain mature neutrophils of high purity from hESCs was developed (Yokoyama et al., 2009). The culture system consisted of two phases, EB formation and OP9 coculture, with different combinations of cytokines in each phase. For the formation of EBs, hESC colonies were detached with collagenase from the mouse embryonic fibroblasts used as feeder cells to maintain the hESCs. The removed colonies were then cultured in IMDM-based medium for HSC expansion (Suzuki et al., 2006) in a serum-free condition, which resulted in the initial formation of EBs within 24 hours. The resulting EBs were collected and cultured for 17 days in IMDM containing 15% FBS supplemented with BMP-4, SCF, FL, IL-6/IL-6 receptor fusion protein (FP6), and TPO. For the preparation of feeder cells, irradiated OP9 cells were plated onto gelatin-coated 6-well tissue culture plates at a density of 1.5×10^5/well 24 hours before use. The EBs were dissociated into single cells and suspended in IMDM containing 10% FBS and 10% horse serum supplemented with a combination of SCF, FL, FP6, IL-3, and TPO. Then, up to 5×10^5 EB-derived cells were seeded per well onto the OP9 cell layer. After 7 days, floating cells were collected, suspended in IMDM containing 10% FBS and G-CSF, and transferred onto newly irradiated OP9 cells. Terminally differentiated cells were harvested 6 or 7 days later.
As determined by morphology, most of the hESC-derived cells at day 7 of the final coculture with OP9 cells were myeloblasts and promyelocytes. On days 9 through 11, myeloblasts and metamyelocytes were dominant. On days 13 and 14, 70% to 80% of the total cell population were differentiated mature neutrophils and the remaining 10% to 20% were metamyelocytes. Transmission electron microscopy also revealed characteristic segmented nuclei and cytoplasmic granules. On days 13 and 14, Wright-Giemsa staining also revealed that up to 10% of the cells were monocyte- or macrophage-like cells, but no other cell lineages such as erythrocytes, megakaryocytes, or lymphocytes were observed throughout the culture. Thus, this differentiation protocol made it possible to obtain hESC-derived neutrophils at high purity. These hESC-derived neutrophils stained positively for myeloperoxidase, a major constituent of the primary granules of neutrophils. Biosynthesis of lactoferrin, a major constituent of the secondary granules, was analyzed by comparing mRNA expression in the hESC-derived cells with that in mature neutrophils from the peripheral blood and mononucleated cells from the bone marrow of healthy volunteers. Lactoferrin mRNA was expressed in hESC-derived cells as early as day 7 of the final induction culture on OP9 cells, peaked at day 10, and declined by days 13 and 14. This pattern is consistent with the documented pattern of lactoferrin biosynthesis (Rado et al., 1984). These patterns of morphological maturation and lactoferrin mRNA expression during the culture indicated that the hESC-derived cells differentiated into mature neutrophils by a process similar to physiologic neutrophil production, and thus this method could be used to investigate the differentiation process of neutrophils. Surface antigen expression of the hESC-derived cells was analyzed at different time points by flow cytometry. The pattern of antigen expression was almost consistent with that of normal neutrophil differentiation, except for some G-CSF-related changes. Almost all the cells expressed the common blood cell antigen CD45 from days 7 through 13. A small population expressed markers of immature hematopoietic cells such as CD34, CD117, and CD133 at day 7 but lost this expression by day 10. The common myeloid antigens CD33 and CD15 were highly expressed from days 7 through 13, whereas CD11b expression increased as maturation proceeded. CD13 is also a common myeloid antigen, but fewer than 20% of the cells expressed CD13 throughout the final culture. CD16 (Fcγ receptor [FcγR] III) is frequently used as a marker of mature neutrophils; it was already found on hESC-derived cells at day 7 and increased with maturation, consistent with the physiologic neutrophil maturation process. However, the proportion of CD16+ cells was lower than that of the morphology-defined mature neutrophils on day 13. The other Fc receptors, CD32 (FcγRII) and CD64 (FcγRI), were also expressed on hESC-derived neutrophils. CD14 was expressed in 20% to 25% of the cells on days 10 and 13. In normal peripheral blood, mature neutrophils express CD16 but not CD64 or CD14 (van de Winkel and Anderson, 1991; van Lochem et al., 2004), whereas some of the hESC-derived mature neutrophils expressed CD14 but not CD16, and most of the cells expressed CD64.
This aberrant expression pattern is similar to that of neutrophils harvested from healthy donors who received G-CSF (Carulli, 1997) and of neutrophils derived from bone marrow CD34+ cells in vitro by G-CSF stimulation, and thus, hESC-derived neutrophils were thought to be also affected by G-CSF during the final culture. The high purity and yield enabled subsequent functional analyses of the hESC-derived neutrophils. As seen in the expression of surface antigens, the G-CSF used in the induction culture might affect the functions of hESC-derived neutrophils. Therefore, hESC-derived neutrophils were restimulated with G-CSF before the assay and compared with peripheral blood neutrophils with and without G-CSF stimulation. Chemotaxis is the first step of the innate immune response by neutrophils and is important for neutrophils to move effectively to the inflammatory site. Chemotaxis was analyzed using a modified Boyden chamber method (Harvath et al., 1980). In this method, reaction medium with or without the chemotactic factor formyl-Met-Leu-Phe (fMLP) was placed into each well of a 24-well plate, and a semipermeable membrane with 3.0-μm pores was placed into each well to divide the well into upper and lower sections. Neutrophils were added to the upper section and allowed to migrate from the upper to the lower side of the membrane. After incubation, the number of neutrophils on the lower side of the membrane was counted. The neutrophils that migrated to the lower side without fMLP were considered to have migrated randomly. This random migration of peripheral blood neutrophils was accelerated by G-CSF, but, despite the stimulation by G-CSF, the hESC-derived neutrophils showed an extent of random migration that was only similar to that of the peripheral blood neutrophils without G-CSF stimulation. The number of cells that showed chemotaxis to fMLP was calculated by subtracting the number of migrated cells without fMLP from that of migrated cells with fMLP. This chemotaxis was not significantly different between hESC-derived neutrophils and peripheral blood neutrophils with or without G-CSF stimulation. The next step of the innate immune response by neutrophils is phagocytosis; subsequently, killing of ingested microorganisms occurs, depending mainly on superoxide production. We adopted a unique assay that simultaneously visualizes phagocytosis and superoxide production. Autoclaved baker's yeast was suspended in 0.5% nitroblue tetrazolium (NBT) solution (0.5% NBT and 0.85% sodium chloride in distilled water). When these NBT-coated yeasts are ingested by neutrophils, the yeasts change their color from brown to purple or black because of reduction of NBT and formation of formazan in response to superoxide produced by neutrophils. We incubated these NBT-coated yeasts with hESC-derived and peripheral blood neutrophils. Ingested yeast cells that changed color within the cells were NBT-reaction positive. The difference in the number of positive yeasts yielded by the hESC-derived neutrophils and peripheral blood neutrophils was not significant. G-CSF stimulation had no effect on the peripheral neutrophils in this assay. Superoxide production by oxidative burst is the most important function for neutrophils to perform efficient bactericidal activity. In addition to the above-mentioned NBT reduction, we used dihydrorhodamine 123 (DHR) to evaluate superoxide production.
In the test, DHR was added to the neutrophil suspension with or without stimulation by phorbol myristate acetate (PMA), and rhodamine fluorescence from the oxidized DHR was detected by flow cytometry (Richardson et al., 1998). When DHR was added to the neutrophil suspensions, rhodamine-specific fluorescence was detected, indicating basal production of superoxide without PMA stimulation. Stimulation by PMA strongly increased rhodamine fluorescence in hESC-derived neutrophils and peripheral blood neutrophils, indicating that hESCderived neutrophils had sufficient capability of superoxide production and adequate response to stimulation. Finally, we evaluated actual bactericidal activity in vitro using viable Escherichia coli (Decleva et al., 2006). Opsonized E coli were added to the neutrophil suspension at a neutrophil/bacteria ratio of 2:1 or to the control medium. After 1 hour of incubation, the neutrophils were lysed, and the samples were added to molten tryptic soy broth with 1.5% agar and plated on dishes. The colonies derived from the surviving E coli were counted after overnight incubation. When the E coli were incubated with hESC-derived neutrophils and peripheral blood neutrophils with or without G-CSF stimulation, the numbers of the colonies were similarly reduced to approximately 40% those of the control, indicating that the hESC-derived neutrophils had bactericidal activity against E coli comparable to that of normal neutrophils. Generation of functional neutrophils using a feeder-free culture system was also reported by another group (Saeki et al., 2009). In this method, EBs were cultured in IMDM supplemented with FBS, insulin-like growth factor II, VEGF, SCF, FL, TPO, and G-CSF. After 3 days, the EBs were transferred onto gelatin-coated dish and cultured in the same medium as that of the EB culture. After 2 weeks of adherent culture on the gelatin-coated dish, sac-like structures (SLSs) emerged, and within a few days, round cells appeared in the sacs. These round cells had the potential to produce granulocyte, macrophage, or erythroid colonies. After 4 to 6 weeks of adherent culture, immature and mature myeloid cells were obtained, including mature neutrophils, although the purity of the mature neutrophils was relatively low (30%-50%). These hESC-derived neutrophils showed chemotaxis to fMLP and IL-8, phagocytosis of zymosan, and NBT-reduction. Interestingly, the authors of this report evaluated the chemotactic activity in vivo using a zymosan-induced air pouch inflammation model (Doshi et al., 2006). In this model, neutropenia was induced in immunodeficient NOD-SCID/ c null (NOG) mice by injection of 5-fluorouracil, and a subcutaneous air pouch was formed on the back of the NOG mice. After 3 days, 2×10 6 hESC-derived or human cord blood CD34-positive cells were transfused. Injection of both zymosan and IL-1 into the air pouch caused inflammation of the pouch, and accumulation of neutrophils in the pouch was observed. Among the massive murine neutrophils, hESC-derived neutrophils accounted for 0.54% of the total cells that were accumulated in the pouch. This percentage was the same as that for cord blood CD34 + cells. For the establishment of fundamentals for clinical application, in vivo analysis of neutrophil functions, especially the bactericidal activity and prolongation of survival of infected mice by neutrophil transfusion, is needed. www.intechopen.com
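As a small worked example of how the colony-count readout above translates into a killing estimate, the sketch below computes the surviving fraction relative to the neutrophil-free control. The plate counts used are hypothetical placeholders, not data from the study.

```python
def surviving_fraction(colonies_with_neutrophils, colonies_control):
    """Fraction of E. coli colonies surviving relative to the neutrophil-free
    control; ~0.4 corresponds to the ~40% survival described in the text."""
    return colonies_with_neutrophils / colonies_control

# Hypothetical plate counts for illustration only:
print(f"{surviving_fraction(colonies_with_neutrophils=82, colonies_control=205):.2f}")  # -> 0.40
```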
Erythrocytes
Adult-type erythrocytes derived from hESCs could be a new and ideal transfusion source if large-scale production can be achieved, given that they could be free from infectious organisms. Furthermore, erythrocytes derived from hESCs of rare blood-type donors might resolve the difficulty of availability of such types of RBCs. In normal human erythroid development, the expression pattern of hemoglobin subunits in erythrocytes changes according to the developmental stage. In primitive yolk sac erythropoiesis, embryonic-type ε- and ζ-globin are expressed. In definitive erythropoiesis, ζ-globin and ε-globin switch to fetal-type α-globin and γ-globin, respectively, and γ-globin further switches to adult-type β-globin (Peschle et al., 1985). When evaluating erythrocytes derived from hESCs, in addition to the efficiency of the induction culture, it is important to examine the globin expression pattern to determine the erythrocyte type. As described in section 3, culturing EBs in the presence of SCF, FL, IL-3, IL-6, G-CSF, and BMP-4 accelerates the generation of hematopoietic progenitors (Chadwick et al., 2003), and when VEGF was added to these basal cytokines, both the number and the frequency of erythroid colonies derived from the EBs were augmented. Evaluation of globin expression by detection of the mRNA of each globin revealed that the cells from EBs treated with only the basal cytokines expressed only ε-globin, but addition of VEGF to the basal cytokines promoted expression of both ε- and ζ-globins. γ-globin expression was not proven in either culture condition. Thus, the erythropoiesis in the EBs cultured with this combination of cytokines was thought to recapitulate primitive erythropoiesis with embryonic globin expression. However, the erythrocytes picked up from the EB-derived erythroid colonies in a semisolid culture expressed γ-globin in addition to ε-globin, but not β-globin, indicating the possibility of globin switch during the colony-formation culture. Expression of embryonic and fetal globins, but not adult β-globin, in hESC-derived erythrocytes was also reported by different groups (Chang et al., 2006; Olivier et al., 2006). Other groups showed successful expression of β-globin in hESC-derived erythrocytes. Ma et al. developed an efficient method of inducing erythrocytes using coculture with feeder cells (Ma et al., 2007; Ma et al., 2008). In this method, the hESC colonies were cultured on irradiated primary murine fetal liver stromal cells without any cytokines. At days 11 to 12, hESC-derived cells formed SLSs containing hematopoietic cobblestone-like cells. On day 14, 1×10^4 original hESCs had given rise to 1×10^6 total cells, including 5×10^3 cobblestone-like cells.
When the mixture of stromal cells and all hESC-derived cells was prepared as a single-cell suspension and cultured in a semi-solid medium with EPO, SCF, IL-3, IL-6, TPO, and G-CSF, it generated mainly erythroid colonies including erythroid bursts, although approximately 25% were non-erythroid colonies. Erythroid bursts accounted for about 5% of the total colonies, and each large erythroid burst contained approximately 2×10^5 erythroid cells. Importantly, about 60% of the hemoglobin-containing erythroid cells in each erythroid burst derived from hESCs after 12-day coculture on murine fetal liver stromal cells expressed adult β-globin, and the proportion reached nearly 100% when the coculture was extended to 18 days. In contrast, the proportion of ε-globin-expressing erythroid cells in each erythroid burst decreased from 100% to 60%. Globin switch could also be observed when the day-12 erythroid bursts were transferred to a suspension culture for an additional 6 days; the expression of ε-globin decreased, whereas β-globin expression increased to about 100%, and, notably, β-globin-expressing enucleated RBCs were observed. The hESC-derived erythroid cells could function as oxygen carriers, showing oxygen dissociation curves similar to those of human cord blood RBCs, although their curves were left-shifted when compared to those of adult peripheral blood RBCs. The hESC-derived erythroid cells had higher glucose-6-phosphate dehydrogenase activity than did the adult peripheral blood RBCs. Lu et al. showed two methods of producing erythrocytes using hemangioblasts derived from hESCs as starting materials: one, the massive production of nucleated erythrocytes without adult β-globin expression, and the other, induction of enucleation of hESC-derived erythrocytes with some β-globin expression (Lu et al., 2008). By the method for massive production, they generated 10^10 to 10^11 erythrocytes from one 6-well plate of hESCs. In the first step, EBs were formed and cultured in serum-free medium containing BMP-4, VEGF, and bFGF. After 48 hours, half the medium was exchanged for fresh medium with the same cytokines and additional SCF, TPO, and FL, and the EBs were cultured for a further 36 hours. In the second step, the EBs were then dissociated into single cells, which were cultured for 10 days in blast-colony growth medium (BGM) consisting of IMDM, 1.0% methylcellulose, bovine serum albumin, insulin, iron-saturated transferrin, GM-CSF, IL-3, IL-6, G-CSF, EPO, SCF, VEGF, and BMP-4. Depending on the hESC line, TPO and FL were added to the cytokine combination. This culture condition induced and expanded the hESC-derived hemangioblasts that had been described in a previous report. To optimize the method, they used a fusion protein consisting of HoxB4 and triple protein-transduction domains (tPTD-HoxB4). The PTD used here was a modified form of the PTD embedded in the transactivator of transcription protein of the human immunodeficiency virus (Ho et al., 2001). Maximum efficiency was achieved when tPTD-HoxB4 and bFGF were added to the BGM. In the third step, an equal volume of BGM containing additional EPO was added to the existing BGM, and the cells were further cultured and differentiated into erythroid cells for 5 days. The erythroid cells were then transferred to serum-free medium containing SCF, EPO, and 0.5% methylcellulose, and expanded for 7 days.
In the final step, for the purification of the erythroid cells, the resulting cells were plated in tissue culture flasks overnight to allow nonerythroid cells to attach to the flasks, and the nonadherent cells were collected. By this method, numerous erythroid cells (10^10 to 10^11 cells from one 6-well plate of hESCs) could be obtained; however, these hESC-derived erythroid cells were nucleated and contained embryonic ζ- and ε-globins and fetal Gγ-globin, but neither fetal Aγ-globin nor adult β-globin. Nevertheless, the hESC-derived erythroid cells showed an oxygen equilibrium curve comparable to that of normal adult RBCs. A modification of this method allowed enucleated hESC-derived erythrocytes with adult β-globin to be obtained. The protocols in the first step and up to day 7 in the second step were the same. After 7 days of culture in the second step, the cells were cultured in serum-free medium containing bovine serum albumin, inositol, folic acid, transferrin, insulin, ferrous nitrate, and ferrous sulfate, and supplemented with hydrocortisone, SCF, IL-3, and EPO. After 7 days, SCF and IL-3 were removed. In these conditions, 10% to 30% of the hESC-derived erythrocytes were enucleated. Importantly, considering that the hESCs were maintained without feeder cells, the enucleated erythrocytes were generated in completely feeder-free conditions. By this method, however, hESCs showed expansion of only 30- to 50-fold. Furthermore, even after enucleation, the hESC-derived erythrocytes expressed mainly embryonic ζ- and ε-globins and fetal γ-globin, but not β-globin. However, survival and enucleation of the erythrocytes were enhanced when they were cocultured on OP9 cells, and long-term culture of the cells induced adult β-globin expression from 0% at day 17 to 16% at day 28, indicating the potential of globin switch in hESC-derived erythrocytes. Depending on the method, the expression patterns of the hemoglobin subunits were different. Comparison of the methods would be useful for understanding the mechanisms of erythroid development and globin switch. If hESC-derived fetal erythrocytes could be successfully changed to adult erythrocytes and high efficiency achieved, it would open up the way to clinical use.
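To put the reported per-plate yields in perspective, a rough scale-up estimate can be made. In the sketch below, the per-plate yield range comes from the text; the figure of roughly 2×10^12 RBCs per transfusion unit is our assumed typical value, not a number from the source.

```python
# Back-of-envelope estimate of plates needed per transfusion unit.
RBCS_PER_UNIT = 2e12  # assumed typical RBC count in one transfusion unit

for yield_per_plate in (1e10, 1e11):  # reported range per 6-well plate of hESCs
    plates = RBCS_PER_UNIT / yield_per_plate
    print(f"at {yield_per_plate:.0e} cells/plate: ~{plates:.0f} plates per unit")
# -> ~200 plates at 1e10/plate; ~20 plates at 1e11/plate
```

Even at the upper end of the reported range, tens of plates would be needed per unit, which illustrates why yield remains the central obstacle discussed later in this chapter.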
Megakaryocytes and platelets
Platelet derivation from hESCs is also of interest for transfusion medicine. Platelets can be stored for only 3 to 4 days, and more donors are needed to secure sufficient amounts of platelet concentrates than are needed for RBCs. Two groups have so far reported specific methods for megakaryocyte/platelet derivation from hESCs, and both used coculture with feeder cells (Gaur et al., 2006; Takayama et al., 2008). In the first report, small clumps of hESCs were cultured on OP9 cells in the presence of 100 ng/mL TPO. The cells were transferred onto fresh OP9 cells on days 7 and 11. After 15 to 17 days of culture, 20% to 60% of the hESC-derived cells were positive for both CD41a and CD42b, which are representative markers of the megakaryocyte lineage. In this culture, 1×10^5 starting hESCs yielded 1 to 4×10^4 CD41a+CD42b+ cells. These cells showed megakaryocytic morphology with polyploidy. The hESC-derived megakaryocytes showed a substantial increase of fibrinogen-binding capacity compared to baseline in response to thrombin receptor-activating agonists or adenosine diphosphate. This result indicated the presence of appropriate inside-out signaling of integrin αIIbβ3 in hESC-derived megakaryocytes, which controls the affinity and avidity of integrin αIIbβ3 for fibrinogen. Moreover, when hESC-derived megakaryocytes were plated on fibrinogen-coated glass cover slips, they showed extensive lamellipodia formation, F-actin formation, and vinculin localization, indicating proper outside-in signaling of integrin αIIbβ3. However, these apparently functional megakaryocytes rarely differentiated to proplatelets. These data imply that terminal differentiation to mature platelets might not be observed in this culture system. On the other hand, Takayama et al. confirmed the first report of the derivation of megakaryocytes from hESCs using coculture with OP9 cells, and developed a new method of generating megakaryocytes capable of releasing platelets (Takayama et al., 2008). Coculture of hESCs on either C3H10T1/2 or OP9 cells without transfer to new feeders for 2 weeks led to the emergence of SLSs. Addition of VEGF to the culture medium increased the number of SLSs. These SLSs contained hematopoietic progenitors with multilineage colony-forming potential, and those progenitors could be further differentiated into mature proplatelet-forming megakaryocytes when transferred onto new feeder cells and cultured in the presence of TPO for an additional 7 to 9 days. CD41a+CD42b+ platelets were then detected in the culture supernatants. The maximum yield was achieved when the medium was supplemented with SCF and heparin in addition to TPO, resulting in approximately 5×10^6 platelets produced from 10^5 hESCs. The hESC-derived platelets had appropriate inside-out and outside-in signaling.
Natural killer cells
Natural killer (NK) cells have cytotoxic enzymes and play a major role in innate immunity. They also have antitumor activity, and the possibility of safe and efficacious adoptive immunotherapy using NK cells has been shown in the setting of allogeneic hematopoietic stem cell transplantation and in studies of NK cell transfusion for malignancies (Ljunggren and Malmberg, 2007; Miller et al., 2005; Ruggeri et al., 2002). Derivation of NK cells with antitumor activity from hESCs could be a possible means of immunotherapy. The most functional hESC-derived NK cells so far were generated by sequential coculture on different feeder cells (Woll et al., 2005; Woll et al., 2009). First, hESCs were cultured on S17 cells or the murine bone marrow stromal cell line M210-B4 for 17 to 20 days. After the first coculture, CD34+CD45+ cells were sorted and transferred to a murine fetal liver-derived stromal cell line, AFT024, and cocultured in medium containing human AB blood-type serum with a cytokine cocktail consisting of IL-3, SCF, IL-15, FL, and IL-7. At 3 to 5 weeks of culture, approximately 70% of the hESC-derived cells were CD45- and CD56-positive NK cells, with expression of receptors typically found on adult NK cells such as CD16, CD94, NKp46, and killer-cell Ig-like receptors (KIR or CD158). Interestingly, hESC-derived NK cells showed higher cytolytic activity against various tumor and leukemia cell lines than did NK cells derived from cord blood progenitors under the same conditions. Higher antileukemic activity in vivo with hESC-derived NK cells was also demonstrated in a mouse model of human leukemia using the human erythroleukemia cell line K562. These results indicate that hESC-derived NK cells are potentially a good source for immunotherapy.
T and B lymphocytes and other lineages
T and B cells have central roles in acquired immunity, but derivation of these cells from hESCs could be more difficult than that of cells of other lineages. As described in section 2.2, mESCs can be easily differentiated into T cells using OP9-DL1 cells. However, Martin et al. reported that hESC-derived CD34+ progenitors could not be differentiated into the T-cell lineage in vitro even by coculture with OP9-DL1 cells or by fetal thymic organ culture (Martin et al., 2008). The first successful specific derivation of mature T cells from hESCs was achieved by an in vivo procedure using SCID-hu mice (Galic et al., 2006). The SCID-hu mice were constructed by insertion of human fetal thymus and liver under the renal capsule of SCID mice, and provide the environment for T lineage differentiation (Akkina et al., 1994; McCune et al., 1988). Human ESC-derived CD34+ or CD34−CD133+ hematopoietic progenitors, obtained by coculture with OP9 cells for 7 to 14 days, were injected into thymus/liver implants in sublethally irradiated SCID-hu mice. After 3 to 5 weeks, biopsy of the thymus/liver implants demonstrated repopulation of hESC-derived cells in the implants, accounting for up to 6.2% of the total cells. Phenotypic analysis revealed differentiation of hESCs into immature CD4+CD8+ T cells and mature CD4+CD8− and CD8+CD4− T cells. Later, the same group modified the methods and adopted EB formation instead of coculture with OP9 cells (Galic et al., 2009), and they showed normal V(D)J recombination during differentiation of hESC-derived T cells and CD25 expression on the cells in response to stimulation. However, complicated and cumbersome in vivo procedures, particularly the use of human fetal thymus and liver, obviously hamper the further progress of the study of hESC-derived T-cell development.
Contrary to the previous report by Martin et al., Timmermans et al. reported an in vitro method of T cell differentiation using coculture with OP9-DL1 cells (Timmermans et al., 2009). In this method, hESCs were cocultured on OP9 cells. After 10 to 12 days, endothelium-lined cell clumps emerged that resembled the hESC-derived SLSs described in Takayama's method of megakaryocyte differentiation. These structures were transferred onto OP9-DL1 cells and cultured in medium supplemented with FL, IL-7, and SCF. After 14 days of coculture on OP9-DL1 cells, CD4 single-positive (SP) cells and CD4 CD8αα double-positive (DP) cells were detected within the cytoplasmic CD3+CD5+ cell population. After 21 days, CD4 CD8αβ DP cells appeared, and on day 28, DP cells accounted for 25% of the cells. After 30 days of culture, 15% to 50% of the hESC-derived cells were T lineage cells expressing surface CD3 and TCRαβ. In addition to the CD3+TCRαβ+ cells, CD3+TCRγδ+ cells also emerged. These results suggested that hESC-derived T cells differentiated phenotypically in a way similar to that in thymic development.
In response to stimulation, hESC-derived T cells showed a 2,500-fold increase, and all surface CD3+ T cells had the mature CD27+CD1a− phenotype. Restimulation of the expanded T cells induced interferon-γ production. These results indicated that phenotypically and functionally mature T cells could be generated from hESCs, although detailed functional analyses have yet to be performed. B cell differentiation from hESCs is also challenging compared with that of other lineages. No effective methods for achieving B cell differentiation from hESCs have so far been devised. Martin et al. reported that hESC-derived CD34+ hematopoietic progenitors lacked B lineage differentiation capability when cocultured with MS-5 cells, which support B cell differentiation from cord blood CD34+ progenitors (Martin et al., 2008). Thus, an additional cue is required to establish an environment sufficient for B cell differentiation, in addition to the cytokines and feeders that have been used so far. Given the success in B cell differentiation from mESCs, differences between mESCs and hESCs or species specificities of the feeder cell-expressed proteins may explain this hurdle. Other lineages of blood cells, such as macrophages (Anderson et al., 2006) and dendritic cells (Slukvin et al., 2006), can also be generated from hESCs. As described in this section, hESC-derived mature blood cells, including neutrophils, erythrocytes, megakaryocytes, and NK cells, are commonly very similar to their normal counterparts in morphology, phenotype, and function. Therefore, if sufficient amounts of mature blood cells derived from hESCs can be obtained, they can be expected to be used for a variety of purposes, for example, as substitutes for normal blood cells for in vitro drug screening and as blood transfusion sources.
Future directions
Coculture with feeder cells and EB formation are the two major strategies for hematopoietic differentiation from hESCs, commonly used to generate both progenitors and mature blood cells. However, no methods for generating bona fide HSCs from hESCs have yet been established, despite the fact that feeder cells derived from bone marrow, fetal liver, and AGM should provide a hematopoietic microenvironment similar to the physiologic one. As regards the preparation of an ideal microenvironment for inducing HSCs from hESCs, the combined use of an in vitro culture system with an animal body may prove a powerful method. Recently, a sensational report of the generation of a rat pancreas in mouse was published (Kobayashi et al., 2010). Injection of rat wild-type iPSCs into blastocysts of a Pdx1-null mouse, which is devoid of a pancreas and dies soon after birth, resulted in the development of a compensatory pancreas entirely derived from rat iPSCs. This result indicated that when a developmental niche for a certain organ is empty, pluripotent stem cell-derived cells can occupy the niche and compensate for the missing organ. Considering application of this finding to hematopoiesis, it may be possible to obtain pluripotent stem cell-derived HSCs using a mouse that is devoid of HSCs, for example, GATA2- (Tsai et al., 1994), SCL/Tal1- (Porcher et al., 1996), Runx1/AML1- (Okuda et al., 1996), or Notch1- (Kumano et al., 2003) null mice. This approach of interspecific blastocyst complementation might contribute to overcoming the issue of yield, which still represents a high barrier to reaching clinical applications. If large animals, such as pigs, without hematopoietic ability become available, injection of human ESCs or iPSCs into their blastocysts might make it possible to obtain massive amounts of human HSCs and mature blood cells, although contamination with xenogeneic constituents is still a problem, and ethical arguments must be addressed before proceeding to the generation of human-animal hybrid embryos, particularly given that human cells could be differentiated into mature cells other than blood cells in the animal. To achieve order-of-magnitude increases in cell number yields, which are necessary for virtually all the protocols hitherto reported, one potential goal is the generation of progenitor cell lines that can proliferate infinitely and produce mature blood cells. As described in section 2.2, mESC-derived erythroid progenitor lines could differentiate into functional mature red blood cells both in vitro and in vivo and ameliorate anemia in mice. Although these erythroid progenitor lines were generated by coculturing with feeder cells under cytokine stimulation, genetic manipulation of hESCs or their progenies can also be considered. Gene manipulation carries a risk of causing tumorigenesis; however, this concern is much smaller in the case of RBCs and platelets, given that these are anucleate cells.
With the accumulated findings of hematopoietic differentiation from hESCs, hESC-derived HSCs and mature blood cells are now, or will soon be, good resources for functional analyses, drug-screening tests, research into the differentiation process, and so forth. Remarkable progress in this field is continuously being made, which is encouraging for the achievement of clinical application of hESC-derived blood cells in the not-too-distant future.
Electrical transport properties of atomically thin WSe2 using perpendicular magnetic anisotropy metal contacts
Tungsten diselenide (WSe2) shows excellent properties and has become a very promising material among two-dimensional semiconductors. A wide band gap and large spin-orbit coupling, along with the natural lack of inversion symmetry in monolayer WSe2, make it an efficient material for spintronics, optoelectronics and valleytronics applications. In this work, we report the electrical transport properties of a monolayer WSe2-based field effect transistor with much-needed multilayer Co/Pt ferromagnetic electrodes exhibiting perpendicular magnetic anisotropy. We studied the contact behaviour by performing I-V curve measurements and estimating Schottky barrier heights (SBHs). The SBHs estimated from the experimental data are found to be comparatively small, without using any tunnel barrier. This work expands the current understanding of WSe2-based devices and gives insight into the electrical behaviour of Co/Pt metal contacts, which can open great possibilities for spintronic/valleytronic applications.
Scaling issues and short-channel effects in silicon-based electronics and the exfoliation of graphene in atomically thin layer form have led to the quest for two-dimensional (2D) semiconducting materials 1,2. Transition metal dichalcogenides (TMDs) are a large family of 2D layered materials, which show various physical properties ranging from semiconducting to superconducting 3. Among these materials, semiconductors such as MoS2 and WSe2 have drawn considerable interest due to their direct band gap in monolayer form associated with attractive electronic, mechanical, optical, and valleytronic properties, and they have therefore been proposed for various technological applications 2,3,4. These materials are among the best candidates for valleytronic applications because in monolayer form they naturally lack inversion symmetry and enable easy dynamical control of the valley degree of freedom. Although all of them show a significant amount of spin-orbit splitting in their valence bands, the splitting for WSe2 is larger compared to other TMDs due to its heavier atoms 5. Bands with large spin splitting can make WSe2 a promising candidate for studying valley-dependent spin properties in spintronics/valleytronics. Due to the semiconducting nature of WSe2 and the formation of a metal-semiconductor junction during device fabrication, these devices show large Schottky barriers 6. A large Schottky barrier height (SBH) impedes the injection of charge carriers and could make it difficult to observe valley-dependent spin signals. In order to understand and improve the electrical properties of WSe2-based field effect transistors (FETs), researchers have used metal electrodes with different work functions, intentionally doped WSe2 channels or electrodes, and used 2D electrodes and tunnel barriers 7,8,9,10. Since the spin polarization in WSe2 is perpendicular to the sample plane, perpendicular magnetic anisotropy (PMA) electrodes are a prerequisite to detect spin-valley signals and integrate WSe2 into valleytronic/spintronic devices. There are very limited reports in the literature that discuss the transport properties of monolayer WSe2 FETs using ferromagnetic electrodes (especially PMA electrodes) 9. As the transport properties of WSe2 depend on the growth method, quality of the interface, metal work function and other fabrication procedures, it is very important to explore these properties further with different metal electrodes. In this paper, we fabricated FETs using salt-assisted CVD-grown single-layer WSe2 channels and multilayer Co/Pt metal electrodes, which show PMA character. We studied the electrical properties of back-gated monolayer WSe2 at different temperatures.
Atomically thin WSe2 was grown via the salt-assisted chemical vapor deposition (CVD) technique on an n-type doped Si/SiO2 substrate, where the thickness of the thermally oxidized SiO2 was ~285 nm. TMDs grown using this method were found to show excellent physical properties 11,12,13,14. To create WSe2 channels (6×4 µm^2), we spin-coated TGMR resist (negative tone) and performed electron-beam (EB) lithography. Protecting the WSe2 channel with TGMR, we etched unwanted WSe2 from the substrate using O2 plasma etching. Later, the TGMR was lifted by wet etching using N-Methyl-2-pyrrolidone and rinsed with acetone and isopropyl alcohol (IPA). Once the channel was formed, we deposited Co/Pt multilayer (PMA) electrodes via magnetron sputtering at room temperature 15. We confirmed the PMA of the electrodes on the SiO2 substrate 15 and on the WSe2 sample by Hall measurements with a Hall bar structure. Ti(3 nm)/Au(60 nm) was evaporated via the EB deposition technique to form pads in our FET devices. Confirmation of the single-layer WSe2 channels was done by Raman spectroscopy using laser light with a wavelength of 488 nm. Multi-terminal electrical measurements were carried out using a helium-free cryostat.
A Hall bar structure was patterned using EB lithography on single-layer WSe2 grown on a SiO2/Si substrate to confirm the PMA. A multilayer [Co(0.5)/Pt(3.4)]2 structure (numbers in parentheses are the layer thicknesses in nm) was sputtered at room temperature, and Hall measurements were carried out using a physical property measurement system (Quantum Design, USA). The Hall bar and measurement set-up are shown in Fig. 1(a). A negligible increase in IDS with back-gate voltage reflects possible Fermi level pinning 16. Strong electrostatic gating is required to make it clear whether the device is n-type, p-type or showing ambipolar behaviour. Usually, monolayer WSe2 has been reported to show ambipolar behaviour; however, in the case of heavy metal electrodes such as Pd, it was found to exhibit p-type behaviour, as heavy metals have a large work function, which can inject hole carriers into the valence band of WSe2 16. For low work function metal contacts such as Al, WSe2 was reported to show n-type device characteristics 17.
To shed light on the Schottky barrier formed at the metal-semiconductor interface, we performed IDS-VDS curve measurements as a function of temperature in our back-gated device. The SBH was calculated by employing the thermionic emission model modified for 2D materials 18, given by

IDS = A A* T^(3/2) exp[-(q/(kB T))(ΦB - VDS/n)].    (1)

It is anticipated that when contacts are multilayer structures, the work function at the interface is modified and is given by the effective work function 19. This can affect the SBH at the interface. It is worth noting that the value of the SBH is almost the same at 0 and 10 V; there is hardly any modulation using the gate, which was also reflected in the I-V curves in Fig. 3.
Fig. 1(b) shows the perpendicular field (H⊥) dependence of the Hall resistance Rxy at 5 K. The plot shows a sharp magnetic transition for field sweeps in the up and down directions with clear hysteresis, which confirms the PMA of the multilayer Co/Pt superlattice structure.
Fig. 2 shows four-probe I-V curve measurements performed at room temperature to calculate the channel resistance of monolayer WSe2. A bias voltage of 2 V was applied at the outer electrodes, and the current flowing through the outer electrodes was recorded as a function of the voltage drop between the two inner electrodes. The four-probe I-V curve shows linear behaviour over the entire range of applied bias voltage. The channel resistance estimated from the results in Fig. 2 is of the order of several hundred kiloohms. The high channel resistance of WSe2 is indicative of the large band gap in these materials.
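In such a four-probe geometry, the channel resistance is simply the inverse slope of the measured I-V line. The sketch below illustrates this extraction on synthetic data; the numbers are placeholders, not measurements from this study.

```python
import numpy as np

# Four-probe resistance extraction: current I is forced through the outer
# probes while voltage V is sensed between the inner probes, so the channel
# resistance is 1/slope of the linear fit of I vs. V.
rng = np.random.default_rng(0)
V = np.linspace(-2.0, 2.0, 41)                 # inner-probe voltage drop (V)
I = V / 3.0e5 + rng.normal(0.0, 1e-8, V.size)  # ~300 kOhm channel + noise

slope, offset = np.polyfit(V, I, 1)            # I = slope*V + offset
print(f"channel resistance ~ {1.0 / slope / 1e3:.0f} kOhm")
```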
Here A, A* and kB are the contact surface area, the 2D-equivalent Richardson constant and the Boltzmann constant, respectively; the ideality factor is denoted by n and the Schottky barrier height (in eV) by ΦB. Equation (1) can be rearranged in the form given below:

ln(IDS/T^(3/2)) = ln(A A*) - EA/(kB T),    (2)

where EA is an activation energy given as EA = q(ΦB - VDS/n). To contribute current in the FET device, charge carriers should overcome the activation energy to cross the barrier and move through the WSe2 channel. The SBH can be estimated experimentally from the Arrhenius plot, ln(IDS/T^(3/2)) vs. 1000/T, as shown in Fig. 4(a). The slope of the plot was estimated by fitting equation (2) to the experimental data for various values of VDS at Vg = 0 V. The slope is given by -EA/(1000 kB) and is plotted as a function of VDS in Fig. 4(b) and (c) for Vg = 0 and 10 V, respectively. The intercepts of the linear fits to the experimental data give the SBH. The error for the SBH at 10 V is larger due to scattered data points. The value of the SBH was found to be 36.7 and 38.3 meV for Vg = 0 and 10 V, respectively. The observed value of the SBH is smaller than those of various metal contacts used previously to fabricate WSe2-based FETs: the reported SBH values for Pt 9 and Pd 6 contacts are ~239 meV and ~325 meV, respectively. We used multilayer [Co/Pt] metal contacts.
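The two-step fit implied by Equations (1) and (2) can be expressed compactly in code: first, the slope of ln(IDS/T^(3/2)) versus 1000/T at each bias gives the activation energy EA; second, the VDS -> 0 intercept of EA versus VDS gives ΦB. The sketch below demonstrates this on synthetic data generated from Equation (1); the barrier height and ideality factor used are placeholder values, not the fitted parameters of this study.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(T, I_ds):
    """Fit ln(I_DS / T^1.5) vs. 1000/T; the slope is -E_A/(1000*k_B)."""
    x = 1000.0 / np.asarray(T)
    y = np.log(np.asarray(I_ds) / np.asarray(T) ** 1.5)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * 1000.0 * K_B_EV  # E_A in eV

def schottky_barrier(V_ds, E_a):
    """E_A = q*(Phi_B - V_DS/n), so the V_DS -> 0 intercept gives Phi_B."""
    _, intercept = np.polyfit(V_ds, E_a, 1)
    return intercept

# Synthetic I_DS(T, V_DS) generated from Eq. (1) with placeholder parameters:
phi_b_true, n = 0.24, 2.0                           # assumed Phi_B (eV) and n
T = np.array([200.0, 225.0, 250.0, 275.0, 300.0])   # temperatures (K)
V_ds = np.array([0.05, 0.10, 0.15, 0.20, 0.25])     # bias voltages (V)

E_a = np.array([
    activation_energy(T, T**1.5 * np.exp(-(phi_b_true - v / n) / (K_B_EV * T)))
    for v in V_ds
])
print(f"extracted SBH ~ {schottky_barrier(V_ds, E_a) * 1e3:.0f} meV")  # -> ~240 meV
```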
Fig. 3. (a) IDS vs VDS curves as a function of back-gate voltage (Vg) recorded at room temperature. (b) Vg dependence of IDS at different bias voltages (VDS).
De Novo Reconstruction of Transcriptome Identified Long Non-Coding RNA Regulator of Aging-Related Brown Adipose Tissue Whitening in Rabbits
Simple Summary

Brown adipose tissues (BATs) undergo conversion to white adipose tissues (WATs) with age. Long non-coding RNAs (lncRNAs) are widely involved in adipose biology. The rabbit is an ideal model for studying the dynamics of the transformation from BATs to WATs. However, our knowledge of the lncRNAs that mediate this transformation in rabbits remains limited. By histological analysis and sequencing, we found that rabbit interscapular adipose tissues (iATs) converted from BATs to WATs within two years, and we identified a total of 631 differentially expressed lncRNAs (DELs) during the transformation process. Several signal pathways were involved in the transformation from BAT to WAT. A novel lncRNA that was highly expressed in the iATs of aged rabbits was validated to impair brown adipocyte differentiation in vitro. Our study provides a comprehensive catalog of lncRNAs involved in the transformation from BATs to WATs in rabbits, facilitating a better understanding of adipose biology.

Abstract

Brown adipose tissues (BATs) convert to a "white-like" phenotype with age, a process also known as "aging-related BAT whitening (ARBW)". Emerging evidence suggests that long non-coding RNAs (lncRNAs) are widely involved in adipose biology. The rabbit is an ideal model for studying the dynamics of ARBW. In this study, we performed histological analysis and strand-specific RNA sequencing (ssRNA-seq) of rabbit interscapular adipose tissues (iATs). Our data indicated that the rabbit iATs underwent ARBW from 0 days to 2 years of age, and a total of 2281 novel lncRNAs were identified in the iATs. The classical rabbit BATs showed low lncRNA transcriptional complexity compared to white adipose tissues (WATs). A total of 631 differentially expressed lncRNAs (DELs) were identified across the four stages. The signal pathways of purine metabolism, the Wnt signaling pathway, the peroxisome proliferator-activated receptor (PPAR) signaling pathway, the cyclic guanosine monophosphate (cGMP)/cGMP-dependent protein kinase (cGMP-PKG) signaling pathway and lipid and atherosclerosis were significantly enriched by the DELs with unique expression patterns. A novel lncRNA that was highly expressed in the iATs of aged rabbits was validated to impair brown adipocyte differentiation in vitro. Our study provides a comprehensive catalog of lncRNAs involved in ARBW in rabbits, which facilitates a better understanding of adipose biology.
Introduction
Mammals contain white adipose tissues (WATs), which function as energy depositories, and brown adipose tissues (BATs), which function in energy expenditure [1]. BAT is densely packed with mitochondria that express high levels of uncoupling protein 1 (UCP1), which allows proton leakage to uncouple respiration from ATP synthesis, playing an important role in thermogenesis for temperature homeostasis [1,2]. The BAT content is negatively correlated with the total fat deposition of the body [3,4]. In humans, BAT activity has beneficial metabolic effects on obesity, insulin resistance and atherosclerosis [5]. Although BAT can be found in most newborn animals, BAT development differs among species [6-8]. Hibernating animals and some rodents retain their BATs into adult life [9,10]. In most mammals, such as ruminants, rabbits and humans, BATs undergo a poorly elucidated conversion to a "white-like" phenotype with age, which is known as "aging-related BAT whitening (ARBW)" [11]. However, the regulatory mechanisms underlying ARBW remain unknown.
Rabbit (Oryctolagus cuniculus) is an economically important domestic animal due to its high-quality meat, fur and hair [22,23]. Rabbits have lower fat deposition than other mammals, such as swine, cattle and sheep [24]. The BAT content and activity may account for the naturally low fat deposition of rabbits. Thus, the rabbit can be used as an animal model for studying the dynamics of ARBW. BAT development is important for the temperature homeostasis of newborn rabbits, especially those born at cold ambient temperatures [10]. Investigation of the molecular mechanisms underlying the process of ARBW could contribute to improving the welfare and survival rate of newborn rabbits. However, our knowledge of the lncRNAs that mediate ARBW in rabbits remains largely lacking.
Interscapular adipose tissue (iAT) is the major BAT depot of rabbits [10]. In this study, we carried out a histological analysis to determine the process of ARBW of iATs in rabbits. We performed strand-specific RNA sequencing (ssRNA-seq) to identify lncRNAs involved in the process. Our data revealed the development of ARBW in rabbits, and a total of 2281 novel lncRNAs were identified in our samples. Furthermore, classical rabbit BAT was a tissue of low lncRNA transcriptional complexity. We identified lncRNAs that were significantly correlated with their flanking or reference genes. Clustering analysis of differentially expressed lncRNAs (DELs) revealed that lncRNAs with different stage-specific expression patterns played different cis-regulatory roles during ARBW. One DEL, MSTRG.2316.1, was validated to impair brown adipocyte differentiation in vitro. Our work is the first report of the dynamics of lncRNA regulatory mechanisms underlying ARBW in rabbits and facilitates a better understanding of adipose biology.
Ethics Approval
All surgical procedures involving rabbits were performed according to the approved protocols of the Biological Studies Animal Care and Use Committee, Sichuan Province, China. Rabbits had free access to food and water under normal conditions and were humanely sacrificed as necessary to ameliorate suffering.
Tissue Sample Preparation, Histological Analysis and Immunohistochemistry (IHC)
In this study, Tianfu Black rabbits (a native breed of Sichuan province, China) were raised at the breeding center of Sichuan Agricultural University, Ya'an, China. The rabbits were given ad libitum access to a standard diet and water as described previously [20]. To determine the lncRNA dynamics during BAT development in rabbits, a total of 12 samples were collected from iATs at four growth stages, 0 days (D0, infant stage), 15 days (D15, early whitening stage), 85 days (D85, puberty stage) and 2 years (Y2, aged stage), under sterile conditions, with three individuals at each stage. The samples for ssRNA-seq were snap-frozen in liquid nitrogen and stored at −80 °C until RNA extraction. The samples for the histological assay were fixed in 4% paraformaldehyde at 4 °C overnight, embedded in paraffin and sliced. Haematoxylin and eosin (H&E) staining was carried out after deparaffinization according to standard protocols as described in our previous study [25]. For IHC, the slices were incubated with the primary UCP1 antibody (1:100, Sangon Biotech, Shanghai, China) at 4 °C overnight and washed three times with phosphate-buffered saline (PBS). Then, the slices were incubated with the secondary antibody (1:500, rabbit anti-mouse IgG, Sangon Biotech, Shanghai, China) for 45 min. All slices, including those for H&E staining and IHC, were observed using an Olympus BX-50F light microscope (Olympus Optical, Tokyo, Japan).
ssRNA-seq and lncRNA Identification
The ssRNA-seq was done following our previous study [20]. Briefly, the total RNA of the samples was extracted using TRIzol reagent (Invitrogen, Hong Kong, China), and 1 µg RNA was used to construct a strand-specific library using the deoxyuridine triphosphate (dUTP) method. All purified libraries were sequenced on an Illumina NovaSeq 6000 platform. Finally, 150 bp oriented paired-end reads were generated. The quality of the ssRNA-seq reads was checked using the Fastqc program (v0.11.8) [26]. Sequencing adapters and low-quality reads were removed using Cutadapt (v3.2) [27] software with parameters of '-a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC -A AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT -e 0.1 -m 100 -cut 0 -O 13'. Clean reads were aligned to the rabbit reference genome (OryCun2.0, Ensembl release 101) using HISAT2 (v2.1.0) [28] with the strand-specific parameter '-rf' and the parameter '-dta' for the downstream transcriptome reconstruction. Reconstruction of the transcriptome was conducted using Stringtie (v2.1.1) [29]. The transcripts generated from all samples were merged using Stringtie with the parameter 'merge' to generate a consensus transcriptome. Transcripts shorter than 200 bp were discarded, and only transcripts containing multiple exons and having a transcripts per million (TPM) value > 0.1 in at least one sample were kept for downstream analyses. To identify novel transcripts, the consensus transcriptome was compared to the rabbit reference transcriptome (OryCun2.0, Ensembl release 101) using Gffcompare (v0.11.2) [30]. Transcripts located in intergenic regions or within a single intron, overlapping the opposite strand of annotated transcripts, or inadequately overlapping an annotated exon were retained. For the retained transcripts, CPC2 (v0.1) [31], CPAT (v2.0.0) [32] and CNCI (v2.0) [33] were used to check their protein-coding capacity, and PfamScan [34] was used to align them to the Pfam database to remove potential protein-coding transcripts (http://pfam.xfam.org/, accessed on 10 November 2021). Only the non-coding transcripts consistently identified by CPC2, CPAT, CNCI and PfamScan were considered credible lncRNAs. Most of the rabbit lncRNAs deposited in current databases have not yet been functionally annotated, and lncRNAs share poor conservation across species, which raises challenges in inferring the functions of lncRNAs [35]. Recent studies demonstrated that lncRNAs could regulate their flanking genes through cis-regulation [36]. We performed lncRNA category-based cis-regulation prediction. For AS lncRNAs, SO lncRNAs and intronic lncRNAs, the protein-coding genes located at the loci of the lncRNAs were considered potential cis-regulated target genes. For lincRNAs, the protein-coding genes located within 100 kb of the lincRNA were considered potential cis-regulated target genes. The potential cis-regulated targets were used to predict lncRNA functions.
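The filtering criteria described above can be restated as a single predicate. The sketch below is a minimal illustration of that logic; the transcript fields and tool-output flags are hypothetical, since a real pipeline would parse Stringtie GTFs and the CPC2/CPAT/CNCI/PfamScan outputs.

```python
# Minimal sketch of the lncRNA filtering logic described above; the dictionary
# fields are assumed placeholders, not actual tool output formats.
def is_credible_lncrna(tx):
    long_enough = tx["length"] >= 200                  # discard < 200 bp
    multi_exon = tx["n_exons"] > 1                     # multi-exon transcripts only
    expressed = max(tx["tpm_per_sample"]) > 0.1        # TPM > 0.1 in >= 1 sample
    non_coding = (not tx["cpc2_coding"] and not tx["cpat_coding"]
                  and not tx["cnci_coding"] and not tx["pfam_hit"])
    return long_enough and multi_exon and expressed and non_coding

tx = {"length": 850, "n_exons": 3, "tpm_per_sample": [0.0, 0.4, 1.2],
      "cpc2_coding": False, "cpat_coding": False,
      "cnci_coding": False, "pfam_hit": False}
print(is_credible_lncrna(tx))  # -> True
```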
Transcriptomic Quantification and Differential Expression Analysis
To identify DELs during ARBW of the rabbit iATs, we quantified the expression levels of the lncRNAs. The Stringtie options '-eB' and '-A' were used to estimate raw read counts and TPM. The raw read counts were used as inputs to identify DELs in pairwise comparisons over the time course of D0, D15, D85 and Y2 using DESeq2 [37]. The p values of the hypothesis tests were adjusted using the false discovery rate (FDR) method. The lncRNAs with |log2(fold-change)| > 1.5 and FDR < 0.01 were considered DELs. Based on the TPM values, all DELs were clustered using R software (v4.4.1), and the clustering result was visualized using the R package ComplexHeatmap [38]. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was conducted using the R package clusterProfiler [39], and enriched pathways with a p value < 0.05 were considered significant.
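The final DEL filter is a simple thresholding step on the DESeq2 results table. DESeq2 itself is an R package, so the pandas sketch below only mimics that last filter; the column names follow DESeq2 conventions, and the values are made up for illustration.

```python
import pandas as pd

# Apply the DEL thresholds from the text: |log2FC| > 1.5 and FDR < 0.01.
res = pd.DataFrame({
    "lncRNA": ["MSTRG.2316.1", "MSTRG.11968.1", "MSTRG.17113.1"],
    "log2FoldChange": [3.2, 0.6, -2.1],   # hypothetical values
    "padj": [1e-6, 0.2, 4e-3],            # FDR-adjusted p values
})

dels = res[(res["log2FoldChange"].abs() > 1.5) & (res["padj"] < 0.01)]
print(dels["lncRNA"].tolist())  # -> ['MSTRG.2316.1', 'MSTRG.17113.1']
```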
Cell Culture and Plasmid-Mediated Overexpression
The brown preadipocytes (BPAs) were isolated from the iATs of newborn Tianfu Black rabbits using the collagenase I (Gibco, Carlsbad, CA, USA) method as described in our previous study [21]. The BPAs were placed into 12-well plates at a density of 3×10^5 cells per plate in complete medium (Dulbecco's Modified Eagle Medium (DMEM) with high glucose, supplemented with 10% fetal bovine serum (FBS)) (Gibco, Carlsbad, CA, USA), and the BPAs were kept in a humidified incubator at 37 °C and 5% CO2. Upon reaching approximately 80% confluence, the cells were induced using differentiation medium I (DMEM with high glucose, supplemented with 10% FBS, 500 µM 3-isobutyl-1-methylxanthine [IBMX], 10 µg/mL insulin, 50 nM T3 and 5 µM dexamethasone) for 2 days. The cells were then cultured in differentiation medium II (DMEM with high glucose, supplemented with 10% FBS, 500 µM IBMX, 2.5 µg/mL insulin, 50 nM T3 and 1 µM rosiglitazone) for 2 days. Finally, the cells were cultured in differentiation medium III (DMEM with high glucose, supplemented with 10% FBS, 500 µM IBMX, 50 nM T3 and 1 µM rosiglitazone) for one additional day. The cells were treated with 10 µM isoproterenol for 4 h before harvesting. The IBMX, insulin, T3, dexamethasone and rosiglitazone were purchased from Sigma-Aldrich (Shanghai, China). The accumulated lipid droplets were measured by Oil Red O staining. The lipid-bound Oil Red O dye was extracted with 2.5 mL isopropanol. The optical density at 510 nm (OD510) of the Oil Red O eluate was used to quantify the degree of lipid accumulation.
For lncRNA overexpression (OE), the pcDNA3.1(+) vector carrying MSTRG.2316.1 and the empty vector were purchased from Sangon Biotech, Shanghai, China. BPAs growing in complete medium at approximately 80% confluence were transfected with the MSTRG.2316.1 vector or the empty vector at a concentration of 1.6 µg/mL using Lipofectamine 3000 reagent (ThermoFisher, Carlsbad, CA, USA) according to the manufacturer's instructions. After eight hours, the cells were induced to differentiate (set as day 0). The cells were harvested to detect the OE efficiency at 48 h.
Quantitative Real-Time PCR (qRT-PCR)
Total RNA was extracted using TRIzol reagent according to the manufacturer's instructions. The qRT-PCR primers were designed using the online Primer3 software (Table S1). Total RNA was reverse transcribed to complementary DNA (cDNA) using the PrimeScript RT Reagent Kit with gDNA Eraser (TAKARA, Dalian, China). qPCR on the cDNA template was then performed using the SYBR II master mix kit (TAKARA, Dalian, China) on a Bio-Rad CFX manager according to the manufacturer's instructions. The amplification reaction was conducted under the following program: pre-denaturation at 95 °C for 10 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing/extension at 59 °C for 20 s. Melting curve analysis was performed from 65 to 95 °C with an increment of 0.5 °C. All qRT-PCR Ct values were normalized to the Ct value of the ACTB gene and to group D0 using the 2^−ΔΔCt method.
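The 2^−ΔΔCt normalization used here is a standard two-step calculation, and the sketch below implements it directly; the Ct values shown are hypothetical, not data from the study.

```python
def relative_expression(ct_target, ct_actb, ct_target_ref, ct_actb_ref):
    """2^-ddCt relative expression: normalize the target to ACTB, then to the
    reference group (D0), as described in the text."""
    d_ct = ct_target - ct_actb                 # dCt of the sample
    d_ct_ref = ct_target_ref - ct_actb_ref     # dCt of the D0 reference
    return 2.0 ** -(d_ct - d_ct_ref)

# Hypothetical Ct values for illustration only:
print(relative_expression(ct_target=26.0, ct_actb=18.0,
                          ct_target_ref=24.0, ct_actb_ref=18.0))  # -> 0.25
```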
Statistical Analysis
Statistical analyses, including the t-test and one-way analysis of variance (ANOVA), were conducted in R software. A p value < 0.05 was considered significant.
Histological Dynamics of ARBW in Rabbits
According to the histological analysis, the brownish color of the iATs gradually faded during the development of the rabbit iAT (Figure 1A, upper panel). There was an increase in cell diameter from D0 to Y2, and obvious heterogeneity of the adipose tissue was found at D15 and D85. The cells at D0 contained multiple small triglyceride droplets (multilocular adipocytes), while cells at Y2 contained a single large lipid droplet (unilocular adipocytes) (Figure 1A, middle panel). The ratio of multilocular adipocytes to unilocular adipocytes gradually decreased from D0 to Y2, and the iATs at D85 contained a very low proportion of multilocular adipocytes (Figure 1A, middle panel). The IHC assay showed that the expression levels of UCP1 protein gradually declined during the development of the iATs (Figure 1A, bottom panel). Furthermore, the results of the qRT-PCR assay showed that the transcriptional copy numbers of the mitochondrial genes, including CYTB, COX2 and ND1, dramatically decreased from D0 to D15 and then gradually decreased from D15 to Y2 at the tissue level (Figure 1B).

Figure 1 (caption, in part): The expression levels of mitochondrial genes were detected at four growth stages during rabbit iAT whitening using qRT-PCR. The expression was normalized to the ACTB gene and D0. The data show the means of three independent experiments. Two technical replicates were set for each individual experimental replicate. "Rel." represents "Relative".
Identification and Characterization of lncRNAs in Rabbit iATs
To better define the active lncRNAs during ARBW of the iATs, we reconstructed a credible transcriptome of the iATs at the growth stages of D0, D15, D85 and Y2 in rabbits (n = 3 per stage). We performed paired-end ssRNA-seq for each tissue and obtained an average of 122.33 million clean reads from each sample, of which an average of 94.01% were properly mapped to the rabbit genome (Table S2). A total of 2281 novel lncRNAs were identified using different machine-learning models (Figure 2A). The novel lncRNAs were then classified into 4 categories: lincRNAs (1058), SO lncRNAs (291), intronic lncRNAs (548) and AS lncRNAs (384) (Figure 2B, upper panel). By integrating the 1640 Ensembl-annotated lncRNAs (all classified in the lincRNA category) that were expressed in our samples, we obtained a total of 3921 lncRNAs expressed in the rabbit iATs (Figure 2B, bottom panel). Analysis of the rabbit iAT lncRNA structure found that most lincRNAs, intronic lncRNAs and AS lncRNAs contained one or two exons, especially the intronic lncRNAs, while more SO lncRNAs contained multiple exons (Figure 2C). Compared to protein-coding genes, lncRNAs were expressed at lower levels (Figure 2D), which is in line with our previous lncRNA studies in rabbits [20,21] and studies in other animals [2]. Analysis of the lncRNA transcriptional complexity of the iATs, visceral white adipose tissues (vWATs) and skeletal muscle tissues revealed that the complexities of the iATs and skeletal muscle tissues were lower than those of the vWATs. The iATs at D0, which represented the classical BAT, were the least complex tissue (the top 10 expressed lncRNAs accounted for approximately 80% of total lncRNA expression). Our data showed that the top 10 expressed lncRNAs in BATs were novel lncRNAs, such as MSTRG.11968.1 (an intronic lncRNA at ZNF777), MSTRG.17113.1 (an AS lncRNA of MAPK8) and MSTRG.14188.65 (a lincRNA located on an unplaced genome scaffold) (Figure 2E). We characterized a catalog of credible novel lncRNAs expressed in the rabbit iATs and showed that classical BATs are tissues of low lncRNA transcriptional complexity.
We explored the expression correlation between lncRNAs and the protein-coding genes located at the corresponding lncRNA loci. For the AS lncRNAs, the expression patterns of 58 were significantly positively correlated with their anti-sense protein-coding genes across all samples (Pearson's correlation coefficient, p < 0.05). For instance, the AS lncRNAs located at the loci of FTX3, ENSOCUT00000058972, ID2, KCNA7 and ENSOCUT00000034065 were the five lncRNAs with the highest correlation coefficients with their corresponding protein-coding genes (Figure 2F). On the other hand, 10 AS lncRNAs were significantly negatively correlated with their anti-sense protein-coding genes, such as the AS lncRNAs located at the loci of Ubiquitin regulatory X (UBX) domain protein 2B (UBXN2B), SRY-box transcription factor 5 (SOX5) and zinc finger and BTB domain containing 21 (ZBTB21) (Figure 2G). For the intronic lncRNAs, 91 were significantly positively correlated and only 2 significantly negatively correlated with their corresponding protein-coding genes (Figure S1A,B). For the SO lncRNAs, 77 were significantly positively correlated and only 2 significantly negatively correlated with their corresponding protein-coding genes (Figure S1C,D). Alluvial diagram analysis of the AS, intronic and SO lncRNAs showed that approximately 25% of these lncRNAs were significantly correlated (p < 0.05) with their corresponding protein-coding genes, and that the AS lncRNAs negatively correlated with their corresponding protein-coding genes tended to be enriched on the negative strand of the genome (Figure 2H). We allocated the lincRNAs to the flanking regions of annotated protein-coding genes and found that 2388 lincRNAs resided in the flanking regions of 7733 annotated protein-coding genes. The expression patterns of 731 lincRNAs were significantly positively correlated and those of 92 lincRNAs significantly negatively correlated with their flanking protein-coding genes (Table S3). Overall, approximately 27% of the total expressed lncRNAs were significantly correlated with their flanking (for lincRNAs) or reference (for intronic, SO and AS lncRNAs) genes. (Figure 2F-H caption: Pearson's correlation coefficients and p values are marked after the protein-coding gene symbols; "*" represents p < 0.05 and "**" represents p < 0.01. In (H), "pos" denotes lncRNAs positively correlated with their corresponding protein-coding genes, "neg" negatively correlated, and "n.s." not significantly correlated; most lncRNAs in "neg" were AS lncRNAs transcribed from the negative strand (red stream). The left heatmap depicts lncRNA expression patterns and the right heatmap the anti-sense protein-coding gene expression patterns across samples.)
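A minimal sketch of the per-pair correlation test behind these counts, assuming Pearson's r computed on log-transformed TPM values across the 12 samples (whether the authors log-transformed is not stated); the vectors below are invented for illustration:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical TPM vectors across the 12 samples (4 stages x 3 replicates)
lnc_tpm = np.array([5.1, 4.8, 5.4, 3.2, 3.0, 3.5, 1.9, 2.1, 1.8, 0.6, 0.5, 0.7])
gene_tpm = np.array([40.0, 38.0, 44.0, 25.0, 27.0, 24.0, 15.0, 16.0, 13.0, 5.0, 4.5, 6.1])

r, p = pearsonr(np.log2(lnc_tpm + 1), np.log2(gene_tpm + 1))
label = "pos" if (p < 0.05 and r > 0) else ("neg" if (p < 0.05 and r < 0) else "n.s.")
print(f"r = {r:.2f}, p = {p:.3g} -> {label}")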
The "pos" represents "a positive correlation with protein-coding genes" and "neg" represents "expression of lncRNAs negatively correlated with protein-coding genes" and n.s. represents "a negative correlation with protein-coding genes". Most of the lncRNAs in "neg" came from AS lncRNAs that transcribed from negative strand (red stream). Figure 3C and Table S4). Therefore, the results of reconstruction of transcriptome and quantification of lncRNA expression were reliable.
Dynamics of lncRNA Expression during ARBW of Rabbit iATs
A k-means clustering approach based on the TPM values was used to sort all DELs (differentially expressed lncRNAs) detected in paired comparisons between stages into eight clusters, BATR1 to BATR8 (Figure 3D). The DELs in BATR3 and BATR4 were expressed at D0 and downregulated at D15; KEGG pathway enrichment showed that purine metabolism and the Wnt signaling pathway were the most significantly enriched pathways of the DELs in BATR3 and BATR4, respectively. The DELs in BATR1, BATR2 and BATR8 were upregulated from D0 to D15 and downregulated at D85; these DELs were significantly enriched in white adipose development-related pathways, such as insulin resistance, the glucagon signaling pathway and the peroxisome proliferator-activated receptor (PPAR) signaling pathway. The DELs in BATR5 and BATR6 were upregulated from D0 to D85; the cyclic guanosine monophosphate (cGMP)/cGMP-dependent protein kinase (cGMP-PKG) signaling pathway and sphingolipid metabolism were the most significantly enriched pathways of the DELs in BATR5 and BATR6, respectively, and the cGMP-PKG pathway was also significantly enriched by the DELs in BATR6. The DELs in BATR7 were constantly expressed from D0 to D85 but dramatically upregulated at Y2; the top three significantly enriched pathways of the DELs in BATR7 were adrenergic signaling in cardiomyocytes, the Wnt signaling pathway, and lipid and atherosclerosis (Figure 3D), with fatty acid metabolism and the PPAR signaling pathway also enriched.
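A minimal sketch of such a clustering, assuming log-transformed, per-gene standardized TPM profiles as input (the paper does not state its exact preprocessing); the expression matrix here is randomly generated for illustration:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical matrix: rows = DELs, columns = mean TPM at D0, D15, D85 and Y2
tpm = np.random.default_rng(0).lognormal(mean=1.0, sigma=1.0, size=(631, 4))

logt = np.log2(tpm + 1)  # log-transform to tame the dynamic range
z = (logt - logt.mean(axis=1, keepdims=True)) / logt.std(axis=1, keepdims=True)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(z)  # eight clusters, as for BATR1-BATR8
for c in range(8):
    members = z[km.labels_ == c]
    print(f"cluster {c + 1}: n = {len(members)}, mean profile = {members.mean(axis=0).round(2)}")

Standardizing each gene's profile before clustering groups genes by the shape of their trajectory across stages rather than by absolute expression level, which is what the stage-pattern descriptions above imply.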
Selection of lncRNA Candidates and Functional Validation of lncRNA MSTRG.2316.1
To focus our lncRNA functional validation efforts, we ranked candidate lncRNAs by their abundance, differential expression during ARBW and significant correlation with their flanking or reference genes. A total of 10 lncRNA candidates were selected (Table S5). When searching the 10 lncRNAs against the NONCODE database using BLAST, we found that 7 of them had significant sequence hits (E-value < 1 × 10−6) in human and mouse lncRNAs, which might indicate sequence conservation of these lncRNAs among rabbits, mice and humans (Table S6). The histological analysis in this study (Figure 1A) and the read coverage of the BAT master marker UCP1 (Figure 4A) during the ARBW indicated that BATs existed from D0 to D85 and that only the samples at Y2 had completed the whitening process. This spurred our interest in cluster BATR7, which contained DELs that were stably expressed from D0 to D85 but dramatically upregulated at Y2 (Figure 3D). One candidate in BATR7, MSTRG.2316.1, which had the highest TPM value among our lncRNA candidates, was validated by qRT-PCR: its expression changed little from D0 to D85 but was markedly upregulated 35-fold at Y2 (Figure 4B). The 1449 bp lincRNA MSTRG.2316.1 is located on chromosome 12, contains 2 exons on the positive strand of the DNA, and its read coverage dramatically increased at Y2 (Figure 4C). MSTRG.2316.1 was upregulated from D85 to Y2 during the ARBW of iATs and during visceral white adipocyte differentiation, suggesting that it is a potential negative regulator of brown adipocyte development (Figure 4D).
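The NONCODE screen reduces to filtering BLAST tabular output on the E-value column. A sketch of that step, assuming BLAST was run with -outfmt 6 (the input file name is hypothetical):

# Hypothetical filter over BLASTN tabular output (-outfmt 6) against NONCODE;
# keeps candidate lncRNAs with at least one hit below the E-value cutoff.
E_CUTOFF = 1e-6

conserved = set()
with open("candidates_vs_noncode.outfmt6") as handle:  # assumed file name
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        query, evalue = fields[0], float(fields[10])   # columns 1 and 11 of -outfmt 6
        if evalue < E_CUTOFF:
            conserved.add(query)

print(f"{len(conserved)} candidate lncRNAs with significant NONCODE hits")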
To validate the functions of MSTRG.2316.1, we first established a BPA differentiation model. BPAs isolated from the iAT of rabbits at D0 were fusiform or triangular (Figure 4E upper panel). Oil red O staining of the induced mature brown adipocytes showed that many lipid droplets had accumulated after differentiation for 5 days (Figure 4E bottom panel). The expression levels of common adipose markers, such as peroxisome proliferator activated receptor gamma (PPARG), CCAAT enhancer binding protein alpha (CEBPA), adiponectin C1Q and collagen domain containing (ADIPOQ) and fatty acid binding protein 4 (FABP4), and of the BAT-specific markers cell death inducing DFFA like effector a (CIDEA), ELOVL fatty acid elongase 6 (ELOVL6), PPARG coactivator 1 alpha (PGC1A), peroxisome proliferator activated receptor alpha (PPARA) and UCP1 were significantly upregulated during differentiation of rabbit BPAs (Figure 4F). By contrast, the expression levels of MSTRG.2316.1 were significantly downregulated after the induced differentiation of rabbit BPAs (Figure 4F).
To validate the function of MSTRG.2316.1, we performed gain-of-function analysis using plasmid vector-mediated overexpression (OE) of MSTRG.2316.1 in cultured BPAs. OE led to a significant six-fold increase in MSTRG.2316.1 levels compared to cells treated with the empty vector (Figure 4G). Oil red O staining showed that OE of MSTRG.2316.1 impaired brown adipose differentiation compared to transfection with the empty plasmid vector (Figure 4H), and quantification of the Oil red O staining indicated that OE of MSTRG.2316.1 significantly decreased lipid accumulation (p < 0.01; Figure 4I). Additionally, qRT-PCR analysis showed that OE of MSTRG.2316.1 significantly decreased FABP4, UCP1, CIDEA, ELOVL6, PGC1A, ADIPOQ and PPARA expression (p < 0.05; Figure 4J). Our data showed that MSTRG.2316.1 is a negative regulator of BPA differentiation in vitro. (Figure 4B,F,G,J caption: qRT-PCR data show the means of three independent experiments with two technical replicates each; "*" represents p < 0.05 and "**" represents p < 0.01.)
Discussion
The histological characteristics of BAT have been described in several species, such as humans, mice and goats [6,11,40]. Our histological assays suggested that the characteristics of rabbit BATs were in line with those species, including multiple small triglyceride droplets per cell, smaller cell size and high expression of UCP1 and mitochondrial genes. The BAT of most mammals undergoes ARBW. Previous studies have revealed differences in the speed of ARBW among species: hibernating animals retain BAT into adulthood, humans retain trace amounts of BAT into adulthood and ruminants retain BAT for only about 30 days after birth [6,10,40,41]. Our data revealed that pubertal rabbits still contained a low proportion of multilocular adipocytes, suggesting that the ARBW of rabbits was slower than that of hibernating animals and ruminants and similar to that of humans. The long persistence of BAT may help explain the relatively low fat deposition in rabbits.
We identified 2281 credible novel lncRNAs by reconstructing the transcriptome, indicating the importance of lncRNAs in regulating the ARBW of rabbits. Intronic lncRNAs are transcribed from a single intron of protein-coding genes. Our data showed that intronic lncRNAs contained few exons, which suggests that the exon number of lncRNAs may be constrained by the genomic elements of the annotated protein-coding genes. Transcriptional complexity is expressed as the fraction of total RNA accounted for by the 10 or 100 most highly expressed genes [42]. Our analysis revealed that classical BAT (iAT at D0) is a tissue of low lncRNA transcriptional complexity. On the other hand, the lncRNA transcriptional complexity of iATs and skeletal muscle tissues was similar, while vWATs had a higher lncRNA transcriptional complexity than both, which may point to a transcriptomic difference between energy expenditure- and energy storage-related tissues.
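Under this definition, complexity can be summarized by the share of total expression captured by the top-10 lncRNAs (a high share means low complexity). A small sketch with invented TPM vectors:

import numpy as np

def top_n_fraction(tpm, n=10):
    """Fraction of total lncRNA expression contributed by the n most expressed lncRNAs."""
    tpm = np.sort(np.asarray(tpm, dtype=float))[::-1]
    return tpm[:n].sum() / tpm.sum()

# Invented lncRNA TPM vectors: one dominated by a few transcripts, one more even
bat_d0 = np.array([900, 300, 250, 200, 150, 120, 100, 90, 80, 60] + [1] * 500)
vwat = np.random.default_rng(1).lognormal(mean=1.0, sigma=1.0, size=510)

print(f"iAT at D0 (BAT-like): {top_n_fraction(bat_d0):.0%}")  # high share -> low complexity
print(f"vWAT-like:            {top_n_fraction(vwat):.0%}")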
Except for lincRNAs, the AS, intronic and SO lncRNAs were all transcribed from the loci of annotated protein-coding genes. Previous studies indicated that lncRNAs transcribed from the loci of protein-coding genes might regulate the corresponding protein-coding genes [43,44]. For lincRNAs, we predicted their flanking protein-coding genes. By analyzing the expression relationships between these types of lncRNA and their corresponding protein-coding genes, we provide a valuable resource of lncRNAs that are significantly correlated with their corresponding protein-coding genes, which can be used to identify functional lncRNAs efficiently.
In this study, we analyzed lncRNA dynamics during rabbit iAT whitening. By comparing different stages of rabbit iAT development, we detected 631 DELs in the paired comparisons, demonstrating that lncRNAs are widely involved in BAT whitening in rabbits. We classified the DELs into different clusters using the k-means method. BAT is specialized for energy expenditure and heat generation, which depends on the function of mitochondria [45]. Purine nucleotides have been shown to be a constitutive inhibitor of UCP1-mediated proton conductance across the mitochondrial inner membrane, constituting the default shut-off mechanism in the absence of thermogenic demand [46], and Wnt signaling inhibits brown adipogenesis [47]. Our KEGG enrichment showed that the most significant pathways enriched by the lncRNAs in BATR3 and BATR4 were purine metabolism and the Wnt signaling pathway, respectively, which indicates that lncRNAs rapidly downregulated from D0 to D15 might mediate the ARBW by regulating their cis-regulated genes related to purine metabolism. A previous study indicated that a high-fat diet induced BAT whitening and insulin resistance in mice [48]. In our study, the most significant KEGG pathways enriched by the lncRNAs in BATR1, BATR2 and BATR8 were white adipose development-related pathways, such as insulin resistance and the PPAR signaling pathway, suggesting pathway similarities between obesity-related BAT whitening and ARBW, and the involvement of lncRNAs in both. The cGMP-PKG pathway can promote the browning of WATs [49]. Our data indicated that the lncRNAs constantly upregulated from D0 to D85 were involved in this pathway, suggesting that these upregulated lncRNAs might play inhibitory roles during rabbit ARBW. The KEGG enrichment showed that the lncRNAs in BATR7 were widely involved in lipid metabolism pathways, such as lipid and atherosclerosis, fatty acid metabolism and the PPAR signaling pathway, indicating that these lncRNAs might play crucial roles in white adipocyte development in the iATs of aged rabbits.
We characterized the lncRNA MSTRG.2316.1 as a potential inhibitor that was upregulated in the iATs of aged rabbits and during the differentiation of WATs. During rabbit BPA differentiation, the common adipose markers and BAT-selective markers were all upregulated [50,51], similar to previous mouse BPA differentiation models, indicating that we established a robust rabbit BPA differentiation model. OE of MSTRG.2316.1 dramatically impaired lipid accumulation in mature brown adipocytes and decreased expression of the common adipose marker FABP4 and the thermogenic marker genes (UCP1, CIDEA, ELOVL6, PGC1A and PPARA), indicating that MSTRG.2316.1 acts as an inhibitor during BAT development and might mediate the ARBW in aged rabbits. The molecular mechanisms by which MSTRG.2316.1 regulates ARBW still need further investigation.
Conclusions
In summary, we provided comprehensive histological dynamics and detected a total of 2281 novel lncRNAs during the ARBW of rabbit iATs. We revealed that classical rabbit BATs are tissues of low lncRNA transcriptional complexity. Dynamic analyses identified 631 DELs during the ARBW. The purine metabolism pathway, Wnt signaling pathway, PPAR signaling pathway, cGMP-PKG signaling pathway and lipid and atherosclerosis were significantly enriched among the potential cis-regulated targets of the DELs with distinct expression patterns during the ARBW. The newly identified lncRNA MSTRG.2316.1 can play an inhibitory role during brown adipocyte differentiation. Our work provides evidence that lncRNAs are widely involved in the ARBW in rabbits, facilitating a better understanding of adipose biology. | 2021-11-17T16:12:10.765Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "30f01fc598b73a200b077478f9bc19974e1083b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/10/11/1176/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbc1c6ae77a85a81e5ef750305bbd18def580e04",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226753336 | pes2o/s2orc | v3-fos-license | Military Women: Changes in Representation and Experiences
Research on women in the military is becoming an increasingly important area of inquiry in the social sciences. Women are essential to the operation of contemporary armed services, and this has led to recent changes in organizational policies leaning toward equalizing the status of men and women in uniform. As the decades of the seventies and eighties witnessed an expansion in the role women would play in national defense, the twenty-first century is experiencing a movement toward parity between men and women in the military. This is the case not only in the United States, but in other nations as well. This movement toward gender equality is slow, as it goes against the traditional social norms of most societies and is met with a number of obstacles. Although this chapter focuses primarily on historical and contemporary changes in the representation and experiences of women in the United States armed forces, there is also a brief examination of the armed forces of other societies. (B. L. Moore, Department of Sociology, University at Buffalo, SUNY, Buffalo, NY, USA; e-mail: socbrend@buffalo.edu. © The Author(s) 2020. A. Sookermany (ed.), Handbook of Military Sciences, https://doi.org/10.1007/978-3-030-02866-4_80-1)
Introduction
The character of gender relations in the armed services has always reflected that of civilian society. In many ways, armed services reflect the cultures of the societies in which they exist. This is largely because the military, like other organizations, depends upon the larger society for resources. Thus it is effective only to the extent that it meets the demands of the various interest groups concerned with its activities. On the other hand, the military is unique in terms of its primary goal, national defense. History has revealed that cultural norms are sometimes overlooked in times of national crisis (consider the large number of women who served in the British and US militaries and those who fought for the Soviet Union during World War II). Today, we are witnessing a change in gender norms; women have developed a strong political voice that has moved societies toward gender equality. The military institution's coercive compliance structure has compelled service members to adhere to newly implemented gender policies over the last four decades. In the United States, legislative changes have led to the opening of all military positions, including combat, to women. Arguably, these policies would not have been enacted had they not been consistent with changes taking place in civilian society. This chapter discusses changes in the representation and experiences of women in the US armed services and, to a lesser degree, women in other countries.
Since the 1970s, the percentage of active duty servicewomen has been increasing. The United States has witnessed a number of legislative changes leading to increased opportunities for women to serve in the armed services. These changes are partly attributable to anti-discrimination regulations in civilian employment, which have led to changes both in the labor supply of women workers and on the demand side of the market. Title VII of the Civil Rights Act of 1964 prohibits discrimination on the basis of sex in employment-related matters. As a result of Title VII, more women entered the workforce in the United States. These women worked mostly in clerical or service jobs, but more were entering skilled trades and managerial positions. Additionally, the advent of the All-Volunteer Force required the services to compete with the civilian sector for qualified personnel (Moore 2017; Moskos 2000; US President Commission 1970). Advances in modern technology as well as the end of the Cold War also help to explain the expansion of women in the military (Moore 2017).
During World War I, approximately 12,500 women served in the Navy as yeomen and another 1400 served in the Navy Nurse Corps (Segal 1989). Approximately 14 African American women served as yeomen during the war (Miller 1995; Miller 1919). The Marine Corps enrolled 300 women to work in clerical positions (Holm 1982). These women were demobilized after the war without military recognition (Treadwell 1954; Segal 1989).
World War II created a need for large numbers of women to participate in militaries around the world. Thousands of women in the United States volunteered for military service, women in Britain were drafted into the military, and the Soviet Union assigned women as combatants. Although these women played an essential role in the war effort, they were not viewed as equal to their male counterparts in uniform. By most accounts, women who served during World War II did not aspire to compete with men; they were satisfied to do the jobs they were assigned in an effort to release men to fight the war (Treadwell 1954; Moore 1996, 2003; Noakes 2006). Britain formed an additional auxiliary, the Auxiliary Territorial Service (ATS), which was attached to the Army; the Women's Royal Naval Service (WRNS) was reactivated and attached to the Navy, and the Women's Auxiliary Air Force (WAAF) was attached to the Royal Air Force. More than 400,000 women are reported to have served in the British armed services during World War II (Noakes 2006: 131).
Similarly, the greatest number of American women, some 400,000, served in a variety of noncombatant assignments during World War II. The US War Department consulted with a representative from the British Information Service to determine how best to establish a women's auxiliary corps in the US military (Treadwell 1954; Moore 1996: 38). The Women's Army Auxiliary Corps (WAAC) was established in the United States in March 1942. The following year, the WAAC was converted to the Women's Army Corps (WAC), giving women temporary but full military status. Women were also recruited by the Navy to serve in the reserve corps as WAVES (Women Accepted for Volunteer Emergency Service). The Marine Corps Women's Reserve was established, as was the Coast Guard Women's Reserve.
Women were assigned mostly to clerical and administrative jobs to free men for combat. However, a small percentage of women served in such nontraditional roles as parachute riggers, aircraft mechanics, and intelligence (Moore 1996, 2003; Treadwell 1954). Some 6000 African American women served in racially segregated units in a variety of military occupations (including medical, clerical, and motor pool) during the war (Putney 1992: viii; Moore 1996). More than 850 African American women were deployed to Europe as part of the 6888th Central Postal Directory Battalion to redistribute years of backlogged mail. Some Japanese American women, many of whom were recruited from US internment camps, graduated from the Military Intelligence Service Language School at Fort Snelling, Minnesota, and were later assigned to work in intelligence (Moore 2003). Other Japanese American servicewomen worked in a variety of military occupations, including the medical, supply, and clerical fields (Moore 2003). Sociologist Brenda Moore (2003) gives a detailed account of the experiences of Japanese American women in the military during WWII; her book is based on in-depth interviews with some of the women who served and extensive archival analyses. American women of all racial and ethnic groups served in the United States, and some in combat theaters overseas, during World War II. These women were patriotic and for the most part did not question their role as noncombatants (Treadwell 1954; Moore 1996, 2003). By contrast, over 800,000 women served in the Soviet military on the Eastern Front during World War II, many of them combatants serving in the trenches alongside men (Myles 1981; Saywell 1986: 131). Women in the Soviet Union served in many military specialties, including aircrews, tank crews, gun detachments, nurses, and bomber pilots. Three female air regiments (the 586th Fighter Aviation, 587th Bomber Aviation, and 588th Night Bomber units) were formed for the Soviet Air Force by the pilot Marina Raskova (Saywell 1986). The 588th Night Bomber Regiment became part of the Fourth Air Army and was given elite status (Guards designation) for its performance (Myles 1981).
Post World War II and Beyond: Global Trends
Following World War II, the number of women in uniform declined precipitously as they returned to their pre-war lives. Most of the women in the Soviet military left, and those who remained found the military environment unwelcoming to women and their prior service deemed irrelevant (Saywell 1986; Segal 1993; Carreiras 2006). Great Britain was no longer drafting women, although conscription was still in place, and the ATS relied on women to volunteer (Noakes 2006). In Britain, priority was given to rebuilding the home front, and many women were released to occupational positions considered feminine (Noakes 2006). A similar trend occurred in the United States, where most women returned to their civilian lives when the war ended, although some opted to remain in uniform.
At the end of World War II, it became evident to the United States and Britain that the Soviet Union might pose a threat to Western democracy. All three nations worked on developing nuclear weapons, and the Cold War had begun. Both Great Britain and the United States recognized the potential need for women in any future wars and enacted legislation to secure a permanent place for women in the armed services. On 20 November 1946, the British Parliament confirmed that the women's auxiliary corps would be retained: the Auxiliary Territorial Service (ATS) and Women's Auxiliary Air Force (WAAF) would become members of the armed services, and the Women's Royal Naval Service (WRNS) would remain a civilian service (Noakes 2006). Two years later, in 1948, the United States gave permanent military status to women through the Women's Armed Services Integration Act of 1948 (PL-625). For the next two decades, the representation of women in the American armed services was restricted to 2%. In 1967, Public Law 90-130 removed the 2% restriction. Still, US women were disinclined to join the military, and their representation remained low.
There were more opportunities for women to serve in an All-Volunteer Force (AVF). Great Britain abolished conscription in 1960, more than a decade before the United States did. In both countries, women were encouraged to enter the AVF to help meet personnel goals. In the United States, the monetary and other work benefits of military service became an attractive career option for many racial minorities and women who found limited work options in civilian society. Women made up 3% of the active duty forces in 1974. Five years later, the number of women in the US military had increased threefold, and the number of African American women had increased fivefold (Dorn 1989; Moore 1991, 1996). Race has always been a salient issue in American society and has been reflected in trends in the representation and experiences of women in the US military throughout history. The overrepresentation of African American women among servicewomen, particularly in the Army, raised interest among military scholars. Some scholars raised the concern that the overrepresentation of Blacks in the military would result in them bearing an unfair burden of defense, arguing that the military should be reflective of the broader society (Dorn 1989). The services also began to examine the interaction effects of race and gender on the attitudes of military personnel. Some studies found that African American servicewomen were more pessimistic about the military's equal opportunity climate than white servicewomen (Moore and Webb 2000), and that African American female officers were more pessimistic than African American enlisted women (Dansby 2001; Rosenfeld et al. 1992). Although African American women were less satisfied with the equal opportunity climate than white women, they served longer terms and did not separate from service before their terms had expired as often as members of other racial/ethnic groups on active duty (Binkin et al. 1982: 52-53; Moore 1991, 2002). White women had a lower propensity to join the military and were less likely than African American women to complete their term of service (Binkin et al. 1982: 52-53; Moore 1991; Moore and Webb 2000). Among the challenges faced by the services was to create initiatives to attract more white women and to more effectively meet the equal opportunity needs, such as inclusivity, of African American women (Moore and Webb 1998, 2000).
Over the last four decades, changes in military laws and policies, largely influenced by a climate of equal employment opportunity for women in the broader society, have allowed women to fill a wider array of military occupations. The trend has been movement toward gender inclusion in the military in the United States as well as in other countries. In 1976, US Public Law 94-106 opened the three major service academies to women. Two years later, Congress passed legislation abolishing the Women's Army Corps as a separate unit, integrating women into the Regular Army.
In the 1980s, women in the US military began to advance to higher ranks. Laws requiring women officers to be appointed, promoted, or separated from service differently from men were abolished in 1980 through the Defense Officer Personnel Management Act (DOPMA). As a result of DOPMA, US servicewomen could be promoted to grades O-5 and above, and civilian husbands of female Nurse Corps officers were authorized dependent benefits (Moore 2017; Rostker et al. 1993). Since the 1990s, a greater variety of military occupations has been open to US servicewomen. Congress lifted the ban on women flying combat aircraft and serving on combat ships in 1991. In the same year, more than 40,000 women soldiers were deployed to the Persian Gulf region during Operations Desert Shield and Desert Storm; 15 were killed and two were taken prisoner of war (Lanning 2008: 7). US women also participated in military operations in Somalia between 1992 and 1994, deployed for peacekeeping duties in Haiti in 1995, and participated in combat operations in Kosovo in 1999 (Lanning 2008).
Following the war, a US General Accounting Office (GAO) report concluded that women had performed well in combat and were an integral part of the operation (GAO 1993). Still, jobs specifically associated with submarines (submarine sonar technicians, gun or missile crew members) remained closed, and the Armor, Infantry, Special Forces, Cannon Field Artillery, and Multiple Launch Rocket Artillery occupations would not open to US servicewomen until 2010.
Although women in various countries throughout history have shown that they are capable of performing in combat, the notion of opening direct combat jobs to women in the United States remained contentious even in the twenty-first century. Scholars, military officials, and news pundits all weighed in on the debate. Opponents emphasized biological differences between men and women, arguing that women are physically weaker than men, are at risk of becoming pregnant, and have an overall negative effect on the fighting capabilities of the American armed forces (see, e.g., Mitchell 1989, 1998). They raised the issue of military effectiveness, arguing that women lacked the strength and stamina to perform effectively in combat. It was also argued that the presence of women in combat units would undermine morale and cohesion (Mitchell 1989, 1998; van Creveld 2000; Kennedy-Pipe 2000). Other opponents maintained that, whether or not women are capable of performing as warriors, it is more important for women to bear and raise children than to go off to war (Bruen 1991).
Advocates for lifting the ban on women serving in combat maintained that women should be granted the same opportunities as men to fulfill their citizenship duties (Segal 1982; Stiehm 1981, 1998). Another concern raised by advocates early on was that the exclusionary policy hampered the career opportunities of women: active duty military personnel are not likely to be promoted to the highest military ranks without combat experience (Becraft 1992; Burke 1996; Devilbiss 1985; Holm 1991). Advocates for women serving in combat asserted that women's capabilities to perform effectively in war are equal to, and sometimes surpass, those of men (Segal 1982; Roush 1991; Holm 1991; Peach 1996). Others made the case that the combat exclusion law could not protect military women from danger during wartime, but rather limited their chances for career advancement (Segal 1982; Becraft 1992).
US military women were divided over the issue of whether they should serve in combat. Findings from surveys of Army women from 1993 to 1994 revealed that enlisted women, and women of color, were more likely to oppose assigning women to combat (Miller 2001). Female officers were found to have greater incentives to serve in combat than enlisted women, as they were more likely to plan a career in the military, less likely to have children, and more likely to perceive their command opportunities as limited without combat experience (Miller 2001). Most of the Army women surveyed were in favor of women being able to volunteer for combat, provided they could meet the physical requirements (Miller 2001).
The example of the Israeli Defense Force (IDF) was often referenced during these debates, as Israel has both drafted women and assigned them to combat units since the force's inception in 1948. Although women in the IDF were assigned to combat units, they did not deploy for combat but were evacuated when their unit went to war (Yuval-Davis 1981; Izraeli 1997). An exception was in 1948, during the Arab-Israeli War, when Israeli women took an active part in land battles. Early studies showed that, as a rule, the majority of Israeli female soldiers served in secretarial and clerical jobs, as did servicewomen in other countries (Gal 1986; Cohen 1997). Although Israeli women were conscripted, there were many categories exempting them from service, including marriage, having children, and religion. Women were also able to receive deferments to pursue a college education, so long as they completed military service upon graduation (Cohen 1997; Izraeli 1997). The Israeli Defense Force did not take all eligible 18-year-old women, but rather selected the number of women it needed to meet personnel quotas each year (Gal 1986; Yuval-Davis 1981; Klein 2002). Consequently, the entrance score requirements for women were higher than those for men (Gal 1986).
Israel continues to conscript both men and women into the IDF, and men and women train together in today's Israeli military. Still, studies reveal that even today the majority of women serving in the IDF are assigned to feminine roles, and their service is not valued as highly as that of men (Karazi-Presler et al. 2018; Rosman-Stollman 2018). Although Israeli women comprise nearly 34% of the IDF and 90% of military occupations are open to women, only 4.6% of all women serve as combat soldiers (Karazi-Presler et al. 2018). Rosman-Stollman (2018) reports that Israeli servicewomen serve primarily in the Education Corps or in human resources; unlike male-dominated military occupations, these positions, so Rosman-Stollman argues, do not generally yield tangible rewards in civilian society after servicewomen are discharged. Noteworthy is that Israeli women are overrepresented among junior officers, at 56% (Karazi-Presler et al. 2018). However, female officers in the IDF generally do not advance beyond the rank of major, representing only 14% of officers at the rank of colonel and above (Karazi-Presler et al. 2018).
Contemporary changes in the structure of the IDF have given female junior officers opportunities to be promoted and to serve in command positions in a variety of roles (Karazi-Presler et al. 2018). According to Karazi-Presler et al. (2018), whose study is based on in-depth retrospective interviews with 25 female officers in the Israeli military, junior officer women experience power and authority in their military positions. Given the gendered structure of the military, these junior officers often express ambivalence about the power they hold as officers: on the one hand, the women who had served as junior officers felt ashamed of wielding their power; on the other hand, many found that the power they experienced in the military empowered and strengthened them in other aspects of their lives long after they had left the military (Karazi-Presler et al. 2018). There is also evidence that women serving in combat units in the IDF feel more empowered and have a greater sense of self-efficacy even when they have experienced life-threatening situations (Shahrabani and Garyn-tal 2019). Women who served in mixed-gender combat units stated that they had to work harder than the men in order to prove themselves, but through the process of proving themselves, they came to believe more in their own abilities (Shahrabani and Garyn-tal 2019).
During the first few years of the twenty-first century, US servicewomen continued to deploy to war zones (Yemen, Afghanistan, and Iraq) and served in combat, albeit unofficially. In 2004, Army women, known as Lionesses, were assigned to Marine Corps ground combat units to assist on raids where women and children were present (Moore 2017). The trend of expanding military roles to include women is occurring in most Western countries, as well as in Africa, Asia, and Australia (Moore 2017; Carreiras 2006). This trend has been facilitated by United Nations Security Council Resolution 1325, which has established frameworks to include women in decision-making processes about military operations. A study of women in North Atlantic Treaty Organization (NATO) forces reveals that all NATO countries recruit women on a volunteer basis and that the percentages of military women in these countries are low, less than 15% (Carreiras 2006). The NATO countries with the largest percentages of women in the year 2000 were the United States (14%), Canada (11.4%), France (8.5%), the United Kingdom (8.1%), and the Netherlands (8.0%) (Carreiras 2006: 99). The NATO countries with the lowest percentages of women included Italy (0%), Poland (0.1%), Turkey (0.1%), Germany (1.4%), and Norway (3.2%). In this study, most women served in the air force and the fewest in the army, and most served in administrative support and medical positions (Carreiras 2006). A small fraction of women, 7%, served in combat arms positions (Carreiras 2006).
Sweden's draft system constitutes an interesting case, as both men and women 19 years of age are required to enlist in the military. This gender-neutral conscription system, implemented in 2018, was decades in the making. The policy reflects Sweden's effort to ensure that both women and men fulfill their obligations as citizens. Persson and Sundevall (2019) discuss the heated public debate over implementing gender-neutral conscription in Sweden, beginning with the youth league of the People's Party in 1965. A gender-neutral form of conscription was adopted as early as 2010, but the conservative-liberal government of Sweden proposed a bill (adopted by parliamentary majority) to deactivate it (Persson and Sundevall 2019). In 2017, due largely to perceived threats to national security, a coalition of Social Democrats and the Green Party reactivated conscription. Today, Swedish women serve in all military branches and positions, including combat; but their percentages are low, and women are often provided with inadequate equipment. According to Persson and Sundevall (2019), men comprise 85% of the selected conscripts and 93% of the professional military officers.
In the Ukrainian military, by contrast, a large number of occupations are closed to women. According to Martsenyuk and Grytsenko (2017), approximately 10% of the Ukrainian military are women: 14,500 female soldiers and 30,500 female contract employees of the armed services. Almost 2,000 Ukrainian women are officers, 35 of whom hold managerial positions in the Ministry of Defence. Of the 14,000 people in the National Guard, 21 are women, holding positions as doctors and nurses (Martsenyuk and Grytsenko 2017). Many Ukrainian women serve informally; unless a woman is formally classified as a combatant in the Ukrainian military, she will not receive military benefits when she leaves service. Since 2016, Ukraine has implemented a National Action Plan to increase the number of women in the military and to introduce gender sensitivity training for military personnel in an effort to address gender-based violence. As of 2016, 63 staff positions, including some combat positions (e.g., bomb aimer, gunner, scout, and sniper), were opened to women.
Australia also acknowledges the need to increase the representation of women in the Australian Defence Force (ADF), particularly among senior officers. Lee Hayward (2018) reports that women make up 12% of the Australian Army and are vastly underrepresented in the senior ranks. Although promotion to the senior ranks is based on a merit system, Hayward illustrates that this meritocracy reflects the values and biases of decision-makers who are all male; consequently, the more senior the positions, the more homogeneous they are. Hayward (2018) recommends that the Australian Army recognize the bias inherent in the merit system and introduce new measures to achieve gender equality goals in the senior officer ranks.
The Jordanian Armed Forces (JAF) are relatively small, consisting of 100,000 active duty personnel and 65,000 reservists (Maffey and Smith 2020). Women comprise only 4% (or 3500) of all personnel in the JAF, most of whom serve in medical services (Maffey and Smith 2020). The role of women in the JAF is determined largely by gender norms within the family structure; Maffey and Smith (2020) discuss how these norms can range from traditional and restrictive to progressive and equitable. With the exception of wartime (i.e., the Lebanese war, 1975-1991, and the Algerian War for Independence, 1954-1962), women in the JAF have been relegated to positions in education, health services, and business (Maffey and Smith 2020). Many women in the JAF expressed satisfaction with their military assignments and the opportunity to advance in rank vis-à-vis Jordanian women in the civilian labor force. However, the representation of women in the JAF lags behind that of other countries like Israel, the United States, and Norway. Maffey and Smith (2020) argue that a confluence of cultural, societal, political, and environmental factors impedes the career advancement of women in the JAF. The most prestigious positions are not offered to women, as these occupations would require women to be away from their families for long periods of time (Maffey and Smith 2020). Jordan has recently developed an action plan for increasing the representation of women in the military.
According to an exploratory study of militaries in East Asia, China has a long history of ancient women warriors, including General Fu Hao, who commanded more than 13,000 soldiers around 1250-1192 BC (Obradovic 2015). According to Lana Obradovic (2015), these ancient female warriors are still revered in China today. Women were first officially integrated into the People's Liberation Army (PLA) in 1967 and were recruited from the families of workers, peasants, soldiers, staff, and small merchants (Obradovic 2015). Between 1966 and 1976, during China's Great Proletarian Cultural Revolution, serving in the military was regarded as a privilege for women, and Obradovic (2015) claims that Chinese women in uniform were glorified and were not discriminated against because of their gender. Today, China provides female military recruits with economic incentives (Obradovic 2015). Although little information on women in the Chinese military today is widely published, Obradovic (2015) reports that they are assigned to various occupations, including signals, telegraphy, submarines, space missions, and fighter jet piloting.
The Women's Army Corps in South Korea was officially established during the Korean War (Obradovic 2015). During that time, women served mainly in the medical field as surgeons, dentists, and nurses (Obradovic 2015). Since the 1990s, South Korean women have been fully integrated into all branches of military service; however, their numbers remain relatively low compared to other democratic societies (Obradovic 2015). A reported 10,000 South Korean women serve among a total of 630,000 active duty personnel. The South Korean military is described as having a problem with discrimination against women, sexual harassment, and sexual violence. Violence against women in the South Korean military has been assessed as a "means of reinforcing Confucian culture of gender hierarchy and hegemonic masculinity within the military institution" (Obradovic 2015, p. 10).

Women comprise approximately 24% of the South African National Defence Force (SANDF). They receive the same training as men and have been serving in combat roles for two decades (Heinecken 2017). Still, as illustrated in a recent study, women are not fully accepted in the SANDF, largely because of cultural norms, values, and practices (Heinecken 2017). Data for this study were collected during a Department of Defence (DoD) gender conference in Pretoria, South Africa, in 2012. Most respondents (52%) felt that combat service should be optional for women, while 39% felt that women should be compelled to serve as men are, and 10% believed that women should be barred from combat altogether. Still, the SANDF employs a gender-neutral policy in its effort to manage gender integration. Although norms are changing and the presence of women in various SANDF positions is beginning to be accepted, the service of women is not valued as highly as that of men. Lindy Heinecken argues that a gender-neutral perspective does not result in gender equality, because there are real gender differences and to expect women to perform the same as men is simply unfair.
The United States continues to make strides toward integrating women into the armed services. Recent data from the US Department of Defense reveal that in FY 2017 female representation in the active component (AC) reached its highest level in the history of the US military. US military women now comprise 16% of the enlisted force and 18% of the commissioned officers (DoD 2018b: 6). In FY 2017, female representation among AC accessions with no prior service (NPS) was highest in the Air Force (19.5%), followed by the Navy (19.4%), the Army (14.3%), and the Marine Corps (8.5%) (DoD 2018b: 24). The percentage of racial minority servicewomen is almost double that of racial minority servicemen. As in previous years, the overrepresentation of racial minority women (specifically African American women) in the enlisted force is related to their higher representation in AC NPS accessions as well as their higher retention rates (DoD 2018b). Racial minority women represented 41.4% of female Army accessions in FY 2017, whereas racial minority men represented 26% of male Army accessions (DoD 2018b: 28). Hispanic women are overrepresented in the Marine Corps; however, Hispanics overall are underrepresented in the enlisted active component. The racial minority (or non-white) category is comprised mostly of African Americans. Similar to previous years, servicewomen in FY 2017 were most likely to work in the administrative (25%), medical (14.5%), and supply (14.1%) fields. The top three occupational categories for men, on the other hand, were electrical (21.9%), infantry/gun crews/seamanship (18%), and supply (11.1%) (DoD 2018b: 26).
Women's representation in the active-component officer corps has been steadily increasing in all of the services, with the exception of the Marine Corps, since the onset of the AVF. The representation of women in the officer corps of the Marine Corps has been relatively low but has remained steady. In FY 2017, the Air Force had the highest representation of women in the active component officer corps (21%), followed by the Navy and Army (17.8% each), and then the Marine Corps at 7.7% (DoD 2018b: 32).
Current Issues and Methods of Measuring Them

Low Propensity to Enlist
There are several issues that create obstacles to the full integration of women in the military globally. For the most part, the problem of female integration is rooted in cultural norms. Scholarly studies have addressed and continue to address these issues, which will require further investigation well into the future. Among the many concerns is the low propensity of women to join the military. As mentioned above, the representation of women in the Swedish, Jordanian, Ukrainian, and South Korean militaries is low. The representation of women in the US armed services has increased but is also low: women now comprise approximately 16% of the US active armed services, whereas women make up between 46% and 51% of the civilian labor force, controlling for comparable age and education (college degree for officers) (DoD 2018b). Although 16% is a noticeable increase in the representation of active duty women in recent years, it is much lower than the percentage of women in civilian society; women remain grossly underrepresented in the US armed services. The large representation of racial minorities in the enlisted ranks (particularly in the Army) most likely reflects the lack of adequate job opportunities for African Americans in the civilian labor market, which raises the issue of who bears the burden of national defense.
As mentioned above, gender disparity in military representation is not unique to the United States. Low representation of women in the services can be found in militaries globally (Segal 1993; Carreiras 2006; Martsenyuk and Grytsenko 2017; Persson and Sundevall 2019; Maffey and Smith 2020). Although women are underrepresented in the American armed services, the United States has a larger percentage of women in its military than other NATO countries (Carreiras 2006: 99). This further illustrates that low representation of women in the military is a global phenomenon.
Sex Segregation in Military Occupations
Another key issue is sex segregation in military occupations. Although all military positions in the United States are open to women, women are still concentrated in administrative, medical, and other support jobs (such as supply, electrical, electronics, and communications). This is in contrast to military men, who are more likely to be in infantry, tactical operations, and equipment repair. Women are underrepresented in career-enhancing military positions like infantry. This sex segregation in military occupations is also found in the militaries of other countries (Carreiras 2006: 105). Studies have shown sex segregation in military occupations to be a factor in women not advancing through the ranks as quickly as men, which can ultimately obstruct their career progression (Segal 1982; Carreiras 2006). This issue is also of concern in militaries globally (Martsenyuk and Grytsenko 2017; Maffey and Smith 2020).
Contributing to the low representation of women in male-dominated service jobs is the lack of support they receive from leadership. It is true that the greater presence of women serving in the military challenges the notion that men are more capable than women; however, military women are still expected to work in roles defined as appropriate for females. Recent studies of military academy students found that these budding military leaders are opposed to women serving in nontraditional roles (Laurence et al. 2016; Matthews et al. 2009). Examining survey data from service academy cadets, Reserve Officer Training Corps (ROTC) cadets, and college students, investigators found that service academy cadets who identified as male and Republican had the lowest approval scores, or the least support, for women serving in military roles (Laurence et al. 2016, 2017). Some scholars recommend aggressive training programs for future military leaders (Laurence et al. 2016, 2017).
Sexual Harassment/Assault
Over the last 30 years, sexual harassment/assault has been a primary issue in all of the US armed services and in other countries as well. Although sexual harassment/assault is not unique to the military, it became associated with military organizations in the 1990s as a result of the Navy Tailhook incident (1991), rape charges against male non-commissioned and commissioned officers at Maryland's Aberdeen Proving Ground (1997), and sexual harassment charges against the Sergeant Major of the Army (1998). Since then, there have been a number of sexual harassment/assault charges concerning military personnel globally. In May 2013, the US Department of Defense developed a strategic plan (amended in January 2015) to unify the services in an effort to eliminate sexual assault. The current Sexual Assault Prevention and Response Office (SAPRO) provides oversight, investigates claims, and publishes detailed annual reports on sexual assault involving members of the armed services. Each report presents statistical data and analyses and can be obtained through the following website: https://sapr.mil/reports. Sexual misconduct in the military continues to be examined in the scholarly literature as well. A recently published quantitative study discusses the importance of organizational justice climate in alleviating sexual harassment in the workplace. Using data from two Department of Defense surveys, the 2006 Workplace and Gender Relations Survey (WGRS) and the Defense Equal Opportunity Climate Surveys (DEOCS), Rubino et al. examined the role of organizational justice climate as a predictor of sexual harassment and assessed its potential as a moderator of the already established relationships between antecedents and sexual harassment. Among the findings of this study is that organizations that effectively manage their "justice climate" also deter sexual harassment. The investigators found that psychological and collective justice climate related negatively to sexual harassment and moderated the effects of sex similarity and sexual harassment climate on sexual harassment (Rubino et al. 2018).
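Moderation of the kind tested in that study is conventionally estimated by adding an interaction term to a regression model. The sketch below illustrates the idea with simulated data; the variable names and effect sizes are hypothetical and do not reproduce the surveys' actual measures:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated unit-level data standing in for the survey variables: justice =
# organizational justice climate, sh_climate = sexual harassment climate,
# sh = reported sexual harassment (all scales purely illustrative).
rng = np.random.default_rng(0)
n = 400
justice = rng.normal(0, 1, n)
sh_climate = rng.normal(0, 1, n)
sh = 0.5 * sh_climate - 0.4 * justice - 0.3 * justice * sh_climate + rng.normal(0, 1, n)

df = pd.DataFrame({"sh": sh, "justice": justice, "sh_climate": sh_climate})
# The justice:sh_climate interaction term is the moderation test: does a strong
# justice climate weaken the link between harassment climate and harassment?
model = smf.ols("sh ~ justice * sh_climate", data=df).fit()
print(model.summary().tables[1])

A significant negative interaction coefficient would indicate that a stronger justice climate dampens the effect of a permissive harassment climate, which is the pattern the study reports.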
Other inquiries about sexual harassment/assault are best examined through qualitative analyses. In today's society, sexual misconduct may take various forms other than direct contact with victims. A study of the implications of containing the 2011 Australian Defence Force Academy (ADFA) Skype sex scandal is a good example. According to Habiba (2017), a female cadet of the ADFA went to the media and claimed that, unbeknownst to her, she had been broadcast over Skype via web cam having sexual intercourse with another cadet. The female cadet decided to go public after learning that the accused would face only a minor charge. As a result, the ADFA Skype scandal became the biggest scandal in Australian military history, leading to investigations that unveiled hundreds of other cases of sexual abuse in the Australian Defence Force which had been concealed (Habiba 2017). Using qualitative methods, Boltanski's process theory, and Bourdieu's field theory, Habiba analyzed how the handling of military sexual misconduct cases resulted in the leakage of this information into the public domain, which led to heavy scrutiny of the protagonists as well as of the processes within military organizations (Habiba 2017). By becoming public, the cost of the ADFA Skype sex scandal increased for all stakeholders. The author concluded that, to avoid the negative consequences of scandals, organizations need to address conflicts (sexual or otherwise) in a timely fashion and explore all contributing factors early on.
Escalating Suicide Rates
Another central concern that has surfaced over the last few years is the accelerating suicide rate among military personnel in the United States. Recently, the RAND Corporation published data showing that the suicide rate among women in the military has increased over the last six years at twice the pace of that among male service members.
When compared to civilian women, military service women were two to five times more likely to take their own lives. The data were published in a RAND multimedia Veterans in America podcast on November 11, 2018. Military sexual trauma was found to be the main factor contributing to suicide among service women, alongside combat stress disorder and other factors (see: Gorn 2018). DoD (2018b) reported that service members who died by suicide were typically younger than 30 years of age, enlisted, and male. Other studies show that while men are more likely to die of suicide, women are more likely to attempt to take their lives (DoD 2019). Suicide rates are highest in the Army (24.3 suicides per 100,000 population), followed by the Marine Corps (23.4) and Navy (20.1), and lowest in the Air Force (19.3) (DoD 2019:v).
In 2018, the Department of Defense reported an increase in suicide rates among active duty service members and higher than expected rates in the National Guard compared to the US population (DoD 2018a). Based on these results, the Department of Defense began implementing a multi-faceted public health approach to suicide prevention. Among DoD strategies for suicide prevention is an initiative to help enlisted Service members develop foundational skills to deal with life stressors early in their military career. DoD also supports military families by implementing strategies for them to increase awareness of risk factors for suicide (DoD 2018a). Each of the services (US Air Force, Army, Marine Corps, and Navy) is required to collect data and submit it to DoD for publication in its suicide event report. Every death by suicide and each identified suicide attempt must be reported. Much of the data are statistical, and quantitative analyses are used in the reports, which provide suicide rates and describe various factors associated with instances of suicide for each calendar year (DoD 2018a; DoD 2019).
Scholarly publications on suicide rates among military members focus on a variety of topics, including the impact of confidentiality on disclosure of suicidal thoughts (Anestis and Green 2015) and combat exposure and the risk of suicidal thoughts (Bryan et al. 2015). In a quantitative study, Reimann and Mazuchowski (2018) compared military suicide rates with civilian suicide rates, adjusting for age and sex differences from 2005 to 2014. According to their findings, suicide rates among US active duty service members increased between 2005 and 2009. They also found significantly higher suicide incidence among 17-29-year-old females in 2010, 2012, and 2014. Although these and other studies have been published on suicide rates among military personnel, many more are needed to better understand the causes and to best implement prevention strategies.
A Masculine Culture
Arguably the main issue obstructing full integration of women in the armed services is the persistent culture of masculinity. This factor is articulated in studies of women in the military globally. Other issues, like low propensity to serve, sexual harassment/assault, and gender segregation in occupations, may all be related to a culture that excludes women from full participation. Military men have voiced resentment toward the double standard between men and women in uniform. An early RAND study found that male soldiers were less concerned about gender differences per se and more concerned about structural inequality as it pertained to gender. For the service men who were surveyed, gender issues were not cited as affecting morale as often as were leadership issues. When male respondents raised gender as an issue, they usually objected to a double standard in policies for men and women. "Men . . . tended to assert that women demanded equal rights and recognition within the company but they were not equal in their performance or contribution to the unit. . . . Men claimed that female standards were too easy and that women were not being forced to meet even the lower standards" (see: Harrell and Miller 1997:80). Indeed, female service members were held to different physical fitness standards than men: they were required to do modified pull-ups and fewer push-ups, and they were allowed more time to meet running requirements. A Washington Times article published in 2002 revealed that men at military service academies resented that women were held to a lower standard (Washington Times 2002:20).
There is a debate in social science literature on whether or not women should meet the same standards as men in order to qualify as good military personnel (Moore 2017). Some argue that in order for servicewomen to be successful, they must perform the same roles, and pass the same tests, as male service members (King 2015). For King (2015), military women must be honorary men if they are to be respected in their military jobs. Others argue that women are not the same physiologically as men and do not aspire to be biologically equal to men (Brownson 2014, 2016). Brownson rejects King's assessment, arguing instead that service women aspire to a kinship-reciprocity ideal of equivalency, in which they are valued for the contributions they bring to the exchange (Brownson 2016).
Indeed, research confirms that men and women are physiologically different. One such study, using data from DoD's Armed Forces Health Surveillance Center, examined gender-by-race differences in self-reported post-traumatic stress disorder (PTSD). Using a logistic regression analysis, the investigators found that greater combat exposure was associated with a higher risk of PTSD for service women compared to service men (Mustillo and Kysar-Moon 2017). Another research question raised by these investigators was whether or not Black female service members are at greater risk of experiencing PTSD following traumatic combat exposure than White female service members. They found no difference between Black and White service women: the results of this study show that Black service women do not have a greater risk for PTSD than do White service women. However, service women are more vulnerable to traumatic stress exposure than are men (Mustillo and Kysar-Moon 2017). The issue of physiological differences between males and females merits continued discussion if women are to be fully integrated into the military.
Some Additional Concerns
The topics mentioned above are only a few of the gender issues that will follow us well into the twenty-first century. Some issues pertaining to gender integration that were not discussed in this chapter but nonetheless merit further investigation include those associated with pregnancy, obstetrics, and childcare. Strategic plans to strengthen family care plans for single parents, making them more deployable, warrant serious discussion (Booth et al. 2007). Ways to provide servicewomen with properly designed and fitted combat equipment also warrant careful exploration.
The goal of gender inclusivity is complicated, and there are a number of normative and practical issues yet to be resolved. Issues pertaining to women in the military are the focus of several committees in the nation's capital. One such committee is the Defense Advisory Committee on Women in the Services (DACOWITS), which reports to the Secretary of Defense on matters relating to women in the US military. The Committee consists of qualified professional women in the US armed forces. They make recommendations on policies relating to recruitment and retention, employment, integration, well-being, and treatment. Each year, DACOWITS publishes minutes of its meetings as well as a report of its research findings and recommendations. These data can be obtained through the following website: https://dacowits.defense.gov/Reports-Meetings/.
Among the issues addressed in their FY 2018 report are variance in women's recruitment and retention by race/ethnicity; conscious and unconscious gender bias in the services; underrepresentation of female chaplains; revised physical fitness tests accounting for physiological gender differences; gender integration of women on ships; pregnancy and parenthood policies; and domestic violence affecting servicewomen (see: DACOWITS 2018).
Summary and Concluding Remarks
The issue of whether or not women should serve in combat is less about a woman's ability to do so and more about cultural norms and gender roles. To be sure, women have filled the role of combatants throughout history. During times of conflict, nations have drafted and recruited the services of women. During times of peace, the service of women has fallen into obscurity. A number of structural changes that have occurred over the last four decades in an effort to remove institutional barriers to the integration of women in the military globally have been reviewed. In the United States, opportunities for women to be assigned to occupations that had previously been closed to them were ushered in with the All-Volunteer Force. Doors to aviation duty in noncombat aircraft, as well as noncombatant ships, opened to women. Service academies began to enroll women. The 1980s witnessed even more changes in Congressional laws and military policies, moving closer to integrating women in the services. Not only are there more occupational positions available to US military women, but more family support services, such as medical care and childcare, are provided for service members. Quality of life concerns, such as living facilities, have been addressed by each of the services in an effort to attract good women as well as good men. Although the constraints that categorically excluded women from the armed services have been removed, women are still vastly underrepresented.
A question remains: why should societal members concern themselves with the representation of women in the military? What is really at issue? To answer that question, one must consider the fact that throughout history, the militaries of Western democracies (and elsewhere) have been male-dominated institutions based on a culture of masculinity (Enloe 1981). Traditionally, service in the military was both a right and an obligation of citizens (Marshall 1950; Janowitz 1975), and full citizenship with all of the accompanying rights was reserved exclusively for men. If women are to be first-class citizens, then it follows that they must actively participate in national defense and peacekeeping efforts.
▶ Dynamic Intersection on Military and Society
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 2020-06-25T09:07:33.689Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "ebf925d23146da2b77292d8c03c93e6e8f5d8952",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-02866-4_80-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "0f7a9b79fa5d00d44103cf418a03ab3b18c19c0e",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
109499405 | pes2o/s2orc | v3-fos-license | Scale-adaptive simulation of a hot jet in cross flow
The simulation of a hot jet in cross flow is of crucial interest for the aircraft industry, as it directly impacts aircraft safety and overall performance. Due to the highly transient and turbulent character of this flow, simulation strategies are necessary that resolve at least a part of the turbulence spectrum. The high Reynolds numbers of realistic aircraft applications do not permit the use of pure Large Eddy Simulations, as the spatial and temporal resolution requirements for wall-bounded flows are prohibitive in an industrial design process. For this reason, the hybrid approach of the Scale-Adaptive Simulation is employed, which retains attached boundary layers in a well-established RANS regime and allows the resolution of turbulent fluctuations in areas with sufficient flow instabilities and grid refinement. To evaluate the influence of the underlying numerical grid, three meshing strategies are investigated and the results are validated against experimental data.
Introduction
Jets in cross flow have been studied both experimentally and numerically because of their frequent occurrence in technical applications. A special area of interest is auxiliary air system outlets on aircraft, such as discharge locations of anti-icing systems. A generic system is illustrated in figure 1, where hot air circulates inside the nacelle's lip to prevent the formation of ice on the leading edge. The air is collected in a scoop and then blown out through an ejector grid into the main flow, where it interacts with the downstream structural components. As high temperatures can damage composite parts of the nacelle, special care has to be taken in designing the system and a proper simulation of the flow and temperature field is required. Due to the appearance of large-scale turbulent structures and the inherent dynamics of the jet, RANS simulations using standard statistical two-equation turbulence models fail to predict the correct time-averaged quantities. Accounting for the anisotropy of turbulence by employing higher-order turbulence closure models does not yield considerably improved results (Acharya et al., 2001). Even though Large Eddy Simulations of jets in cross flow can be found in the literature, e.g. (Yuan et al., 1999), they usually concentrate on small Reynolds numbers and reduced computational domains. Since currently available computational resources impose restrictions on temporal and spatial resolution, Large Eddy Simulations of wall-bounded, high-Reynolds-number flows will not be feasible within the near future. Additionally, as only a local resolution of turbulence scales is required and RANS capabilities should be retained for attached boundary layers, the Scale-Adaptive Simulation (SAS) (Egorov et al., 2010) is investigated for the considered case. For turbulence model validation, the influence of different mesh types on scale resolution, dynamical behavior of the jet, and time statistics is investigated.
Test case description
Albugues (2005) experimentally investigated a generic jet in cross flow configuration, which is based on the nacelle anti-icing system exhaust and shown in figure 2. The configuration consists of two pipes feeding hot air symmetrically into a plenum, which is integrated inside a three-dimensional airfoil with a pressure distribution characteristic of a nacelle. As the hot air exits the plenum through a square-shaped ejector, a jet in cross flow forms on the upper side of the wing. To characterize the developing flow field, a cross flow Reynolds number Re_cf = U∞D/ν∞ is built using the free stream velocity of the cross flow U∞, its kinematic viscosity ν∞, and a characteristic length of the ejector D, which in this case is the square's edge length. The large value of Re_cf = 90000 implies the broad range of turbulent scales that appear in the jet and cross flow interaction region and therefore highlights the necessity of proper temporal and spatial resolution. Another important similarity parameter is the blowing ratio C_R = (ρ_j U_j)/(ρ∞ U∞) (Callaghan & Ruggeri, 1948), which quantifies the momentum ratio of jet and cross flow. The small value of C_R = 0.69 characterizes an attached jet wake, which consequently leads to a strong thermal impact on the wall behind the orifice. The temperature difference between jet and cross flow for the wind tunnel set-up is ΔT = 62 K, which can be used to build a cross flow Richardson number Ri_cf = gβDΔT/U∞^2 to characterize the ratio of free to forced convection, with acceleration through gravity g and thermal expansion coefficient β. As Ri_cf ≪ 1 for the considered case, buoyancy effects can be neglected and temperature is a passive scalar.
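To make these similarity parameters concrete, the short Python sketch below evaluates them from the quantities quoted in the text. The kinematic viscosity of air and the ideal-gas expansion coefficient are assumed values (they are not stated in the paper), and the ejector edge length D is back-calculated from the quoted Reynolds number.

```python
# Hedged sketch: similarity parameters of the jet-in-cross-flow test case.
U_inf = 47.2        # cross-flow velocity [m/s] (from the text)
Re_cf = 90_000      # cross-flow Reynolds number (from the text)
dT = 62.0           # jet/cross-flow temperature difference [K] (from the text)
nu_inf = 1.5e-5     # kinematic viscosity of air at ~291 K [m^2/s] -- assumption
beta = 1.0 / 291.0  # ideal-gas thermal expansion coefficient [1/K] -- assumption
g = 9.81            # gravitational acceleration [m/s^2]

# Ejector edge length implied by Re_cf = U_inf * D / nu_inf:
D = Re_cf * nu_inf / U_inf                      # ~0.029 m

# Blowing ratio C_R = (rho_j * U_j) / (rho_inf * U_inf); the paper quotes 0.69.
def blowing_ratio(rho_j, U_j, rho_inf, U_inf):
    return (rho_j * U_j) / (rho_inf * U_inf)

# Richardson number Ri_cf = g*beta*D*dT / U_inf^2 (free vs. forced convection):
Ri_cf = g * beta * D * dT / U_inf**2
print(f"D = {D:.4f} m, Ri_cf = {Ri_cf:.1e}")    # Ri_cf << 1 -> buoyancy negligible
```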
Turbulence modeling
The background of the SAS turbulence model is the transport equation for kL, with k being the turbulence kinetic energy and L the integral length scale of turbulence. As this correlation equation is exact, term-by-term modeling leads to a more rigorous approach, finally introducing the von Kármán length scale L_vK into the transport equation. Transforming the k-kL model into the k-ω SST framework (Menter, 1994), an additional source term Q_SAS enters the turbulence scale equation for the specific dissipation rate ω.
This source is activated when the flow exhibits sufficient inherent instabilities and the numerical mesh is sufficiently refined. Consequently, the local eddy viscosity is reduced allowing a resolution of turbulent fluctuations. As the model is based on the k − ω SST turbulence model, boundary layers are simulated using its RANS capabilities including automatic wall treatment.
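To illustrate the length scale that distinguishes the SAS approach from standard two-equation models, the sketch below evaluates a one-dimensional analogue of the von Kármán length scale from a discrete velocity profile. The SAS model itself uses a three-dimensional generalization (a strain-rate measure divided by the norm of the velocity Laplacian), so this is a conceptual sketch rather than the model's actual implementation.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def von_karman_length_1d(u, y):
    """1-D analogue L_vK = kappa * |dU/dy| / |d2U/dy2| on a discrete profile.

    L_vK shrinks where the velocity profile is strongly curved, which is
    what allows the SAS source term to activate in unsteady, resolved regions.
    """
    dudy = np.gradient(u, y)
    d2udy2 = np.gradient(dudy, y)
    eps = 1e-30  # guard against division by zero in nearly linear regions
    return KAPPA * np.abs(dudy) / (np.abs(d2udy2) + eps)
```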
Meshing strategies
As the utilized turbulence model requires meshes with the ability to locally resolve turbulent fluctuations and to properly resolve boundary layers, different meshing strategies are considered: a) pure hexahedral, b) hybrid tetrahedral and c) hybrid Cartesian mesh, which are illustrated in figure 3. The first mesh is based on a multi-block approach, which allows very accurate boundary layer resolution and smooth transition inside the volume. The downside, however, is that mesh refinement cannot be kept local, thereby increasing the number of cells in areas where it is not needed. The second approach is a hybrid strategy combining prismatic cell layers to accurately resolve the boundary layer and tetrahedral cells in areas away from walls. This allows on the one hand a local refinement confined to the desired areas but on the other hand yields a highly increased number of cells in these areas compared to hexahedral elements. The last approach is also a hybrid strategy, which employs a prismatic and hexahedral inflation layer for boundary layer resolution and a Cartesian mesh in the open domain. Tetrahedral and pyramidal cells are used for the transition from the inflation layer to Cartesian cells. The advantages of this method lie in the use of mostly hexahedral elements and locally confined grid refinement through the use of hanging nodes with the ratio 2:1. As on the one hand attached boundary layers will be kept in RANS regime and a heat transfer calculation is considered, the near-wall mesh requires a non-dimensional wall distance y+ smaller than one. On the other hand, recalling the ability of the SAS model to resolve turbulent scales, the question of the required temporal and spatial resolution in the jet interaction region needs to be considered. Therefore, an estimation of the large, energy-containing and geometry-dependent vortices is necessary, which can be achieved by recalling the idea of the energy cascade and, respectively, the definition of the inertial subrange. For a jet in cross flow, the size l_0 of large eddies is in the same order of magnitude as the jet diameter D, and their characteristic velocity u_0 is in the order of U∞. As stated by Pope (2000), the demarcation size l_d between geometry-dependent vortices and those within the inertial subrange can be defined as l_d = l_0/6. A characteristic time t_d can then be estimated by t_d = l_d/u_0, and the numerical time step Δt is chosen to be smaller than t_d. Once the time step Δt has been specified, the corresponding grid spacing Δx can be estimated through Δx = u_0 Δt/2, with 2Δx representing the size of the smallest resolvable vortices with the characteristic time Δt. These choices ensure the spatial and temporal resolution of all energy-containing and geometry-dependent vortices, whereas the more universal turbulent fluctuations within the inertial subrange will be accounted for by the statistical turbulence model. The meshes have been created using ANSYS 13 Meshing applications and the resulting statistics are shown in table 1. The number of inflation layers for the hybrid approaches is 20, with the same height for the wall-adjacent cell and the same expansion ratio (~1.2) within the boundary layer as for the hexahedral mesh.

Table 1. Mesh statistics (total number of cells, followed by further mesh metrics):
Mesh a) | 12.9·10^6 | 28.1 | 3500 | 10
Mesh b) | 21.0·10^6 | 20.0 | 7600 | 8
Mesh c) | 13.1·10^6 | 6.0 | 6000 | 16
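The resolution estimate above can be condensed into a short helper. The factor of 0.5 applied to t_d below is an assumed safety margin (the text only requires Δt < t_d), but with it the estimate reproduces the physical time step Δt = 5·10^-5 s actually used in the simulations.

```python
def sas_resolution_estimate(D, U_inf, safety=0.5):
    """Time step and grid spacing estimate for the jet interaction region.

    Large eddies: l0 ~ D with velocity u0 ~ U_inf; the inertial subrange
    starts at l_d = l0/6 (Pope, 2000) with time scale t_d = l_d/u0.
    The smallest resolved vortex has size 2*dx, hence dx = u0*dt/2.
    """
    l0, u0 = D, U_inf
    t_d = (l0 / 6.0) / u0
    dt = safety * t_d          # choose dt < t_d; 'safety' factor is an assumption
    dx = u0 * dt / 2.0
    return dt, dx

dt, dx = sas_resolution_estimate(D=0.0286, U_inf=47.2)
print(f"dt = {dt:.1e} s, dx = {dx:.1e} m")   # dt ~ 5e-5 s, dx ~ 1.2e-3 m
```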
Numerical scheme and boundary conditions
The CFD solver ANSYS FLUENT 13 is used to solve the resulting set of equations with a pressure-based segregated algorithm, where the SIMPLEC algorithm (Vandoormaal & Raithby, 1984) ensures pressure-velocity coupling. A bounded central differencing scheme is used to discretize convective fluxes, whereas an implicit second-order central difference scheme is employed for temporal discretization with a physical time step size Δt = 5·10^-5 s. Double precision is needed for numerical accuracy since all three meshes have high-aspect-ratio cells for boundary layer resolution. Computational domain and boundary conditions are chosen to match the experimental set-up. At the domain inlet, a uniform cross flow velocity U∞ = 47.2 m/s at T∞ = 291 K is specified, and the mass flow rate at each pipe is fixed to ṁ = 0.01771 kg/s with a temperature T_j = 353 K. At the domain outlet, a constant pressure of p = 101325 Pa is specified. Boundary conditions for turbulence include an eddy viscosity ratio of 10 and a turbulence intensity of 0.5% for both inlet types. All walls of the generic configuration are adiabatic, no-slip walls. Considering the small cross flow Mach number and to avoid pressure reflections, an incompressible ideal gas law is used, where density is a function of temperature only. Sutherland's law is employed to account for the temperature influence on viscosity.
Results and validation
Instantaneous iso-surfaces of the Q-criterion (Hunt et al., 1988) are illustrated in figure 4 in order to judge scale resolvability. All three meshing strategies allow the resolution of turbulent fluctuations at different length scales. As the element edge length is identical for tetrahedral and Cartesian cells in the wake, more and finer structures are resolved for the hybrid tetrahedral approach. A horseshoe vortex, which is a characteristic flow feature of a jet in cross flow, can be found upstream of the ejector and is most pronounced for the hybrid tetrahedral and hybrid Cartesian meshes. Additionally, hairpin-like vortices form in the wake, which have also been described for jets in cross flow at low blowing ratios (Fric & Roshko, 1994). Time statistics have been collected for a total of 7000 time steps. As the mean surface temperature behind the ejector is of prime interest, the thermal efficiency η = (T − T∞)/ΔT is plotted along the symmetry line and compared to experimental data in figure 5a). In contrast to the far field, where only small differences are visible, the near field shows a stronger mesh dependence. The best agreement is achieved with the hybrid tetrahedral mesh, while the hexahedral mesh follows the trend of the experimental data with a slight underestimation. The hybrid Cartesian mesh shows a stronger temperature gradient, which can be explained by the larger cell size leading to less scale resolution and hence less thermal mixing. Figure 5b) shows the lateral distribution of η at a location X/D = 8 downstream of the ejector. The thermal spreading is generally in good agreement with experimental data, and only the solution on the hexahedral mesh shows a tendency to underestimate the overall temperature distribution. As the fluctuating field of transient simulations needs to be validated too, root mean square values of the X-velocity component along a wall-normal line are shown in figure 6a). Even though the maximum value is underestimated, a qualitatively good agreement is achieved with only minor dependence on the mesh. A sample power spectral density for the Y-velocity component in the jet wake is illustrated in figure 6b). The dominant frequency at St_D = fD/U∞ = 0.14 is predicted well for all three meshing strategies, but with a stronger overestimation of the peak for meshes a) and b). Results from a standard unsteady RANS calculation using the k-ω SST turbulence model on the hexahedral mesh have been included in these figures to highlight the necessity of scale resolution for a correct simulation of a jet in cross flow. Figure 5. Thermal efficiency: Exp ♦, SAS mesh a) · · · · · ·, SAS mesh b) ----, SAS mesh c) -· -, URANS mesh a) -· · -
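For reference, a dominant Strouhal number such as the St_D = 0.14 peak reported above could be extracted from a wake probe signal with a standard Welch estimate. This sketch assumes a uniformly sampled signal and is not the post-processing actually used in the study.

```python
import numpy as np
from scipy.signal import welch

def dominant_strouhal(v, dt, D, U_inf):
    """Dominant Strouhal number St_D = f*D/U_inf of a velocity time series.

    v  : Y-velocity samples at a probe in the jet wake
    dt : sampling interval, e.g. the physical time step of 5e-5 s
    """
    f, psd = welch(v, fs=1.0 / dt, nperseg=min(len(v), 2048))
    f_peak = f[np.argmax(psd[1:]) + 1]   # skip the zero-frequency bin
    return f_peak * D / U_inf
```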
Summary
Scale-Adaptive Simulations have been carried out for a hot jet in cross flow at a high Reynolds number and a low blowing ratio. It was shown that all applied meshing strategies allow the local resolution of turbulent scales. Due to the characteristics of the turbulence model, more and finer turbulent structures are resolved as the cell size decreases. A good agreement with experimental | 2019-04-12T13:56:56.911Z | 2011-12-22T00:00:00.000 | {
"year": 2011,
"sha1": "63932dd5d179ecf32d5be2a2092b2938514f0428",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/318/4/042050",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "89b10704a0f38e959413bfc81147b442c39a5117",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
269039998 | pes2o/s2orc | v3-fos-license | Stability of Proximal Femoral Osteotomies in Pediatric Bone Models Fixed with Flexible Intramedullary Nails and Evaluated by the Finite Element Method
Objective To evaluate the stability of osteotomies created in the subtrochanteric and trochanteric regions in a pediatric femur model fixed with flexible intramedullary rods. Method Tomographic sections were obtained from a pediatric femur model instrumented with two elastic titanium rods and converted to a three-dimensional model. From this model, a mesh with tetrahedral elements was created according to the finite element method. Three virtual models were obtained, and osteotomies were performed in different regions: mediodiaphyseal, subtrochanteric, and trochanteric. A vertical load of 85 N was applied to the top of the femoral head, and the displacements, the maximum and minimum principal stresses, and the equivalent Von Mises stress on the implant were obtained. Results With the applied load, displacements at the osteotomy site of 0.04 mm in the diaphyseal group, 0.5 mm in the subtrochanteric group, and 0.06 mm in the trochanteric group were observed. The maximum stress in the diaphyseal, subtrochanteric, and trochanteric groups was 10.4 Pa, 7.52 Pa, and 26.4 Pa, respectively; that is, about 150% higher in the trochanteric group than in the diaphyseal (control) group. The minimum stress in the bone was located in the inner cortex of the femur. The equivalent Von Mises stress on the implants occurred at the osteotomy, with a maximum value of 27.6 Pa in the trochanteric group. Conclusion In both trochanteric and subtrochanteric osteotomies, fixation stability was often lower than in the diaphyseal model, suggesting that flexible intramedullary nails are not suitable implants for proximal femoral fixation.
Introduction
Intramedullary elastic fixation is a reliable and effective option for treating fractures of the pediatric femoral diaphysis.[1,2,4-6] Under these conditions, it is of interest to simulate the mechanical behavior of an implant in order to anticipate how it will behave under clinical conditions.
There are usually two methods of evaluating the mechanical behavior of bone and implants: direct experimental techniques (or mechanical methods) and mathematical models. However, direct experimental techniques have disadvantages, being prone to errors and inaccuracies.[7] The Finite Element Method (FEM) is a powerful tool initially developed in the 1950s and widely accepted after investments in technology by the National Aeronautics and Space Administration (NASA).[8] In the field of Engineering, this method is used to solve problems such as stress analysis, fluid flow, electromagnetism, and heat transfer using computer models.[8] In the Medical field, especially in Orthopedics and Biomechanics, the first records of the application of the FEM date back to the 1970s, when estimates of the ability of different types of tests to predict the mechanical behavior of bones were carried out.[7,9] Through the FEM, it is possible to accurately represent complex geometries and incorporate the different properties of materials, allowing the application of loads at specific points of the structure. This way, it is possible to obtain information about the maximum and minimum stresses and deformations.[7] Therefore, the FEM is used to accurately predict the response of an implant when subjected to a variety of loads, in addition to incorporating the effect of the interfaces between the implant and the bone.[7,10,11] The objective of this study was to evaluate the stability provided by two flexible intramedullary rods in simulations of fractures located in the subtrochanteric, trochanteric, and diaphyseal regions created in a pediatric femur model using the finite element method.
Material and Methods
This is a laboratory study using artificial bone models; therefore, approval of the project by the Institutional Research Committee is waived. An infant femur model with dimensions corresponding to a 9-year-old child (Sawbone Inc., Pacific Research Laboratories Inc., WA, United States) was used. This synthetic bone has mechanical properties similar to human bone.[12,13] The preparation of the specimen was described earlier.[6] In summary, two flexible titanium rods (Titanium Elastic Nail - TEN®, TiGa 114v, DePuy Synthes®, Oberdorf, Switzerland) with a diameter of 3.5 mm were inserted retrograde into the medullary canal. Radiographs were obtained to confirm proper positioning, followed by computed tomography of the entire bone model, archived in the DICOM (Digital Imaging and Communications in Medicine) format. Computed tomography was completed using a Siemens® 16-channel Emotion tomograph (Erlangen, Germany), with a resolution of 512 × 512 and a slice spacing of 1.0 mm. The DICOM files were imported into the InVesalius® program (free software of the Renato Archer Information Technology Center, Campinas, São Paulo, Brazil), which enabled the generation of segmented models of the imported anatomical system for the three-dimensional (3D) reconstruction of the anatomical structure. Once the volumetric object reconstructed in three dimensions was obtained, the software allowed the export of the file in the Standard Triangle Language (STL) format.
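A minimal sketch of the image-to-surface part of this workflow is given below, using pydicom and scikit-image as stand-ins for the interactive InVesalius segmentation (an assumption; the threshold value in particular is illustrative and operates on raw pixel values rather than calibrated Hounsfield units).

```python
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure

def load_ct_volume(dicom_dir):
    """Stack single-slice DICOM files into a 3-D intensity volume."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    # Order slices along the scan axis using their patient position:
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])

def bone_surface(volume, threshold=300):
    """Triangulated bone surface (marching cubes), as would be exported
    to STL for the subsequent remodelling step in Rhinoceros."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces
```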
The Rhinoceros® program (Robert McNeel & Associates, Seattle, WA, United States), version 6, generated virtual 3D models of each bone-rod set. To obtain a more accurate and faithful contour, we made adjustments to the resulting intersection lines. These lines were drawn considering the region under study and may contain variations in the number of points according to the need for detail in the area in question. Then, these lines were intersected and trimmed, forming sets of three or four lines. Each set allowed the generation of a three-dimensional surface.
The FEM analysis was conducted with the SimLab® program (HyperWorks, Troy, MI, USA), using the OptiStruct solver.
To simulate the fractures, osteotomies were performed in the virtual models at three levels: a cut at the level of the lesser trochanter (trochanteric group), a cut located 3.5 cm distal to the lesser trochanter (subtrochanteric group), and a cut in the central region of the diaphysis (mediodiaphyseal, or control, group). Tetrahedral elements were used for meshing, and the number of nodes was defined. In the virtual environment, a load of 85.0 N was applied to the top of the femoral head in the vertical direction, and the corresponding deformations and stresses were obtained.
For the simulations, it was necessary to know and define the material properties of each of the digital models' parts, namely cortical bone, spongy bone, and titanium alloy (TiGa114v).The properties of the materials used for the simulations are presented in ►Chart 1.
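For context, linear-elastic material definitions of the kind listed in ►Chart 1 can be expressed as below. The numeric values are typical literature figures for synthetic cortical bone, cancellous bone, and titanium alloy, not the actual values of Chart 1.

```python
from dataclasses import dataclass

@dataclass
class LinearElastic:
    """Isotropic linear-elastic material used for a part of the FEM model."""
    name: str
    E_mpa: float   # Young's modulus [MPa] -- illustrative, not Chart 1 values
    nu: float      # Poisson's ratio

MATERIALS = [
    LinearElastic("cortical bone", E_mpa=16_000.0, nu=0.30),
    LinearElastic("cancellous (spongy) bone", E_mpa=150.0, nu=0.30),
    LinearElastic("titanium alloy (Ti-6Al-4V)", E_mpa=110_000.0, nu=0.34),
]
```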
Results
With a loading of 85.0 N, the following displacements were obtained at the osteotomy simulation site: 0.04 mm in the control group, 0.5 mm in the subtrochanteric group, and 0.06 mm in the trochanteric group.
The greatest areas of stress were identified in the lateral cortex of the femur and in the upper region of the neck. The maximum principal stress reached 10.4 Pa, 7.52 Pa, and 26.4 Pa in the control, subtrochanteric, and trochanteric groups, respectively (►Fig. 1).
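As a worked check of how principal stresses relate to the equivalent stress reported for the implants, the standard Von Mises formula can be applied directly; the uniaxial example below is illustrative only and does not reproduce the study's full stress states.

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent Von Mises stress from the three principal stresses:
    sigma_vm = sqrt(((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2)."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# In a purely uniaxial state (s2 = s3 = 0), sigma_vm equals |s1|, so a peak
# principal stress of 26.4 maps to an equivalent stress of the same order.
print(von_mises(26.4, 0.0, 0.0))  # 26.4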
Discussion
This results in increased stress between the fragments, which become more dependent on the stabilizing effect of the implant.[15,16]
Therefore, the fixation must counteract the mechanical moments generated by local forces, providing adequate stability to maintain the reduction and allow consolidation. Thus, elastic rods may not meet these criteria, as already shown by clinical reports[3] and mechanical tests,[6] and are not indicated for fractures in the most proximal regions.
To study the stability of the bone-implant model set, we used the FEM, which simulates and verifies the distribution of stresses and displacements by solving equilibrium equations under load.[17] To apply the methodology, it was necessary to use a fracture model represented by an oblique osteotomy. The FEM provides the theoretical and mathematical substrates; however, in the case of fractures, it is applied to an idealized model. Therefore, it has the inconvenience of not taking into account many characteristics of the fracture, such as irregularities and different inclinations of the fracture line, in addition to the possibility of more than one fragment. Moreover, it does not consider the action of soft tissues in stabilizing or destabilizing the fracture. This limitation is inherent to the method; however, even with all this simplification, it is very useful in preclinical evaluations, for example of implant development, being advantageous in terms of cost, time, and the ethics of research with human beings. Simplifications and restrictions also occur in studies in Engineering and other Exact Sciences.
[19][20][21][22] The FEM, because it is non-invasive, provides important biomechanical information, assists in the development of orthopedic devices, and has been used more widely in models of adult anatomical structures, including simulations of the fixation of unstable subtrochanteric fractures.[20,23,24] Wang et al.[20] evaluated the biomechanical performance of three implants for treating unstable subtrochanteric fractures in adults using the FEM and observed that the proximal femoral nail was more stable than the locked nail and the LISS (Less Invasive Stabilization System) plate.

Fig. 2 Reconstruction of the proximal region of the femur, the osteotomy section, and the flexible rods. The rods without the bone contour are shown in detail at the side, illustrating the higher concentration of equivalent Von Mises stress in the osteotomy region (areas in red; critical region). If implant failure occurs, it will be at this level, leading to loss of reduction.
Our results showed that the most proximal osteotomy (trochanteric) presented the highest maximum stress and the highest equivalent Von Mises stress, which indicates that the mechanical demand on the implant is higher at this site than in the other two groups. In addition, since the greatest equivalent Von Mises stress occurs at the osteotomy sites, it can be seen that the implants act as internal splints and protect the fracture. This was also observed in the study by Soni et al.,[25] who performed a two-dimensional FEM simulation of femoral fractures in children to evaluate the effectiveness of flexible rods made of steel or titanium.
In this study, when loading was applied, the regions of greatest stress were in the lateral cortex of the femur and the upper region of the neck. These results show that, under load, the trochanteric cut presented a 153% higher stress demand than the control (mediodiaphyseal cut).
However, the displacement of the fragments at the osteotomy site was very small in all groups, which can be attributed to the low loading (85.0 N) applied to the systems. This value was selected after considering the mass of the unloaded lower limb of a 10-year-old child (approximately 8.5 kg),[26] since intentional weight-bearing is not recommended clinically in the early postoperative phase. Additionally, this loading restricted the deformation to the elastic phase of the implants; that is, no irreversible deformation would occur clinically. If this limit is exceeded, there will be permanent deformation of the implant and loss of fracture reduction.
Conclusions
For osteotomies in the trochanteric and subtrochanteric regions, there is greater mechanical demand on the implant, which may exceed the stabilization limits of the flexible intramedullary nails. Thus, clinically, this type of implant should be indicated in the classic situations for which it was designed, that is, in fractures of the diaphyseal region of the femur.
Financial Support
The authors state that they received no financial support from public, commercial, or non-profit sources for this study.
Fig. 1 Distribution of the areas of maximum stress in the proximal regions of the femur in the simulations of the three types of osteotomies. A - Diaphyseal osteotomy, B - Subtrochanteric osteotomy, C - Trochanteric osteotomy. The red colors represent the areas of greatest stress. | 2024-04-12T05:18:46.614Z | 2023-08-29T00:00:00.000 | {
"year": 2024,
"sha1": "23f89c1cb8f9ff680114cff12d044145c129a6d6",
"oa_license": "CCBY",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0044-1785467.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23f89c1cb8f9ff680114cff12d044145c129a6d6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254535085 | pes2o/s2orc | v3-fos-license | A scoping review of methods to measure and evaluate citizen engagement in health research
Background Citizen engagement, or partnering with interested members of the public in health research, is becoming more common. While ongoing assessment of citizen engagement practices is considered important to its success, there is little clarity around aspects of citizen engagement that are important to assess (i.e., what to look for) and methods to assess (i.e., how to measure and/or evaluate) citizen engagement in health research. Methods In this scoping review, we included peer-reviewed literature that focused primarily on method(s) to measure and/or evaluate citizen engagement in health research. Independently and in duplicate, we completed title and abstract screening and full-text screening and extracted data including document characteristics, citizen engagement definitions and goals, and methods to measure or evaluate citizen engagement (including characteristics of these methods). Results Our search yielded 16,762 records, of which 33 records (31 peer-reviewed articles, one government report, one conference proceeding) met our inclusion criteria. Studies discussed engaging citizens (i.e., patients [n = 16], members of the public [n = 7], service users/consumers [n = 4], individuals from specific disease groups [n = 3]) in research processes. Reported methods of citizen engagement measurement and evaluation included frameworks, discussion-based methods (i.e., focus groups, interviews), survey-based methods (e.g., audits, questionnaires), and other methods (e.g., observation, prioritization tasks). Methods to measure and evaluate citizen engagement commonly focused on collecting perceptions of citizens and researchers on aspects of citizen engagement including empowerment, impact, respect, support, and value. Discussion and conclusion We found that methods to measure and/or evaluate citizen engagement in health research vary widely but share some similarities in the aspects of citizen engagement considered important to measure or evaluate. These aspects could be used to devise a more standardized, modifiable, and widely applicable framework for measuring and evaluating citizen engagement in research. Patient or public contribution Two citizen team members were involved as equal partners in study design and interpretation of its findings. Systematic review registration Open Science Framework (10.17605/OSF.IO/HZCBR). Supplementary Information The online version contains supplementary material available at 10.1186/s40900-022-00405-2.
Plain English Summary
Involving members of the public (citizens) in health research is important. It helps make sure that research focuses on issues that are most important to citizens. It also helps ensure that the research done is respectful of citizen participation and most likely to provide benefit. However, the best way to engage citizens in research is unclear. In this scoping review, we examined existing studies that assessed citizen engagement in health research. We found that citizen engagement was often assessed by asking for feedback from both citizens and researchers. Feedback was collected in person (one on one interviews or group discussions) or in writing (using surveys or audits). Frameworks (organized ways of thinking about an issue) were also sometimes used to measure empowerment, impact, respect, support, and value of engaging citizens. It was clear from the frameworks that there is a need to develop clearer roles for citizens in research. The two citizen members of our research team who helped interpret our study findings felt that a set of guidelines for citizens to help them best participate in health research needs to be developed. We believe these observations could be used to create a more standard method for assessing citizen engagement in research.
Background
Citizen engagement in health research is an increasingly common approach to conducting biomedical, clinical, health system and services, social, cultural, environmental, and population health research with citizens as collaborators rather than subjects [1,2]. Often referred to using diverse terminology (e.g., community based participatory research, public participation, patient and public involvement), citizen engagement recognizes citizens, defined as interested or affected members of the general public including patients, caregivers, advocates, or representatives of the community, as "knowledge users", or individuals who are affected by the processes and results of health research and can use their lived experience to make research more relevant and useful [3,4]. Specifically, citizen engagement encompasses meaningful involvement of citizens in various aspects of the health research process, such as: membership in advisory groups or steering committees for priority-setting, co-application on funding grants, and research planning, decision-making, conduct, implementation, evaluation, and dissemination [3][4][5].
Engaging citizens in research has the potential to improve the relevance of study findings, minimize waste by facilitating stewardship over resources, create mutual learning and understanding, and build trust in research findings by improving relationships between communities and researchers [6,7]. Additionally, citizen engagement has shown the ability to provide individuals with opportunities to acquire new skills and knowledge, enjoyment and satisfaction through support and friendship, and financial rewards to compensate their efforts [8]. Due to these documented benefits of citizen engagement, national funding bodies and healthcare organizations worldwide encourage and sometimes mandate citizen engagement in research design and practice [9,10].
Despite the push by funding bodies towards incorporating citizen engagement in research, citizen engagement is often tokenistic and lacks the clarity and guidance needed to facilitate its meaningful use [11][12][13][14][15][16][17][18][19]. Existing guidance on citizen engagement is often provided within a "stakeholder engagement" context, which is not specific to citizens, can include health care practitioners, policymakers, and industry members, and may not directly address the needs of citizens [20]. Finally, literature around citizen engagement tends to focus on reiterating the benefits, risks, and impact of citizen engagement in health research without detailing how to appraise citizen engagement in health research [1,21-25]. As such, there is a need for an evidentiary foundation to enable assessment of the degree (i.e., level of engagement in research processes, which can vary from participation in research planning committees to recruitment of participants and dissemination of data) and quality (i.e., quality of involvement, which may be ascertained by collecting citizens' experiences with the engagement or the perceived impact of engagement) of citizen engagement in health research, building upon current guidance provided by national funding agencies and peer-reviewed literature. To develop a high-level understanding of the methodology used to appraise citizen engagement in health research and to determine the aspects of citizen engagement valuable to assess, we conducted a scoping review of literature focused on: (1) methods to measure (i.e., determine the degree of) citizen engagement in health research; and (2) methods to evaluate (i.e., determine the quality of) citizen engagement in health research.
Methods
We designed and conducted a scoping review to map the existing literature on methods to measure and evaluate citizen engagement in health research according to the Arksey and O'Malley [26] and Levac [27] recommendations for scoping reviews. We used the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) to guide the reporting of this scoping review [28]. A detailed description of the proposed methods has previously been published [29].
Identification of the research question
As degree and quality of engagement are closely related features of citizen engagement, we developed the research question "what is the state of knowledge on methods to measure and evaluate citizen engagement in health research?" to capture a broad range of potentially relevant literature. Our research question was also developed to shed light on any relationship between measurement and evaluation of citizen engagement. We ensured that our research question defined the scope of inquiry with respect to population, concept, and outcomes of interest and would direct the subsequent steps [26]. We defined our target population as "citizens", or consumers of health services (e.g., patients, families of patients, informal caregivers), advocates and representatives from community organizations, and members of the general public. Our target concept was "engagement", "involvement", or "participation" in health research. We used the CIHR definition of health research, which encompasses the biomedical, clinical, health systems and services, social health, cultural health, environmental health, and population health fields [30]. To complement our usage of the CIHR definition of health research with discussion around citizen engagement, we adopted the CIHR definition of citizen engagement, or "the meaningful involvement of individual citizens…that is interactive and iterative with an aim to share decision-making power and responsibility for those decisions". This definition encompasses activities such as priority-setting, planning, acquiring funding, research decision-making, research conduct (e.g., commenting on and developing research materials, interacting with research participants, and/or carrying out research activities), implementation, evaluation, and dissemination as means of engagement [3].
Identification of relevant studies
We identified relevant literature on citizen engagement in research using a pre-determined plan for data sources and search strategy, including search terms, languages, and dates of search. As per recommendations, we designed the search strategy to return reasonably relevant results while considering time and personnel workload as limiting factors [26,27].
Search strategy
Our study team, which included multiple stakeholders and knowledge users, including health services researchers (KMF, HTS, JPL), trainees (AS, BKR), patient partners (BGS, SL), a health care professional (HTS), and a health sciences research librarian, developed the search strategy. The search strategy (Additional file 1: Item S1) was independently reviewed by a second health sciences research librarian uninvolved with this project using the Peer Review of Electronic Search Strategies (PRESS) checklist [31].
Using the previously published search strategy [29], we searched the MEDLINE (Ovid) database from January 1, 2000 to February 1, 2021. We then adapted this strategy to each additional database to be searched, including EMBASE (Elsevier). We included subject headings, keywords, and relevant synonyms related to three concepts: (1) citizens (e.g., community member, lay person, public, stakeholder), (2) engagement (e.g., collaboration, engagement, participation, involvement), and (3) health research (e.g., biomedical research, clinical research, public health research, environmental health research). We excluded studies published before the year 2000 to capture a modern viewpoint of the ever-evolving practice of citizen engagement in health research. We did not place any exclusion criteria on language. We screened the reference lists of included studies and related systematic reviews to identify additional potentially relevant literature.
Study selection
We developed inclusion and exclusion criteria a priori through meetings with the study team to refine the study selection process at the beginning, midpoint, and endpoint of the citation screening process in case any unforeseen considerations arose [27]. We screened and selected relevant studies for inclusion in the scoping review independently and in duplicate.
Eligibility criteria
We included articles if they: (1) were primary (e.g., observational or interventional studies) or secondary (e.g., systematic or scoping reviews) research, frameworks, reviews, or reports, (2) primarily focused on citizen engagement in health research (including biomedical, clinical, health systems and services, and social, cultural, environmental and population health, as defined by CIHR), and (3) reported method(s) to measure or evaluate citizen engagement. We did not place any restrictions on language. Non-English language studies were screened using Google Translate [32]. We excluded any literature that discussed methods to measure or evaluate citizen engagement in non-research processes, including health promotion, health education, health system or service delivery and governance (including decision-making), or health program implementation. To gain focused insight into existing methods for determining the degree and quality of citizen engagement in health research, we omitted literature focusing primarily on areas other than measurement and evaluation of citizen engagement in health research.
We imported retrieved articles into Covidence (Veritas Health Innovation, Melbourne, Australia) for title & abstract screening, which was completed independently and in duplicate by two reviewers (AS, BKR, KP, KK, RK, MA, ML, LH, KM). Reviewers conducted a pilot screening of 50 titles and abstracts to ensure consistency in application of inclusion and exclusion criteria. Once a Kappa (inter-rater agreement) of ≥ 0.8 was achieved, reviewers proceeded to screen the remaining articles. If one reviewer indicated an article as potentially relevant at the title & abstract screening phase, the article advanced to full-text review to ensure inclusivity. Following title & abstract screening, we exported a list of included articles into Endnote X9 (Clarivate Analytics, London, United Kingdom). We retrieved full-text versions of included articles using a combination of Endnote X9's 'find full text' feature, Endnote Click online, and the local university online libraries. If a full-text version of an article was not available, a search was conducted of the publishing journal's website to gain access. Following the search for full-text articles, the Endnote library was re-uploaded to Covidence and full-text screening was completed independently and in duplicate by two reviewers. Reviewers (AS, IL, RK) pilot screened the full text of 20 articles, and screened the remaining articles once a Kappa ≥ 0.8 was achieved. Both reviewers agreed on inclusion status and reason for exclusion at this stage. Any disagreements were resolved by discussion among the reviewers or the involvement of a third reviewer (IL or RK), if required.
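For illustration, the inter-rater agreement threshold used in the pilot screening can be computed with Cohen's kappa. This minimal sketch assumes binary include/exclude decisions and is not the Covidence implementation.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two reviewers' binary include/exclude decisions.

    r1, r2: equal-length sequences of 0/1 decisions
    (1 = advance to full-text review, 0 = exclude).
    """
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance
    if p_exp == 1.0:
        return 1.0  # degenerate case: neither reviewer's decisions vary
    return (p_obs - p_exp) / (1 - p_exp)

# Example: screening proceeds once kappa >= 0.8, as in the review protocol.
r1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
r2 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(round(cohens_kappa(r1, r2), 2))  # 0.78
```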
Charting the data
We developed an initial data charting form for data elements to be abstracted from the articles, then refined the form through discussion with the study team. The data charting form [Microsoft Excel (version 16.29.1)] was piloted by two reviewers (AS, IL) with ten included studies, and revised as needed. Once the final data charting form was developed, all relevant articles were abstracted independently and in duplicate by two reviewers (AS, IL). Abstracted variables in the initial data charting form included study characteristics, participants, type and goals of citizen engagement, method(s) used to measure citizen engagement, method(s) used to evaluate citizen engagement, features of method(s) used, and observed benefits or risks of the measurement or evaluation method(s) used.
Collating, summarizing, and reporting the results
We collated, summarized, and reported the results by (1) analyzing the data, (2) reporting results, and (3) evaluating their meaning, and we aimed to contextualize our findings within existing literature and within research, practice, and policy [26,27]. We analyzed qualitative data specifically by having two research team members trained in qualitative methods (AS, IL) inductively code major components of each framework using an analytical approach informed by thematic analysis [33]. First, the researchers independently reviewed each framework to develop a list of relevant terms and associated concepts. Then, the researchers compared lists and discussed discrepancies with a third qualitative expert (JPL). The initial two researchers then deductively analyzed the agreed-upon concepts into shared and unique framework features by expanding and collapsing shared meanings through a series of three meetings. Concepts deemed too vague (no label/definition provided) were excluded from analysis.
We present a descriptive summary of characteristics of the included documents, and the characteristics of the intended participants or audiences for these documents alongside a narrative synthesis of abstracted data variables.
Consultation
Involvement of citizens in literature synthesis is recommended by funding organizations such as the NIHR [34] and CIHR [3] and is becoming increasingly commonplace [35,36]. We involved citizens (BS, NF) in study conception and design including the search strategy. As per recommendations, we involved citizens (BS, SL) in the interpretation and contextualization of the data [27].
Results
Our search strategy returned 28,353 total results (16,762 results after duplicates were removed). Of these, studies were excluded because they: (1) reported aspects of citizen engagement other than methods of measurement or evaluation (n = 713), (2) reported outcomes or discussion on specific diseases/interventions (i.e., not citizen engagement) (n = 520), (3) had an ineligible study design (i.e., editorials, letters, commentaries) (n = 504), (4) included citizens as health research subjects only (n = 452), (5) engaged citizens, but in non-research (i.e., health service design or delivery) (n = 437), (6) were not health research related (n = 337), (7) focused on non-citizen stakeholders in health research (i.e., clinicians, policy makers) (n = 285), or (8) had inaccessible full texts (n = 169). Thirty records met the inclusion criteria and were included in our scoping review. Three additional records were found through searching reference lists and included, for a total of 33 records. A study flow diagram including reasons for exclusion is shown in Fig. 1.
Methods used to measure and/or evaluate CE
Of the 33 included studies, 20 (60.6%) presented method(s) to evaluate citizen engagement, five (15.2%) presented method(s) to measure citizen engagement, and eight (24.2%) presented method(s) to both measure and evaluate citizen engagement. Methods for the measurement and/or evaluation of citizen engagement included frameworks, discussion-based methods (i.e., focus groups, interviews, workshops), survey-based methods (i.e., audits, questionnaires), and other methods (i.e., indicators, observation, prioritization tasks). Many studies utilized and reported on more than one method to measure and/or evaluate citizen engagement. A summary of these methods is presented in Table 2 and described narratively below.
Citizen engagement strategies: frameworks
Five studies presented frameworks [55,57,58,60,67] designed to measure and/or evaluate citizen engagement in health research. Frameworks focused on various aspects of citizen engagement including reflection on and impact of citizen engagement activities in research, and recommendations for improvement. The five included frameworks explored measurement and evaluation of citizen engagement through gauging (1) empowerment (i.e., citizens should feel comfortable in voicing their opinions), (2) impact (i.e., research should be positively shaped by citizen engagement), (3) respect (i.e., citizens should feel respected), (4) support (i.e., citizens should have training and supports available), and (5) value (i.e., citizens should feel important to the process). Included frameworks also highlighted the importance of capacity building (i.e., funds, personnel to support engagement in research) [60], assessing the degree of engagement of researchers and citizens [55], clarity in roles (i.e., of citizens when engaged) [57], and involvement of citizens in critical aspects of research (i.e., protocol development, analysis, outputs) [60]. More detail on each framework is provided in Table 2, and similarities and differences between the included frameworks are highlighted in Fig. 3.
Citizen engagement strategies: other methods
A number of studies presented other methods to measure and/or evaluate citizen engagement. These included indicators of user involvement, such as documentation of citizen roles in research and availability of training to citizens to facilitate their involvement in research [40], prioritization tasks focusing on outcomes of the research considered important by participating citizens [49], and citizen observation of study steering group meetings and scrutiny of study documentation [48]. One study used a method to appraise existing frameworks for supporting citizen engagement (the Canadian Centre for Excellence on Partnerships with Patients and Public evaluation tool) [42]; however, many of the frameworks discussed were intended to support and report rather than measure or evaluate citizen engagement, falling outside the scope of this review.
[Table 2 (fragment): discussion-based methods (focus groups, semi-structured interviews, workshops) and question-based methods (the CBPR tool, a PPI audit tool, a survey adapted from the Patients as Partners in Research survey, a post-evaluation tool) listed alongside the aspects of engagement each assessed, e.g., enjoyment/satisfaction and perceived impact of involvement [45,52], information and layout needs [49], experiences and perceptions of involvement [53], feedback and efficiency concerns [43], perspectives on the project [41,48], experiences mapped along a framework [58], views on monitoring and evaluation systems [38], and changes researchers made in response to community expert input [68].]
Discussion
Our scoping review produced two main findings. First, we found that multiple methods (i.e., audits, focus groups, interviews, frameworks, surveys) have been used, often in combination, to measure and evaluate citizen engagement in health research. These methods collect the perceptions of citizens, researchers, and/or research support personnel on many aspects of citizen engagement, including the reasons, type, and impact of engagement, any challenges encountered in engagement (including project-specific issues), and recommendations for improving future citizen engagement in health research. Second, we identified that existing frameworks to measure and evaluate citizen engagement commonly assess perceived empowerment, impact, respect, support, and value. Together, these findings summarize the nature of citizen engagement in health research and itemize the aspects of citizen engagement that are considered important for assessing its degree and quality. In addition to our main findings, we identified that the terminology used to define citizen engagement and describe its activities varies widely. Citizen engagement is referred to as patient and public involvement (often in the United Kingdom), patient engagement, public engagement, consumer or service user-involved research, and community-based participatory research, depending on location and context. Varying terminology may pose a challenge for individual researchers seeking to identify and utilize methods to appraise citizen engagement in research. Standardization of terminology could improve the accessibility and applicability of current and future methods to incorporate and evaluate citizen engagement in health research.
In the process of screening literature for inclusion in this scoping review, we found that much of the current guidance on appraising citizen engagement in research exists in the form of editorials, letters to the editor, commentaries, and perspectives from experienced researchers in the field. We noted that this type of literature does not routinely discuss the merit of the discussion-based methods that were used to evaluate citizen engagement in the included studies. This could reflect a repeated dismissal of discussion-based qualitative research methods for measuring and evaluating citizen engagement in research and warrants further investigation. Despite limited discussion-based methods to appraise citizen engagement, this literature emphasizes the context and process of engagement [70], as well as clarity, reflexivity, methodological rigour, transparency, pragmatism, and reciprocity as key principles for evaluating citizen engagement in research [71], and highlights the need for evaluation as an ongoing part of the research process [72]. These elements of citizen engagement complement our main findings and should be taken into consideration when appraising citizen engagement. Our findings also align with previous work emphasizing the importance of evaluating citizen engagement activities as a necessary step in building a strong evidence base for utilizing citizen engagement in health research [73]. Furthermore, previous literature has emphasized a need for standardization in the measurement and evaluation of engagement processes, as methods to measure or evaluate citizen engagement are seldom utilized beyond the groups that develop them [42]. In light of our findings, we postulate this could occur due to (1) lack of accessibility (i.e., the method is difficult to find) or (2) lack of perceived applicability/modifiability (i.e., the method is viewed as unsuitable or too specific to a certain project or type of research and unmodifiable).
As per recommendations by Levac and colleagues, we invited citizen team members (BS, SL) to help interpret the findings of this scoping review and provide insights beyond those in the literature [27]. These citizen team members (BS, SL) remarked that empowerment, impact, respect, support, and value, common to frameworks identified by our study, were important to them in their experiences of participating in research. Additionally, they stated that the ability to openly communicate their concerns about the research project and their involvement has been important to them as members of a research team. Finally, they expressed a desire for an accessible lay resource to help people like them (i.e., citizens) be a meaningful part of research and stated that such a resource would vastly improve their comfort level with participating in research.
Strengths and limitations
This scoping review was designed to form an evidence basis for future work to advance and standardize appraisal of citizen engagement in health research. This study has strengths and limitations to consider. Strengths of our scoping review include: (1) co-development of the study protocol with a multidisciplinary team including researchers, health professionals, and health sciences librarians, and (2) citizen involvement in its design and interpretation. These elements helped to create a comprehensive synthesis and discussion of the existing literature on measuring and evaluating citizen engagement in health research.
Our study also has limitations. A significant number of the methods we summarize in this scoping review are focus groups, interviews, and closed- and open-ended discussions and questions. These methods were often described in the literature with varying levels of detail, presenting difficulty in assessing the rigour of each method. While the level of detail available on included methods is variable, we do not perceive this as a limitation but rather an accurate snapshot of the currently utilized discussion-based methods to appraise citizen engagement in research. Another limitation to this study is the possible unintended omission of relevant literature due to (1) our definition of citizen engagement adapted from the CIHR [3], which may not align with all citizen engagement activities reported in the international literature, and (2) our approach of including only studies which discussed a method of measuring or evaluating citizen engagement as a major aim of the work. We recognize that as a scoping review designed to provide a high-level mapping of the literature, our search strategy will likely have missed some studies. Thirdly, we only searched and included peer-reviewed literature (i.e., omitted grey literature) around methods to measure and/or evaluate citizen engagement in health research, to capture studies with higher methodological quality and minimize surplus complexity in the results. Lastly, like previous reviews of citizen engagement [74-77], much of the literature we captured reflects United Kingdom-based practices around citizen engagement in health research. This is due to targeted NIHR efforts to set standards for patient and public involvement [78] (i.e., citizen engagement), making the United Kingdom a leader in participatory health research. While this is a potential weakness of our study, we have clearly stated the geographical location of included studies to highlight any practices distinct to the United Kingdom, in order to avoid misrepresenting worldwide citizen engagement practices.
Conclusions
While there has been an increase in published methods to measure and evaluate citizen engagement over the past decade, there remains a need for standardized guidelines on appraising citizen engagement in research. Extensive variation in the terminology used around citizen engagement contributes to a lack of unified principles or criteria defining effective citizen engagement; development of a single set of core principles that indicate the degree (i.e., measurement) and quality (i.e., evaluation) of citizen engagement is therefore necessary. This set of principles could be impactful if further developed into guidelines suited to specific types of research (e.g., clinical, health services, preclinical) and varying audiences (i.e., citizens, patients, researchers, other stakeholders). Commitment to citizen engagement in research by funding bodies, research institutions, and scientific journals could create a shift in research culture promoting the use of standardized practices, helping citizen engagement move away from tokenism into an efficient and unified process.
Recommendations
• We recommend standardization of the terminology (i.e., citizen engagement rather than a multitude of other terms) used to describe participation of lay individuals in health research.
• We recommend development of a specific framework for the measurement and evaluation of citizen engagement in health research, built to foster empowerment, impact, respect, support, and value in citizen engagement. | 2022-12-12T05:16:53.086Z | 2022-12-10T00:00:00.000 | {
"year": 2022,
"sha1": "cfae156bc60340c904787abc49e576186b6091e8",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cfae156bc60340c904787abc49e576186b6091e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231803288 | pes2o/s2orc | v3-fos-license | Sporadic late-onset nemaline myopathy: a case report of a treatable cause of cardiac failure
Abstract Background Sporadic late-onset nemaline myopathy (SLONM) is a rare, acquired, adult-onset myopathy, characterized by proximal muscle weakness and the pathognomonic feature of nemaline rods in muscle fibres. Sporadic late-onset nemaline myopathy is associated with cardiac pathology in case reports and small case series, but the severity of cardiac disease is generally mild and rarely requires specific treatment. This case report describes severe heart failure as an early feature of SLONM, which responded to specific treatments, and highlights SLONM as a potentially reversible cause of heart failure. Case summary A 65-year-old woman presented with progressive muscle weakness and a dramatic loss of muscle bulk in her thighs, followed by progressive effort breathlessness over an 18-month period. She required a wheelchair to ambulate. A diagnosis of SLONM was made on histopathological assessment of a muscle biopsy along with electron microscopy. An echocardiogram showed a severely dilated and impaired left ventricle. She was treated with standard heart failure medications and autologous stem cell transplantation, which resulted in improvement of both her cardiac and muscle function, and allowed her to walk again and resume near-normal functional performance status. Discussion Cardiomyopathy can be a relatively early and life-threatening feature of SLONM and even in severe cases can be effectively treated with standard heart failure medications and autologous stem cell transplantation.
Introduction
Sporadic late-onset nemaline myopathy (SLONM) is a rare acquired adult-onset myopathy characterized by muscle atrophy. It presents as proximal muscle weakness and has the characteristic histopathological finding of nemaline rods in muscle fibres. It varies in terms of clinical presentation and progression of the disease. In a significant proportion of cases, SLONM is associated with monoclonal gammopathy of undetermined significance (MGUS), which is associated with an unfavourable outcome due to respiratory failure. Cardiac involvement has been reported in SLONM-MGUS in a few case reports and one cohort study [1-3], which demonstrate that it can be life-threatening [4]. There is a paucity of data on the management of cardiac dysfunction in SLONM. Recently, autologous stem cell transplantation following high-dose melphalan has been shown to reverse features of SLONM if started early [5-8]. Therefore, while the disease can be lethal, it is now considered treatable if recognized early.
Learning points
• Sporadic late-onset nemaline myopathy (SLONM) is a rare myopathy that can present with severe cardiomyopathy as a prominent feature.
• Sporadic late-onset nemaline myopathy and the associated cardiomyopathy can be treated successfully with heart failure medications and autologous stem cell transplantation.
Case presentation
A 65-year-old retired geography teacher presented in March 2016 with a 2-year history of progressive global muscle weakness, dysphagia, and a dramatic loss of muscle bulk in her thighs. Approximately 1 year later, she developed gradually worsening exertional dyspnoea and reduced exercise tolerance, having previously been able to play 36 holes of golf. She did not experience any palpitations, chest pain, orthopnoea, or paroxysmal nocturnal dyspnoea. Her only medical history was a monoclonal gammopathy of undetermined significance (MGUS) and an incidental finding of atrial fibrillation, for which she was taking bisoprolol 5 mg once a day and warfarin with a target INR of 2-3.
On initial physical examination, she had mild to moderate weakness of neck flexion, shoulder abduction, elbow flexion, and extension. Distal strength in the upper limbs was normal. In the lower limbs, she had profound weakness of hip flexion and extension with striking muscle atrophy of the quadriceps bilaterally. She had moderate weakness of knee flexion and extension with minimal weakness of ankle dorsiflexion. Her reflexes were brisk and normal. Cardiac examination was unremarkable apart from an irregularly irregular pulse at a rate of 112 b.p.m. A year later, she remained in atrial fibrillation but was now noted to have a pan-systolic murmur, radiating to her axilla, and bilateral ankle swelling. Her jugular venous pressure (JVP) was raised at 8 cm. She had scattered bibasal crepitations consistent with mild pulmonary oedema. There was no ascites.
Her baseline blood tests were unremarkable, with a normal full blood count and normal renal, liver, and thyroid function tests. She had a normal creatine kinase and was negative for the antibodies tested, including anti-nuclear antibody (Hep-2), Mi-2, Ku, PM/Scl-100, PM/Scl-75, Jo-1, SRP, PL-7, PL-12, EJ, OJ, and HMGCoAR. Neurophysiology testing was inconclusive and unable to differentiate between a neuropathic and a myopathic process. Therefore, she underwent a deltoid muscle biopsy (Figure 1) and, despite extensive analysis, the underlying aetiology remained unclear. Two months later, after worsening weakness, she had a second biopsy of her left vastus lateralis (Figure 2). This showed more pronounced changes, including increased variability of muscle fibre diameter, frequent atrophic fibres, and rare necrotic fibres (Figure 2A). Importantly, the second biopsy also demonstrated fibres with rod-like structures on modified Gomori trichrome staining (Figure 2B), which were not apparent in the previous biopsy (Figure 1B). The presence of presumed nemaline rods and the clinical picture were suggestive of SLONM. Myotilin immunostaining (Figure 2C) and electron microscopy (Figure 2D) were performed to confirm the presence of nemaline rods filling atrophic fibres. The typical 'lattice' structure of nemaline rods was confirmed on electron microscopy [6]. Her ECG showed atrial fibrillation with a rate of 112 b.p.m. and an incomplete left bundle branch block with a QRS duration of 98 ms. Her initial echocardiogram showed a severely dilated left ventricle with severely impaired function (ejection fraction 25-30%) (Supplementary material online, Videos S1 and S2). She had a severely dilated left atrium. There was moderate functional mitral and tricuspid regurgitation, but no other significant valvular disease. She had a normal right heart and no signs of pulmonary hypertension. Common and reversible causes of heart failure were considered and addressed, such as ischaemic heart disease, toxic damage, inflammatory, infiltrative, metabolic and genetic causes, arrhythmias, and high-output states. She had no risk factors for coronary artery disease, such as hypertension, and had not been exposed to any cardiotoxic drugs or radiation. Her routine blood tests were normal.
On receiving the results of the initial echocardiogram, heart failure therapy was commenced. This included ramipril (titrated to the maximal dose over a 4-week period), an increase in bisoprolol to the maximum tolerated dose of 10 mg, and bumetanide 1 mg twice daily. She was switched from warfarin to apixaban 5 mg twice daily in view of her atrial fibrillation. Two months later, spironolactone 25 mg once daily and amiodarone 200 mg once daily were added. Her bumetanide was reduced to 1 mg once daily.
She also had a successful direct current (DC) cardioversion to optimize her cardiac status in preparation for an autologous stem cell transplantation, which was performed in January 2018 following high-dose melphalan. She had stem cell return in early February 2018.
On commencing standard heart failure medications, her left ventricular systolic function improved from 30% to 40% over a 2-month period. Following the DC cardioversion and autologous stem cell transplant, her ejection fraction improved further to 56%, measured by Simpson's biplane method (Supplementary material online, Videos S3 and S4). She reverted to atrial fibrillation a month later and digoxin 125 µg once daily was added to improve rate control. She had a substantial improvement in her neuromuscular performance and was able to resume playing golf without experiencing significant dyspnoea. A repeat echocardiogram in January 2019 showed that she had maintained a normal ejection fraction of 56%, and she remains on her standard heart failure medication. This includes apixaban 5 mg twice daily, digoxin 125 µg once daily, ramipril 2.5 mg once daily, bisoprolol 5 mg once daily, and spironolactone 12.5 mg once daily.
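For readers less familiar with the echocardiographic measure used here, ejection fraction (EF) by Simpson's biplane method is derived from end-diastolic and end-systolic volumes (EDV, ESV), each estimated by modelling the left ventricle as a stack of 20 elliptical discs; the standard formulation (shown as a general sketch, not patient-specific data) is:

```latex
% Method of discs: a_i and b_i are disc diameters measured in two orthogonal
% apical views; L is the ventricular long-axis length.
V = \frac{\pi}{4}\,\frac{L}{20}\sum_{i=1}^{20} a_i b_i,
\qquad
\mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}} \times 100\%
```

With these definitions, the reported improvement from an EF of 25-30% to 56% corresponds to the ventricle ejecting roughly twice the fraction of its diastolic volume per beat.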
Discussion
In a review of 76 cases, SLONM had a mean age of onset of 52 years and was associated with MGUS in just over half of the cases [3]. Cardiomyopathy has been reported in SLONM in a few case studies and one cohort study, which detected an incidence of 11% [1-3]. It is associated with a worse prognosis [9]; however, large studies and statistical data are lacking. This case highlights unusual features of cardiomyopathy in SLONM, including a relatively early and severe phenotype. Previous studies show variability in terms of the severity and type of cardiomyopathy (hypertrophic or dilated), with the majority of studies showing mild cardiomyopathy. Monforte and colleagues performed a single-centre cohort study of six patients with SLONM, all of whom had cardiac involvement. While conduction abnormalities or arrhythmias were the most common pathology, the type of cardiomyopathy varied significantly. Nevertheless, a mild reduction in ejection fraction was present in the majority of patients. Additionally, in previous case studies, cardiomyopathy was a late event in SLONM, presenting 2-5 years after the initial presentation [1,7,10], while in this case it was both a relatively early and a severe feature of the disease. Significantly, this case highlights that life-threatening cardiomyopathy can be a feature of SLONM and can be reversible with standard heart failure medications alongside chemotherapy with an autologous stem cell transplantation. Previous studies have also demonstrated that chemotherapy alone can improve left ventricular function in SLONM [7]. While the underlying mechanism of cardiac failure in SLONM remains unclear, our patient, as well as others, has demonstrated improvement of cardiomyopathy with standard heart failure medication and chemotherapy followed by an autologous stem cell transplantation [7]. This finding supports the notion that the cardiomyopathy is disease-related. It is hypothesized that striated cardiac muscle is susceptible to the same damaging mechanism that affects the skeletal muscle, although histopathological data from heart muscle biopsy have yet to be reported.
The limitations of this study are the lack of cardiac biopsy and cardiac MRI, as the patient was too unwell to undergo these investigations. However, the course of her cardiomyopathy is well documented with sequential echocardiograms. This case demonstrates clear histopathological features of SLONM on peripheral muscle biopsy and evidence of reversal of cardiac failure after commencing routine heart failure medications, chemotherapy, and autologous stem cell transplantation, which may help guide the clinical management of patients presenting with cardiac failure and muscle weakness.
Conclusion
In conclusion, cardiologists should be aware of this rare but reversible cause of cardiac failure. Sporadic late-onset nemaline myopathy should be considered in patients presenting with muscle weakness and cardiomyopathy, as early detection and treatment with conventional heart failure therapy, chemotherapy, and autologous stem cell transplantation can lead to significant improvement and prevent further progression.
Slide sets: A fully edited slide set detailing this case and suitable for local presentation is available online as Supplementary data.
Consent:
The authors confirm that written consent for submission and publication of this case report including images and associated text has been obtained from the patient in accordance with COPE guidance.
Conflict of interest: none declared.
[Figure legend (fragment): (A) There is no significant increase in endomysial connective tissue; a few rare pale necrotic fibres may be present; no obvious inflammation is present. (B) Modified Gomori trichrome highlights the presence of relatively frequent lobulated fibres and scattered fibres which appear to contain aggregates of rod-like structures. (C) Myotilin staining showing myofibril aggregates, which co-localize with rods. (D) Electron microscopy confirms the presence of nemaline rods, which have a high electron density and an internal lattice structure.] | 2020-12-24T09:11:47.402Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "6535307253328b2285c0a9876d0cb3d8c1e9f441",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ehjcr/article-pdf/5/1/ytaa480/36168845/ytaa480.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "923b55f2f12eb3475b5b44c555d81ef09553dcd4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13416850 | pes2o/s2orc | v3-fos-license | Genomes and evolutionary genomics of animals
Alongside recent advances and booming applications of DNA sequencing technologies, a great number of complete genome sequences for animal species are available to researchers. Hundreds of animals have been involved in whole genome sequencing, and at least 87 non-human animal species' complete or draft genome sequences have been published since 1998. Based on these technological advances and the subsequent accumulation of a large quantity of genomic data, evolutionary genomics has become one of the most rapidly advancing disciplines in biology. Scientists can now perform a number of comparative and evolutionary genomic studies on animals, to identify conserved genes or other functional elements among species, genomic elements that confer on animals their own specific characteristics, and new phenotypes for adaptation. This review deals with the current genomic and evolutionary research on non-human animals and displays a comprehensive landscape of the genomes and evolutionary genomics of non-human animals. It is very helpful for a better understanding of the biology and evolution of the myriad forms within the animal kingdom [Current Zoology 59 (1): 87-98, 2013].
Introduction
Since the Cambrian explosion some 530 million years ago (mya), evolution in multi-cellular animals has been an ongoing process, resulting in the tremendous biodiversity of the animal kingdom. At the core of evolution is change in DNA sequences. Accordingly, for the evolutionary study of animals, it is imperative to investigate changes in genome sequences. Thanks in large part to the Human Genome Project, formally begun in 1990, a great number of genome sequencing technologies and bioinformatic tools have greatly extended our knowledge of genomics and thus life science. With the booming applications of next-generation DNA sequencing technologies, whose cost has noticeably decreased from several years ago, it is possible for numerous researchers to complete genome sequences of animals for use in evolutionary genomic studies. To coincide with these developments, ambitious genome projects that aim to decipher the most phylogenetically and economically important species in the kingdom Animalia have emerged, such as the BGI-1,000 Plant and Animal Reference Genomes Project, the Genome 10K Project, the i5k Insect and other Arthropod Genome Sequencing Initiative, and the NIH National Human Genome Research Institute-Approved Sequencing Targets.
Given the massive sequence information, a great number of insightful comparative evolutionary analyses across these animals, and across the biological and academic spectrum, are expected. Indeed, evolutionary genomics is quickly becoming one of the most rapidly advancing disciplines in the biological sciences. To present a comprehensive landscape of the genomics, and in particular the evolutionary genomics, of non-human animals with published papers on their genome projects, after giving a list of such animals we shall first review genomic studies of model animals, and then introduce such studies for other non-human animals following an evolutionary hierarchy of animal phyla. These studies touch on conserved genes or other functional elements among non-human animal species, as well as genomic elements that confer on an organism its own specific characteristics and phenotypes for adaptation. In doing so, we hope to clearly illustrate the marked gains and insights that evolutionary genomics has already made and still has to offer.
To compile the list of sequenced non-human animals, we searched the relevant literature and such main genome projects and databases as the BGI-1,000 Plant and Animal Reference Genomes Project (http://ldl.genomics.cn/page/pa-animal.jsp), the Genome 10K Project (http://www.genome10k.org/), the i5k Insect and other Arthropod Genome Sequencing Initiative (http://www.arthropodgenomes.org/wiki/i5K), the NIH National Human Genome Research Institute-Approved Sequencing Targets (http://www.genome.gov/10002154), and the database Ensembl Metazoa Species (http://asia.ensembl.org/info/about/species.html; http://metazoa.ensembl.org/info/about/species.html).
We located over one hundred animals whose whole genome sequencing has been completed, including the house mouse, goat, and giant panda, as well as thousands of animals whose whole genome sequencing was in progress or proposed as of June 2012. Among them, at least 87 non-human animal species' complete or draft genome sequences have had papers published on their genome projects since 1998 (Table 1). Also, certain other animals have released their genome sequences without genome-project publications, such as the zebrafish Danio rerio (http://www.sanger.ac.uk/Projects/D_rerio/), the domestic goat Capra hircus (http://goat.kiz.ac.cn/), and so forth. Insects are the taxon with the most sequenced species, followed by mammals, whose sequences make up 25% of all sequenced animal genomes (Fig. 1). In analyzing the temporal trends in the number of publications on non-human animal whole genome sequences since 1998, we found that studies deciphering animal genomes have continually increased. Notably, this trend was all the more obvious after we excluded 10 fruit fly genomes that were published collectively in a single paper (Fig. 2).
[Fig. 2 caption: The solid line was generated from the observed raw data; the dashed line is an adjustment that (1) excludes an unusual event (10 fruit fly genomes published together in a single paper) and (2) proportionally scales the value for 2012 from June to the whole year.]
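A minimal sketch of the adjustment described in the Fig. 2 caption is given below; the per-year counts are placeholders, and only the two stated facts (10 fruit fly genomes in a single 2007 paper, and a 2012 tally covering only January-June) are taken from the text.

```python
# Hypothetical publications-per-year tallies (illustrative values only)
raw = {2007: 13, 2008: 4, 2009: 6, 2010: 9, 2011: 15, 2012: 8}

adjusted = dict(raw)
adjusted[2007] -= 10  # exclude the 10 fruit fly genomes published together
adjusted[2012] *= 2   # scale the half-year 2012 count to a full-year estimate

for year in sorted(raw):
    print(year, raw[year], adjusted[year])
```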
Nematodes
The phylum Nematoda includes not only the important model organism Caenorhabditis elegans but also several global parasites of humans, food animals, and crops.
Since completion of the first animal genome sequence, that of C. elegans (C. elegans Sequencing Consortium, 1998), draft genome sequences have been published for another seven worms from lineages of the pan-phylum Nematoda, including two free-living worms, C. briggsae (Stein et al., 2003) and Pristionchus pacificus (Dieterich et al., 2008), two plant parasites, Meloidogyne incognita (Abad et al., 2008) and M. hapla (Opperman et al., 2008), and three animal/human parasites, Trichinella spiralis (Mitreva et al., 2011), Brugia malayi (Ghedin et al., 2007), and Ascaris suum (Jex et al., 2011). Their genome sizes range significantly, from 54 Mb in M. hapla to 272 Mb in A. suum, and gene numbers vary from 11,500 in B. malayi to 23,500 in P. pacificus. Overall genome organization is also not conserved among these nematodes, which have diverged over more than 500 million years (myr). Comparative genomic analyses, however, have revealed shared genes and proteins of fundamental importance among all nematode species, and genomic regions of microsynteny suggest conserved primary biological functions.
Fruit flies
The fruit fly Drosophila melanogaster is among the most intensively studied organisms in biology, serving as a model system for investigating numerous genetic and developmental processes common to higher eukaryotes, including humans. A decade ago, the publication of the D. melanogaster complete genome sequence, with a 120-Mb genome and 13,600 genes, spurred a strong uptick in research on fruit fly genomics (Adams et al., 2000). Afterwards, a second genome of a Drosophila species, D. pseudoobscura, was published in 2005 (Richards et al., 2005), and an additional ten genomes (sechellia, simulans, yakuba, erecta, ananassae, persimilis, willistoni, mojavensis, virilis, and grimshawi) of Drosophila species were released a further two years later (Richards et al., 2005; Clark et al., 2007). Comparative analyses of the 12 genomes in a phylogenetic framework illustrated how rates and patterns of sequence divergence across taxa can illuminate evolutionary processes on a genomic scale (Clark et al., 2007). The genomes of Drosophila are remarkably conserved across species, with very similar features such as overall genome size, number of genes, distribution of transposable element classes, and patterns of codon usage. Even so, many variable characteristics were also disclosed across these taxa (Clark et al., 2007). Firstly, there are abundant genomic structural changes and rearrangements, exemplified in particular by several different rearrangements of Hox gene clusters. Secondly, gene family size is highly variable, with almost half of all gene families changed in size on at least one lineage and with a noticeable fraction of rapid and lineage-specific expansions and contractions. Moreover, evolutionary analyses revealed numerous non-neutral changes in functional genomic elements, such as protein-coding genes, non-coding RNA genes, and cis-regulatory regions (Clark et al., 2007).
Sea urchins
The genome of the long-lived sea urchin Strongylocentrotus purpuratus, a model for developmental and systems biology, was published in 2006, providing an evolutionary outgroup for the chordates and yielding substantial insights into the evolution of deuterostomes (Sodergren et al., 2006). Genomic analyses facilitated a great number of key discoveries regarding the biology of the sea urchin. Notably, the 23,300 genes in the sea urchin represent nearly all vertebrate gene families. Interestingly, the sea urchin also possesses an extensive defensome and orthologs of many human disease-associated genes.
Frogs
The only published genome sequence in frogs is that of the western clawed frog Xenopus tropicalis, and its detailed comparative genomic analyses shed light on the study of embryonic development (Hellsten et al., 2010). Likewise, this genome encodes more than 20,000 protein-coding genes, including orthologs of at least 1,700 human disease genes, representing 79% of those currently identified in humans. Conservation of the vertebrate immune system is highlighted by genomic comparisons between mammals and Xenopus (Robert et al., 2009; Hellsten et al., 2010). Notably, unique antimicrobial peptides possessed by frogs seem to play an important role in skin secretions, a feature absent in birds, reptiles, and mammals. Accordingly, X. tropicalis, with more tractable genetics than the widely used African clawed frog X. laevis, has become an important alternative to the latter for studying vertebrate development.
Mice, rats and opossums
As two of the most important model laboratory mammals, the house mouse Mus musculus (Chinwalla et al., 2002) and the brown Norway rat Rattus norvegicus (Rat Genome Sequencing Project Consortium, 2004) were among the first mammals, after humans, to have their genomes deciphered. Comparative genomic analyses of these species have illuminated some key issues, including the conservation of large-scale synteny across most of the genomes, the proportion of the genomes under selection, the number and evolution of protein-coding genes, and the expansion of gene families related to specific adaptations.
Subsequently, the gray short-tailed opossum Monodelphis domestica was sequenced, providing a unique perspective on the organization and evolution of mammalian genomes (Mikkelsen et al., 2007). Distinctive features of the opossum chromosomes provide support for recent theories about genome evolution and function, including a strong influence of biased gene conversion on nucleotide sequence composition, and a relationship between chromosomal characteristics and X chromosome inactivation. Comparison of opossum and eutherian genomes (e.g. of mouse, dog, and human) also revealed a sharp difference in evolutionary innovation between protein-coding and non-coding functional elements.
To mine the genomic basis of the unusual physiology and longevity of the naked mole rat Heterocephalus glaber, its genome was deciphered in 2011 (Kim et al., 2011). Comparative sequence analyses with the three relatives described above, as well as with other mammals (dog, monkey, and human), revealed unique genome features and molecular adaptations consistent with cancer resistance, poikilothermy, hairlessness, insensitivity to low oxygen, and altered visual function, circadian rhythms, and taste sensing.
Macaques
The most frequently used non-human primates in biomedical research are from the genus Macaca, which is closely related to humans by virtue of sharing a last common ancestor ~25 mya. To date, the three important macaques whose genome sequences have been deciphered are the traditional research model, the Indian rhesus macaque Macaca mulatta mulatta (Rhesus Macaque Genome Sequencing and Analysis Consortium, 2007), the Chinese rhesus macaque Macaca mulatta lasiota, and the cynomolgus/crab-eating macaque Macaca fascicularis (Yan et al., 2011).
Initial comparative genome analyses of primates including the Indian rhesus macaque, chimpanzees and humans (Rhesus Macaque Genome Sequencing and Analysis Consortium, 2007) revealed the structure of ancestral primate genomes and identified evidence for positive selection and lineage-specific expansions and contractions of gene families. Afterwards, comparisons across the three macaques (Yan et al., 2011) revealed that each macaque maintains abundant genetic heterogeneity, including millions of single-nucleotide substitutions and many insertions, deletions and gross chromosomal rearrangements. Genetic divergence patterns suggest that the cynomolgus macaque genome has been shaped by introgression after hybridization with the Chinese rhesus macaque.
Macaque genes display a high degree of sequence similarity with human disease gene orthologs and drug targets. However, Yan and his colleagues also identified several putatively dysfunctional genetic differences among the three macaques (Yan et al., 2011), which may explain some of the functional differences among the groups, as has been previously observed in clinical studies.
Apes
Given the many challenges posed by conducting research on great apes, it is hard to say whether they are model organisms, but they are the closest primate relatives of humans and serve as a good model for understanding our own species. All great apes and humans belong to the biological superfamily Hominoidea, whose lineages split ~14 mya (Pongo), ~7 mya (Gorilla), and 3-5 mya (Pan & Homo).
To date, four ape species' genome sequences have been deciphered: the Sumatran orangutan Pongo abelii, the western lowland gorilla Gorilla gorilla gorilla, the common chimpanzee Pan troglodytes, and the bonobo Pan paniscus. These sequences provide an unprecedented opportunity to teach us about ourselves, both in terms of the similarities and the differences between humans and apes. Sequence comparison between the chimpanzee and human genomes generated a largely complete catalogue of the genetic differences accumulated since divergence from our common ancestor.
Scientists have likewise revealed that the orangutan genome has many unique features compared to other primates (Locke et al., 2011); namely, the structural evolution of the orangutan genome has proceeded much more slowly than among other great apes. In addition, a comparison of protein-coding genes revealed approximately 500 genes showing accelerated evolution on each of the gorilla, human and chimpanzee lineages, as well as evidence for parallel acceleration, particularly of genes involved in hearing (Scally et al., 2012). Furthermore, scientists found that patterns of evolution in human and chimpanzee protein-coding genes are highly correlated and dominated by the fixation of neutral and slightly deleterious alleles.
Notably, the most recently published great ape genome sequence was that of the bonobo (Prüfer et al., 2012). Evolutionary comparative analyses revealed that more than 3% of the human genome is more closely related to either the bonobo or the chimpanzee genome than the latter are to each other. Analysis of these regions will allow various aspects of these two ape species' ancestry to be reconstructed.
Insects
Insects are among the most diverse groups of creatures on the planet, representing over half of all known living organisms and over 90% of the differing metazoan life forms on earth. To date, the whole genomes of at least 32 insect species have been sequenced, meaning that insects represent over 37% of all currently sequenced animals (Fig. 1). The sequenced species include many important insects such as mosquitoes, ants, silkworms, butterflies, wasps, bees, beetles, the human body louse, and the like.
Mosquitoes, a group of dipteran insects distinct from the classical model fruit flies, are usually considered a nuisance because most of them not only bite living vertebrates for blood-feeding but consequently transmit some of the most harmful human and livestock diseases, such as malaria, yellow fever and dengue fever. Currently, the whole genomes of three blood-feeding mosquitoes have been sequenced. The genome sequence of Anopheles gambiae, a principal vector of malaria, showed a marked bimodal density distribution and prominent expansions in specific families of proteins that are likely involved in cell adhesion and immunity (Holt et al., 2002). Surprisingly, the draft genome sequence of another mosquito, Aedes aegypti, the primary vector for yellow fever and dengue fever, revealed a genome size ~5 times greater than that of A. gambiae (Nene et al., 2007a). Nearly 50% of the A. aegypti genome consists of transposable elements, which contribute to an approximately four- to six-fold increase in average gene length and in the sizes of intergenic regions relative to A. gambiae. The third mosquito to have its genome sequenced was the southern house mosquito Culex quinquefasciatus (Arensburger et al., 2010); interestingly, its repertoire of 18,883 protein-coding genes is 22% larger than that of A. aegypti and 52% larger than that of A. gambiae, with multiple gene-family expansions, including olfactory and gustatory receptors, salivary gland genes, and genes associated with xenobiotic detoxification.
As eusocial insects, ants form organized societies that include short-lived worker castes displaying specialized behaviors alongside long-lived queens dedicated to reproduction. Currently, at least 7 ant species have had their whole genomes sequenced. Initial genomic comparisons of two socially divergent ant species (Camponotus floridanus and Harpegnathos saltator), together with gene expression analyses in different castes (Bonasio et al., 2010), provided clues as to the underlying molecular differences across taxa: up-regulation of telomerase and sirtuin deacetylases in longer-lived reproductive queens, caste-specific expression of microRNAs and SMYD histone methyltransferases, and differential regulation of genes implicated in neuronal function and chemical communication. The genomes of the Argentine ant Linepithema humile and the red harvester ant Pogonomyrmex barbatus, published a year later (Smith et al., 2011b), showed distinctive features including remarkable gene family expansions, an abundance of cytochrome P450 genes, and complete CpG DNA methylation toolkits. Comparative genomic analyses of the fire ant Solenopsis invicta, another major pest, revealed an ancestral vitellogenin gene that first underwent a duplication, followed by independent duplications of the daughter vitellogenin genes and subfunctionalization with queen- and worker-specific expression. Genomic comparison of two leafcutter ants (Atta cephalotes and Acromyrmex echinatior) revealed insights into their key adaptations to advanced social life and fungus farming during their obligate symbiotic lifestyle (Suen et al., 2011).
The first reported butterfly genome was that of the migratory monarch butterfly Danaus plexippus, which yielded insights into the genetic and molecular basis of long-distance migration (Zhan et al., 2011), involving the circadian clockwork, regulation of the juvenile hormone biosynthetic pathway, oriented flight behavior, differential expression of microRNAs between summer and migratory (winter) forms, and monarch-specific expansions of chemoreceptors. Additionally, genome sequencing analyses of another butterfly, Heliconius melpomene, together with two co-mimics (H. timareta and H. elevatus) via resequencing, reveal a promiscuous exchange of mimicry adaptations among species of this rapidly radiating genus of neotropical butterflies, especially at two genomic regions that control mimicry patterns (Dasmahapatra et al., 2012).
Fishes
The term "fish" most precisely describes any non-tetrapod craniate, that is, any gill-bearing aquatic craniate animal lacking limbs with digits. To date, at least 5 fish draft genome sequences have been published.
The first of these sequenced genomes is from the fish commonly known as "torafugu", Fugu rubripes. Of the 365-Mb pufferfish genome, repeats account for less than one third, while gene sequences occupy about one third of the genome (Aparicio et al., 2002). Although 75% of predicted human proteins have a strong match in Fugu, approximately 25% of predicted human proteins have highly diverged from or have no pufferfish homologs, highlighting the great extent of protein evolution in the 450 myr since teleosts and mammals diverged.
The second fish genome comes from the teleost fish Tetraodon nigroviridis (Jaillon et al., 2004), which is a freshwater pufferfish with the smallest known vertebrate genome, at 340Mb. Comparison with other vertebrates and an urochordate indicates that fish proteins diverged markedly faster than their mammalian homologues. Analysis of the Tetraodon and human genomes shows that whole-genome duplication occurred in the teleost fish lineage, subsequent to its divergence from tetrapods.
Genomic analyses of a small egg-laying freshwater teleost, the medaka Oryzias latipes, show a strict genetic separation of 4 myr between two inbred strains derived from two regional populations, and furthermore suggest that differential selective pressures acted on specific gene categories (Kasahara et al., 2007). Comparisons of the human, pufferfish Tetraodon, zebrafish and medaka genomes revealed that eight major interchromosomal rearrangements took place in a remarkably short period beginning approximately 50 myr after the whole-genome duplication event in the teleost ancestor, and continued thereafter.
Genome sequence analyses of the Atlantic cod Gadus morhua, a cold-adapted teleost which sustains long-standing commercial fisheries and incipient aquaculture, show evidence for complex thermal adaptations in its haemoglobin gene cluster as well as an unusual immune architecture compared with other sequenced vertebrates (Star et al., 2011). The major histocompatibility complex (MHC) II is a conserved feature of the adaptive immune system of jawed vertebrates, but the Atlantic cod has lost the genes for MHC II, CD4 and invariant chain (Ii) that are essential for the function of this pathway. Results also showed a highly expanded number of MHC I genes and a unique composition of its Toll-like receptor families.
Stickleback fishes have colonized and adapted to thousands of streams and lakes formed since the last ice age. With a high-quality reference genome assembly of a threespine stickleback Gasterosteus aculeatus and sequencing data of an additional twenty individuals from a global set of marine and freshwater populations (Jones et al., 2012), researchers identified that the reuse of globally shared standing genetic variation, including coding changes, regulatory changes, and chromosomal inversions, plays an important role in repeated evolution of distinct marine and freshwater sticklebacks, and in the maintenance of divergent ecotypes during the early stages of reproductive isolation.
Birds
With around 10,000 living species, birds (class Aves) are the most speciose class of tetrapod vertebrates. At least three important species of birds have now had their genomes sequenced and published. The first sequenced bird genome is from the chicken, i.e., the red junglefowl Gallus gallus, a modern descendant of the dinosaurs, and its genome sequence provides new perspectives on vertebrate evolution (Hillier et al., 2004). For instance, the evolutionary dynamics of protein domains and orthologous groups in coding regions illustrate lineage-specific processes leading to both birds and mammals. Likewise, distinctive properties of avian microchromosomes, together with the inferred patterns of conserved synteny, provide additional insights into vertebrate chromosome architecture.
The second sequenced avian whole genome is from the zebra finch Taeniopygia guttata, a songbird and an important model organism in several fields, with unique relevance to human neuroscience. The overall genomic structure of the zebra finch is similar to that of the chicken, but differences exist in such characteristics as intra-chromosomal rearrangements, lineage-specific gene family expansions, the number of long-terminal-repeat-based retrotransposons, and mechanisms of sex chromosome dosage compensation (Warren et al., 2010). Song behavior of the songbird engages gene regulatory networks in its brain, altering the expression of long ncRNAs, microRNAs, transcription factors and their targets. The domestic turkey Meleagris gallopavo was the third bird to have its genome sequenced (Dalloul et al., 2010). Comparative genomic analyses among these birds, as well as between avian and mammalian genomes, support the characteristic stability of avian genomes and identify genes unique to the avian lineage. Clear differences are seen in the number and variety of genes of the avian immune system, where expansions and novel genes are less frequent than examples of gene loss.
Mammals
Apart from the models mentioned above, sequenced mammals also include five important domestic animals (the dog, cat, horse, cattle, and yak), four popular and phylogenetically important animals (the giant panda, kangaroo, platypus, and Tasmanian devil), and the extinct woolly mammoth.
A draft genome sequence of the domestic dog Canis familiaris was reported together with a dense map of SNPs across various breeds with great phenotypic diversity in morphological, physiological and behavioral traits (Lindblad-Toh et al., 2005). Sequence comparison with the primate and rodent lineages shed light on the structure and evolution of genomes and genes. Notably, the majority of the most highly conserved non-coding sequences in mammalian genomes are clustered near a small subset of genes with important roles in development. Analysis of SNPs reveals long-range haplotypes across the entire dog genome, and defines the nature of genetic diversity within and across breeds.
The genome of an inbred Abyssinian domestic cat Felis catus (Pontius et al., 2007) illustrated the historic balancing of translocation and inversion incidences in distinct mammalian lineages, as suggested by annotated features including repetitive elements, endogenous retroviral sequences, nuclear mitochondrial sequences, microRNAs, and evolutionary breakpoints.
The genome sequence of the low-altitude domesticated cow Bos taurus, an important livestock species for milk and meat production, opened a window into ruminant biology and evolution (Elsik et al., 2009). The results indicate that cattle-specific evolutionary breakpoint regions in chromosomes have a higher density of segmental duplications, enrichment of repetitive elements, and species-specific variations in genes associated with lactation and immune responsiveness. Genomic comparisons between cattle and the domestic yak Bos grunniens, which lives at high altitude on the Qinghai-Tibetan Plateau and in adjacent regions, provided informative insights into its adaptation to high altitude (Qiu et al., 2012). These comparative genomic analyses also allowed the identification of expansions of yak gene families related to sensory perception and energy metabolism, as well as an enrichment of protein domains involved in sensing the extracellular environment and hypoxic stress. Positively selected and rapidly evolving genes in the yak lineage were also found to be significantly enriched in functional categories and pathways related to hypoxia and nutrition metabolism. In 2009, the draft genome sequence of the horse Equus caballus, one of the earliest domesticated species and one that has played a vital role in human exploration of novel territories and territorial expansion, was completed (Wade et al., 2009). Comparative analyses showed that the chromosomes appear to have undergone few historical rearrangements: 53% of equine chromosomes show conserved synteny to a single human chromosome. Equine chromosome 11 was shown to have an evolutionarily new centromere devoid of centromeric satellite DNA, suggesting that centromeric function may arise before satellite repeat accumulation.
The genome sequence of the tammar wallaby Macropus eugenii, a member of the kangaroo family (Renfree et al., 2011), meanwhile, provides novel insight into the evolution of mammalian reproduction, development and genome evolution. To illustrate the point, scientists identified innovations in reproductive and lactational genes, rapid evolution of germ cell genes, and incomplete, locus-specific X inactivation.
Genomic analyses of the platypus Ornithorhynchus anatinus, a monotreme (an order that exhibits a fascinating combination of reptilian and mammalian characters), revealed several unique signatures of animal evolution. The analysis of this first monotreme genome highlighted how reptile and platypus venom proteins have been co-opted independently from the same gene families, how milk protein genes are conserved despite platypuses laying eggs, and how immune gene family expansions are directly related to platypus biology (Warren et al., 2008).
A draft genome sequence of the much adored but equally endangered giant panda Ailuropoda melanoleuca was published with detailed genomic analyses (Li et al., 2009). This special creature has several unusual biological and behavioral traits: a restrictive diet of bamboo, a very low fecundity, and a phylogenetic position of continuing controversy. Comparisons with other mammals revealed that the panda genome has not greatly diverged from either dogs or humans, but that there has been considerable divergence in the repetitive regions, most of which seems to result from recent transposable-element activity. The assessment of panda genes yielded considerable insights that may bode well for conservation efforts, as genomic analysis suggested that its unique bamboo diet may be more dependent on its gut microbiome than on its own genetic composition.
Conclusions
This review marks one of many preludes to what will certainly be an explosion in the field of animal genomics over the coming years, though we may still have overlooked some excellent studies of evolutionary genomics. As we march down the path of deciphering more and more animals' whole-genome sequences, genomic analyses have shed, and will continue to shed, new light on the evolutionarily and economically important genomic components across the kingdom Animalia, as well as on the refinement of the current phylogenetic tree obtained from fossils and other methods. This progress will also make more and more animals potential models for particular interests in the life sciences, just as the jumping ant H. saltator and the naked mole rat H. glaber serve as longevity models and X. tropicalis as an alternative amphibian model for vertebrate embryonic development. Thus, the insightful evolutionary genomics of animals, together with their physiology, morphology, and behavior, will unambiguously begin the process of ushering basic and applied biology into a new era, one marked by a more truly systematic knowledge of the organisms surrounding us. | 2016-11-08T18:56:27.780Z | 2013-02-01T00:00:00.000 | {
"year": 2013,
"sha1": "7ea213c540140920ef863a019bea148b2a037f42",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/cz/article-pdf/59/1/87/5125969/czoolo59-0087.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0b7c56653f7283518559504a51434be7c284f8de",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
251407073 | pes2o/s2orc | v3-fos-license | Automatic Normalisation of Early Modern French
Spelling normalisation is a useful step in the study and analysis of historical language texts, whether it is manual analysis by experts or automatic analysis using downstream natural language processing (NLP) tools. Not only does it help to homogenise the variable spelling that often exists in historical texts, but it also facilitates the use of off-the-shelf contemporary NLP tools, if contemporary spelling conventions are used for normalisation. We present FREEMnorm, a new benchmark for the normalisation of Early Modern French (from the 17th century) into contemporary French, and provide a thorough comparison of three different normalisation methods: ABA (an alignment-based approach) and MT approaches (both statistical and neural), including extensive parameter searching, which is often missing in the normalisation literature.
Introduction
Computational approaches have recently been playing an increasing role in the humanities (Gabay, 2021), especially concerning the study of textual documents. Historical documents are particularly interesting, as they are an invaluable source of historical information and are crucial witnesses of language evolution. Whether documents are to be studied manually by philologists and literary experts or analysed automatically using downstream natural language processing (NLP) tasks such as part-of-speech (PoS) tagging and parsing, a useful preliminary step is normalisation, which consists in modernising the spelling of the documents to conform to contemporary spelling conventions. Normalisation has the effect of (i) reducing spelling variation present in historical documents, often written at a time when spelling was not standardised, and (ii) reducing the gap between the historical state of the language and the contemporary state. Importantly, this allows us to apply off-the-shelf NLP tools to old texts and limit the performance drop that can usually be expected, for example for tagging and parsing (Pettersson et al., 2013b) or geographical named entity recognition (Kogkitsidou and Gambette, 2020). There has been a considerable amount of previous research in historical spelling normalisation, with a range of methods being developed, including manually developed rules (Porta et al., 2013;Baron and Rayson, 2009;Riguet, 2019), those exploiting edit distances and other external resources such as lexicons (Mitankin et al., 2014) and machine translation (MT) approaches, both statistical (Scherrer and Erjavec, 2013;Domingo and Casacuberta, 2018a) and neural (Bollmann and Søgaard, 2016;Hämäläinen et al., 2018). Despite this, questions still remain regarding which method is the most effective, particularly between statistical MT (SMT) and neural MT (NMT) approaches. There has for example been little research in optimising these models for the particular task, which could lead to false conclusions being drawn about which model is best; as has been previously shown for low-resource tasks, neural models in particular are sensitive to model size, training parameters and the degree of subword segmentation applied to texts (Sennrich and Zhang, 2019;Fourrier et al., 2021). Our focus in this paper is on the normalisation into contemporary French of Early Modern French (also known as Modern French or Classical French), which is French from the 17th century. Despite several recent efforts (Gabay and Barrault, 2020;Gabay et al., 2019;Riguet, 2019), there has so far been very little research carried out on spelling normalisation for historical French, and so we aim to fill this gap. Figure 1 illustrates a few of the normalisation types observed, from simple typographic changes (e.g. ſ → s), changes to segmentation (long temps 'a long time' → longtemps), changes reflecting language change (eſtoit '(s/he) was' → était) and the use of classical false etymological spellings (e.g. ç being used in Modern French sçavoir 'to know' as a link to Latin scire, from which it does not originate). In this paper, we present the parallel normalisation corpus, FREEMnorm (for Early Modern French), on which we train and evaluate, and, in addition to baseline models, we compare three methods: (i) an alignment-based approach, called ABA, using automatically learned word correspondences from a parallel corpus, (ii) phrase-based SMT, and (iii) NMT, comparing an LSTM model (Bahdanau et al., 2015) and a Transformer (Vaswani et al., 2017).
We find that despite extensive parameter optimisation for NMT models, SMT produces the best results overall, with all methods largely exceeding the baselines. Our comparison shows that the methods exhibit quite different behaviour in terms of how conservative or inventive they are, which could be useful information depending on the downstream task (e.g. as a pre-annotation tool for manual annotation or a downstream NLP application). Our main contributions can be summarised as follows:
• Introduction of a new benchmark for the normalisation of Modern French, which can be used in further research.
• Extensive experiments comparing an alignment-based approach (ABA) with three MT approaches (SMT, LSTM and Transformer), with best results achieved by SMT. We also show that a lexicon-based post-processing step can systematically improve over all other methods tested.
Related Work
A considerable amount of work has been carried out in historical spelling normalisation, across various languages, with research dating back to the 1980s (Fix, 1980). A range of different approaches have been developed, including rule-based (Porta et al., 2013;Riguet, 2019), the use of various types of edit-distance (Hauser and Schulz, 2007;Bollmann, 2012;Pettersson et al., 2013a) and MT-style approaches, both statistical (Vilar et al., 2007;Scherrer and Erjavec, 2013;Ljubesic et al., 2016;Domingo et al., 2017) and neural (Korchagina, 2017;Domingo and Casacuberta, 2018b;Tang et al., 2018). Interestingly, all of these approaches remain useful today, thanks to their different strengths, depending on the type of normalisation and the amount of data available (Bollmann, 2019).
Word Lists, Rules and Edit-based Methods
Approaches relying on word lists, consisting in simply replacing historical variants by their normalised equivalent have been developed in several languages: English (Reynaert et al., 2012), German, Portuguese (Piotrowski, 2012) and Slovene (Erjavec et al., 2011).
Many rule-based and edit-distance-based approaches are unsupervised (i.e. they do not require parallel data), which is a considerable advantage, especially for historical varieties for which annotated data is not readily available. Rules can be developed manually by experts (Porta et al., 2013;Baron and Rayson, 2009;Riguet, 2019) or be extracted from a comparison of historical and modern word lists or parallel data if this is available (Bollmann et al., 2011). The use of edit distance (for example, Levenshtein distance) is often a strong baseline (Pettersson et al., 2013a), due to the fact that the surface forms of historical and contemporary spellings are often very similar and the alignment between both words and characters in the two varieties is almost perfectly monotonic. Basic edit distance can be enhanced with specific weights for different edits (Bollmann et al., 2011) or based on characters or character groups (Hauser and Schulz, 2007;Bollmann, 2012), given the observation that certain errors are more serious than others.
Normalisation as MT
MT approaches to the problem have been popular, with the historical and modern states of the language being treated as the source and target languages respectively.
Characters, Subword or Words? Most previous research has focused on character-based MT, which models transformations at the level of individual characters (Vilar et al., 2007;Scherrer and Erjavec, 2013;Pettersson et al., 2013b;Domingo and Casacuberta, 2021), which makes sense for the task of spelling normalisation, as it often involves local transformations and largely monotonic alignments between source and target sentences. However, there has since been work exploring word translation, subword translation (Tang et al., 2018) or a mixture of these (Vilar et al., 2007;Domingo and Casacuberta, 2021). It is rare however for works in historical spelling normalisation to explore the optimal degree of segmentation, although Tang et al. (2018) do find subwords to be more effective than character-based: character-based segmentation offers a greater possibility for generalisation with the caveat that it requires the model to learn to translate longer sequences and learn patterns better, whereas word or subword segmentation can exploit models' ability to memorise, but may run the risk of limited generalisation, especially to unseen or less frequent words. SMT or NMT? The first approaches were with SMT (Koehn et al., 2007), which proved more effective than rule-based and edit-distance based approaches (Pettersson et al., 2014;Hämäläinen et al., 2018;Bollmann, 2019), when there is parallel data available, and even when this data is produced synthetically (Scherrer and Erjavec, 2013;Domingo and Casacuberta, 2018a). NMT approaches to historical spelling normalisation were developed as it took off in the domain of general MT (Bollmann and Søgaard, 2016;Hämäläinen et al., 2018). Comparisons between SMT and NMT show different results, with SMT being superior in some cases (Domingo and Casacuberta, 2018a), and NMT in others (Bollmann, 2019), provided enough parallel data is available (Bollmann, 2019). Importantly, the methods appear to have different behaviours and therefore their own strengths and weaknesses, meaning that a single method (including rule-based approaches) is not necessarily a systematically better choice (Hämäläinen et al., 2018;Robertson and Goldwater, 2018). Word Translation vs. Sentence Translation A considerable portion of the research in historical normalisation is based on the normalisation of word lists, so of words in isolation. However, as discussed in (Ljubesic et al., 2016), it can be beneficial in some contexts to normalise whole sentences (where there is ambiguity in the normalised form that should be chosen). This has the disadvantage of creating longer sequences to process, but is necessary in order to hope to handle all phenomena. The development of parallel corpora rather than word lists has encouraged research in this direction (Tjong Kim Sang et al., 2017;Gabay and Barrault, 2020;Ortiz Suarez et al., 2022).
Normalisation for Historical French
Despite there being a plethora of research on historical spelling normalisation, little research has been done so far on historical French, with most work focusing on Dutch, German, Hungarian, Slovene, and Swedish, helped by the existence of benchmark data (Dipper and Schultz-Balluff, 2013) and shared tasks (Ljubesic et al., 2016;Tjong Kim Sang et al., 2017). A collaborative word list associating normalised versions of historical words in French was started in 2009 on the Wikisource digital library, 4 which is available for automatic normalisation through word substitution (The French Wikisource Community, 2022). Recently, there has been some preliminary research, with the development of a parallel corpus for the normalisation of Modern French (from the 17th c.) (Ortiz Suarez et al., 2022) and first baselines, including rule-based (Riguet, 2019) and NMT-style approaches (Gabay et al., 2019;Gabay and Barrault, 2020). Gabay and Barrault (2020) compare character-based SMT and NMT at different granularities (words, subwords and characters): NMT outperformed SMT, and for NMT, the best input representations were found to be words, then characters, then subwords. However, they do not seem to perform a comparison of different levels of subword segmentation or of different sizes of architecture, which has been shown to be important when drawing conclusions about the usability of NMT in low-resource settings (Sennrich and Zhang, 2019).
Approaches Compared
We present and compare several approaches, representing a wide range of techniques: (i) an alignment-based method using a parallel corpus (Section 3.1), (ii) statistical MT (Section 3.2.1), (iii) neural MT, testing both LSTM and Transformer models (Section 3.2.2). In addition to comparing these approaches to two baselines described in Section 6.1, we also assess the impact of a lexicon-based post-processing described in Section 3.3.
ABA: Alignment-based
The ABA method (short for alignment-based method) is a hybrid approach consisting of (i) word-level transformation rules that are automatically learned from an aligned corpus and (ii) character-level transformation rules, which were manually designed by observing frequent character transformations in the aligned corpus. The ABA normalisation method, which has similarities with the approach of VARD2 developed for English (Baron and Rayson, 2009), works as follows.
Creation of a Word Substitution Lexicon
The first step is to learn a word replacement lexicon using a parallel training set. This is done using the classical dynamic programming Needleman-Wunsch alignment algorithm (Needleman and Wunsch, 1970) to optimally align tokenised parallel sentences at the token level, adding a score of 4 for matching words in lowercase (or for et and & 'and', which are considered equivalent) and a penalty of -1 for word insertions, deletions or mismatches if the non-matching words have a weighted Levenshtein distance of at least 4 or at least the length of each word. For mismatches between words at weighted Levenshtein distance d < 4 and strictly smaller than the length of both words, 4 − d is the mismatch score taken into account by the alignment algorithm. Note that the weighted Levenshtein distance is computed with a penalty of 1 for insertions and deletions and 2 for character mismatches. These scores were adjusted experimentally after considering the alignment results on a training corpus.
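To make this scoring scheme concrete, below is a minimal Python sketch of the pairwise scoring function as we read the description above; the function names and the dynamic-programming layout are ours, not code from the ABA distribution.

```python
def weighted_levenshtein(a, b):
    """Edit distance with cost 1 for insertions/deletions and 2 for mismatches."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                            # deletion
                            curr[j - 1] + 1,                        # insertion
                            prev[j - 1] + (0 if ca == cb else 2)))  # (mis)match
        prev = curr
    return prev[-1]

GAP = -1  # penalty for word insertions and deletions in the alignment

def pair_score(src, tgt):
    """Word-pair score fed to the Needleman-Wunsch word-level alignment."""
    if src.lower() == tgt.lower() or {src, tgt} == {"et", "&"}:
        return 4                       # match (et and & treated as equivalent)
    d = weighted_levenshtein(src, tgt)
    if d < 4 and d < len(src) and d < len(tgt):
        return 4 - d                   # graded score for near-matches
    return -1                          # distant mismatch
```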
Substitution Step
The second step uses this replacement lexicon as well as a contemporary French lexicon built by combining Morphalou 3.1 (ATILF, 2019) with lexicons of proper nouns developed for CasEN 1.4 (Maurel et al., 2011): CasEN Dico.dic, Prolex-Unitex-BestOf 2 2 fra.dic (CasEN Team, 2019) and Prolex-Unitex 1 2.dic (Prolex Team, 2013). It proceeds in the following way: after simple tokenisation 5 of the input text, for each token: 1) if it is present in the contemporary French lexicon, it is kept as it is; 2) otherwise, if it is present in the word replacement lexicon, it is replaced by the associated normalised version in this lexicon; 3) otherwise, it is transformed by a combination of character replacement rules detailed in Appendix A, designed after careful analysis of the aligned words in the training corpus and available in the apply rules function of the modern.py script in ABA's distribution: 6 among the obtained candidate tokens, the first one found in the contemporary French lexicon is selected; 4) otherwise, if no candidate generated by character transformation rules is selected, then the original token is kept.
5 Splitting the sentence on whitespace, the characters . , ! ? ; : and both kinds of apostrophe.
6 https://github.com/johnseazer/aba
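The per-token decision procedure above amounts to a short cascade; the sketch below is illustrative, with the lexicons and the rule-based candidate generator (standing in for ABA's apply rules function) passed in as arguments.

```python
def normalise_token(token, fr_lexicon, replacement_lexicon, apply_rules):
    if token in fr_lexicon:               # 1) already contemporary French
        return token
    if token in replacement_lexicon:      # 2) learned historical variant
        return replacement_lexicon[token]
    for candidate in apply_rules(token):  # 3) rule-generated candidates:
        if candidate in fr_lexicon:       #    first one in the lexicon wins
            return candidate
    return token                          # 4) no candidate selected: keep as is
```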
MT: SMT and NMT
Following promising results for other languages (Scherrer and Erjavec, 2013;Tang et al., 2018) and Modern French (Gabay et al., 2019;Gabay and Barrault, 2020), we provide a comparison of phrase-based statistical MT and NMT.
Phrase-based SMT
The aim of SMT is to automatically find the most probable translation $\hat{t}$ given a source sentence $s$, such that $\hat{t} = \operatorname{argmax}_{t \in T} P(s \mid t)\,P(t)$, where $P(s \mid t)$ models the adequacy of the translation, and $P(t)$ the target language model probability, which can be seen as a measure of the fluency/grammaticality of the prediction. The state of the art in SMT is phrase-based MT, where a prediction's score is the sum of scores from various scoring components, including a phrase table (for the translation probability), a language model (for the language model probability), a reordering (or distortion) model and a length penalty. The main implementation used for phrase-based SMT is the Moses toolkit (Koehn et al., 2007), which we use here in this paper. Phrase-based SMT was the state of the art in MT until around 2015, when NMT first outperformed it (Bahdanau et al., 2015). The main disadvantage of SMT with respect to NMT is its limited ability to model longer-distance dependencies and semantic relationships between input units, given that probabilities are calculated based on discrete surface forms rather than continuous representations. It nevertheless remains relevant in certain settings, notably when little parallel training data is available (Trieu et al., 2017;Fourrier et al., 2021). For historical spelling normalisation, some works have shown that it can outperform neural approaches, particularly in these lower-resource settings (Domingo and Casacuberta, 2018a).
NMT (LSTM and TRANSFORMER)
NMT uses neural networks to find the most probable translation. The standard architecture is an encoder-decoder with an attention mechanism (Bahdanau et al., 2015). The role of the encoder is to encode the source sequence and of the decoder to sequentially produce the target sequence, given the previously translated words and a representation of the input sequence specific to that decoding step (calculated using attention). Importantly, these models work with continuous representations of words, allowing for a greater capacity to generalise across forms and an improved handling of complex linguistic phenomena. The first such models were based on recurrent neural networks (using recurrent units such as LSTM for example), involving sequentially encoding the input and sequentially decoding the output. The current state of the art is the Transformer, which replaces recurrence with self-attention (Vaswani et al., 2017). Transformers have the advantage of speed in training and tend also to perform better, although this does not always hold for very low-resource settings (Fourrier et al., 2021). NMT model performance is sensitive to the size of the architecture, subword segmentation and training parameters. Sennrich and Zhang (2019) show that previous conclusions about the superiority of SMT systems over NMT in low-resource scenarios do not necessarily hold as long as the NMT parameters are well chosen, highlighting the need to perform adequate parameter search before drawing conclusions. In line with this, we perform extensive hyper-parameter searches of both LSTM and Transformer models (Section 6.3).
Optional Lefff-based post-processing
All three approaches described above rely on parallel training data. Despite the generalisation capabilities of such models, it might be the case that rare situations are not properly dealt with. On the other hand, large-scale lexicons of contemporary French, such as the Lefff (Sagot, 2010), can provide high-coverage lexical information regarding the target language of the normalisation process. Based on this observation, we developed a lexicon-based post-processing tool that can be used after any normalisation model and is based on the Lefff (version 3.4). It relies on the idea that a normalised text should mostly contain words known to a large-scale contemporary French lexicon. Any token (whitespace- and/or punctuation-separated character sequence) that does not begin with a capital letter (to avoid proper nouns) and that is unknown to the lexicon is eligible for further normalisation. For every such token, we compute a list of possible normalisations based on a small list of permitted transformations. 7 We then look up all normalisation candidates in our lexicon. If exactly one of the normalisation candidates is known to our lexicon, we replace the input token with this candidate. In all other cases, we leave the token unchanged.
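A minimal sketch of this replace-only-if-unambiguous logic is given below; generate_candidates stands in for the small list of permitted transformations, which we do not reproduce here.

```python
def lefff_postprocess(tokens, lefff, generate_candidates):
    output = []
    for tok in tokens:
        # Skip capitalised tokens (likely proper nouns) and known words.
        if tok[:1].isupper() or tok in lefff:
            output.append(tok)
            continue
        known = [c for c in generate_candidates(tok) if c in lefff]
        # Replace only if exactly one candidate is attested in the Lefff.
        output.append(known[0] if len(known) == 1 else tok)
    return output
```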
Evaluating Normalisation
In terms of automatic metrics, the most commonly used are translation edit rate (TER), word accuracy (based on the gold normalised tokens, non-symmetrised) and some works have used traditional metrics for MT (Gabay and Barrault, 2020), in particular BLEU (Papineni et al., 2002) and CHRF (Popović, 2015). Arguably the most interpretable metric is word accuracy, since it gives an idea about the number of lexical units that would have to be corrected, whereas MT metrics are less interpretable, given that they are designed to incorporate a certain degree of flexibility concerning word order, which is not relevant for the task of spelling normalisation. On the other hand, they have the advantage of penalising predictions that contain additional (hallucinated) tokens as well as correct tokens, a situation that is plausible given the use of sentence-level MT models. We therefore choose to use a symmetrised version of word accuracy, which is the average between traditional word accuracy (aligning each gold token to predicted (sub)token(s)) and the reverse calculation (aligning each predicted token to gold (sub)token(s)). 8 More details on evaluation can be found in Appendix C. We also evaluate using MT metrics to test how they correlate with word accuracy.
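In formula form (our notation), the symmetrised score is simply the mean of the two directional accuracies:

$$\mathrm{WordAcc_{sym}} = \frac{1}{2}\left(\mathrm{Acc}_{\mathrm{ref}\rightarrow\mathrm{pred}} + \mathrm{Acc}_{\mathrm{pred}\rightarrow\mathrm{ref}}\right)$$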
Data
For training, development and test data, we present the FREEM corpus (short for FREnch Early Modern) called FREEMnorm. 9 The data covers a range of different genres of text throughout different decades of the 17th century, written in prose or verse, which have been semi-automatically normalised (Gabay et al., 2019) and manually corrected. Most of these texts belong to the belles-lettres (literature in its broadest sense), which is the type of source we want to normalise, but additional texts from different traditions (science, law, etc.) are present in the corpus. Some of the transcriptions have been produced specifically for this corpus and others have been borrowed from other projects: transcription rules are therefore not strictly equivalent from one text to another regarding, for instance, old characters (e.g. ſ) or abbreviations (e.g. õ → on). "Normalisation" is understood here as a partial alignment with contemporary French: in some specific cases, specific spellings are maintained to keep the meter of the verse intact (e.g. the adverbial -s: jusques+vowel → jusques and not jusqu' to maintain the three syllables). The dataset has been split into train, dev and test sets, for which basic statistics can be found in Table 1. The split was done such that the test set contains a variety of different genres and periods (see Tables 7 and 5 in Appendix B), some of which are covered in the train and dev set and some of which are unseen. In terms of the difficulty of the task, although many words remain unchanged between the original Modern French and their contemporary French normalisations (75.7% of all words in the training set), there are a non-negligible number of tricky cases. There are a large number of out-of-vocabulary (OOV) items in both the dev and test sets with respect to the training set, and approximately 0.3% of tokens are ambiguous (i.e. they correspond to several possible normalisations depending on the context). Aside from minor differences such as punctuation (which is nevertheless not arbitrary, since it can be determined by context), capitals and accents, there are some interesting cases, such as ambiguity concerning verbal conjugations, which may require more contextual information (see Table 2 for two examples). For these cases, it is necessary to normalise words whilst taking into account their context (as in traditional MT). This justifies processing whole sentences rather than isolated words.
Baselines
We compare the approaches described in Section 3 with two baseline approaches, the identity function and a basic rule-based approach.
Identity function This keeps the text unchanged.
Rule-based This is a stronger baseline comprising several dozen regular expressions, which were manually written based on simple corpus statistics from our training set. They range from purely typographic rules, which reflect the evolution of the writing system, to lexical rules, which reflect the evolution of the language.
Here are a few examples, ordered from purely typographic to fully lexical:
• ſ → s; õ → om if followed by m, b or p, or on otherwise;
• i → j at the beginning of a word when followed by a vowel other than i;
• estoit → était and estoient → étaient.
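For illustration, the example rules above could be implemented as ordered regular-expression substitutions along the following lines (a sketch only; the actual baseline comprises several dozen such rules, and their real ordering is not specified here):

```python
import re

RULES = [
    (r"ſ", "s"),                # typographic: long s
    (r"õ(?=[mbp])", "om"),      # õ before m, b or p
    (r"õ", "on"),               # õ elsewhere (must come after the rule above)
    (r"\bi(?=[aeouy])", "j"),   # word-initial i before a vowel other than i
    (r"\bestoit\b", "était"),   # fully lexical rules
    (r"\bestoient\b", "étaient"),
]

def apply_baseline(text):
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text
```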
In addition, we also assess the impact of the lexiconbased post-processing step on these baselines.
Experimental setup
All NMT models are trained using Fairseq (Ott et al., 2019), with default parameters unless otherwise specified. All models are trained until convergence; the best checkpoint is chosen based on symmetrised word accuracy on the dev set. Subword segmentation is applied using SentencePiece (Kudo and Richardson, 2018) and the BPE strategy (Sennrich et al., 2016). We train SMT models using Moses (Koehn et al., 2007) and language models using KenLM (Heafield, 2011). We tune using kbmira to maximise BLEU. For the NMT models, we search over (i) the network size (see Table 3 for LSTM models and Table 4 for Transformer models), (ii) the degree of subword segmentation via different BPE vocabulary sizes (500, 1k, 2k, 4k, 8k, 16k, 24k), (iii) the learning rate (0.0005, 0.001, 0.001) and (iv) the batch size (1000, 2000, 3000, 4000 tokens). In order to avoid having to explore the combination of all parameters, we explored hyper-parameters in a step-wise fashion from (i) to (iv), keeping the best parameters from the previous step. We then explored variations on the network size parameters, varying attributes one below and one above the default values. Results were calculated as an average of three differently seeded runs for each combination. We began with default values for all hyperparameters and varied only those mentioned. Both models performed best with a BPE vocabulary of 1k, batch size of 3000 and learning rate of 0.001. The best network sizes were M for the LSTM, and a variant of the M model for the Transformer, with only 2 encoder layers rather than 4.
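For reference, the subword step can be reproduced with the SentencePiece Python API as follows (a sketch with placeholder file names, using the best-performing vocabulary size of 1k; the Fairseq training itself is run from its command-line tools):

```python
import sentencepiece as spm

# Train a BPE model on the training data (placeholder path).
spm.SentencePieceTrainer.train(
    input="train.src",
    model_prefix="bpe1k",
    vocab_size=1000,     # best value found in our search
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="bpe1k.model")
pieces = sp.encode("Ils estoient fort sages", out_type=str)  # subword pieces
```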
Statistical MT model
As for the neural models, we test several different granularities of segmentation: character-based, 500, 1k and 2k. 10 We use a 4-gram language model trained on the target side of either the parallel training data or the normalised texts of the FREEMmax corpus. The best subword segmentation is with vocabulary size 500 (interestingly not character-based, which is what has previously been used) and with the language model trained on the target side of the parallel training data.
10 Larger vocabulary sizes result in worse scores and were also more difficult to train because of memory problems.
Results
We compare the approaches described in Section 3 according to the three evaluation metrics discussed in Section 4: symmetrised word accuracy (written as WordAcc), BLEU and CHRF. Results are shown in Table 5. For MT approaches, we run each model three times with three random seeds and report the average score and standard deviation. Models (1)-(4) are baselines and already achieve relatively high scores. This is unsurprising, given the large number of words that do not need modifying: the identity function (copying the source text) gives 72.73% word accuracy. The rule-based approach is significantly better than the first baseline, and adding the post-processing step (+Lefff ) considerably improves both results. The two statistical approaches, the hybrid ABA and SMT, both perform better than the baselines, with SMT actually performing the best out of all approaches. The NMT models perform slightly worse according to all metrics than SMT. Although the scores of LSTM and TRANSFORMER are very similar, LSTM scores are slightly higher. It is an interesting finding that the SMT outperforms NMT in our scenario, as this goes against recent findings for Modern French (Gabay and Barrault, 2020), despite us having more parallel data available. As for the baselines, adding the post-processing step improves both statistical and neural models, with the best result being SMT+Lefff with a symmetrised word accuracy of 97.24%. As recommended by Robertson and Goldwater (2018), we also calculate word accuracy for OOV tokens (based on the gold tokens). Results (Table 6) show that the highest scoring model for OOV accuracy is LSTM, although if post-processing is applied, both SMT and LSTM show similar scores. Adding the post-processing step significantly helps the OOV accuracy of all methods, showing that it is an important complementary step. The three evaluation metrics reveal the same pattern in results for these models, with BLEU varying more in absolute scores than the other metrics.
How Similar are the Methods?
In Figure 2, we compare the predictions token by token and report the percentage of identical normalisations between methods. 11 Unsurprisingly, the neural methods (LSTM and TRANSFORMER) are most similar to each other. SMT is the most similar to TRANSFORMER and ABA is most similar to SMT.
Conservative or Zealous?
Depending on how the tool is to be applied, it can be better to have a more conservative or zealous model. If automatic normalisation is to be used as a pre-annotation tool to help experts manually normalise texts, it is important for the automatic step not to introduce serious errors that could be more difficult to detect and time-consuming to correct. This is a concern notably for NMT-based models (Gabay and Barrault, 2020), which can be more creative in their transformation than either rule-based or SMT-based approaches. It may however be less of a problem if normalisation is to be used for certain downstream tasks using standard contemporary NLP tools (e.g. PoS-tagging or parsing). This is because a more zealous normalisation could provide better performance (by providing contemporary word forms), without the word forms themselves having to necessarily correspond to the correct ones.
11 The analysis is computed against the first prediction for methods for which three random seeds were used.
To compare the methods for their conservativeness/zealousness, we align the output of each method with the source text and calculate (i) how often it changes a token that should have been kept as it is (Table 3), and (ii) how often it leaves untouched a token that should be modified (Table 4). The identity function, rule-based system and ABA rarely over-modify, in contrast to SMT and NMT. Logically, the methods show the inverse pattern for under-modification, with the identity and rule-based approaches being the most conservative and under-modifying the most. The SMT and NMT models under-modify at very similar rates, suggesting that performance differences could largely stem from over-modification rather than from how much they under-modify. The best method, SMT, has the lowest rate of under-modification and a medium level of over-modification. ABA is interesting, because it under-modifies less than the baselines and yet does not over-modify as much as the MT approaches. Adding the Lefff-based post-processing step has the effect of both correcting some over-modifications that were introduced and providing normalisations for previously unmodified tokens, thereby significantly improving the processing of OOV words.
Qualitative analysis of approaches
In this section, we compare the results of the best rule-based approach, ABA + Lefff, and the best MT approach, SMT + Lefff, by using an alignment of the normalised versions of the dev data (available at https://freem-corpora.github.io/models/norm_model/). Unsurprisingly, given that the substitution rules are not contextual, ABA + Lefff makes many errors in ambiguous cases, such as A instead of À, prés instead of près, voila instead of voilà, or mes feux redoublez instead of mes feux redoublés. Taking into account frequency scores either for the word replacement or for the character transformation rules in the training data may help avoid those mistakes. ABA + Lefff is also very sensitive to mistakes in the training corpus. For example, it succeeds in transforming auoient into avaient but not avoient, whereas SMT + Lefff succeeds. It also lacks some rules. For example, it has no rule to normalise double consonants (for example principalles normalised into principales, assouppit into assoupit), whereas SMT + Lefff performs quite well in this case. The SMT approach displays some more creative errors (which appear easy to spot if the normalised text is manually proof-read), e.g. ma pẽſée transformed into ma pmentsée. It is also prone to deleting certain words such as determiners, possibly because in some contexts they are less probable according to the language model. Finally, considering the fact that, when one of the two methods makes a mistake, the other one often performs a correct normalisation, finding a relevant post-processing approach seems like a promising way to increase the quality of the results.
Conclusion
We have presented FREEMnorm, a new benchmark for the normalisation of Early Modern French, and compared a range of normalisation methods, including an alignment-based approach and various MT-based methods, with SMT outperforming all other approaches. Adding a post-processing step with a contemporary French lexicon systematically helps, particularly for OOV tokens. We compare the strengths of the different methods, with rule- and alignment-based approaches being more conservative and MT approaches being less so. While MT approaches achieve the best accuracy, a model such as the alignment-based ABA is possibly more adapted to pre-annotation, as it offers a good compromise, making good normalisation choices without overly normalising tokens that should not have been modified. We release all our data, models and scripts to encourage further research on this topic by the digital humanities community.
A. ABA Normalisation Rules
The character transformation rules used in the second step of ABA include ſ → s, ß → ss, & → et; the resolution of letters with a tilde used to abbreviate an n or an m; sç → s; final oing → oin; final y → i; sch → ch; aye → aie, oye → oie. The obtained word is considered as an initial candidate, followed by the supplementary candidates obtained with the following rules: ct → t; vowel followed by dv → same vowel followed by v; final ans → ands, final ens → ends, final ans → ants, final ens → ents; final ois → ais (same with oit and oient); final ez → és, final és → ez; st → t, est → ét; as followed by m, n, q or t → â followed by the same letter (same with es, is, os and us); y → i; ü or eü → u. Finally, for all generated candidates, the following transformation rules are applied: is → î, ai → aî, u → v, v → u, non-final e not followed by s → é.
B. Distribution of the Datasets by Decade and Genre
C. Evaluation details
Word accuracy is calculated by aligning the set of sentences (each reference sentence and its normalised sentence) on the character level and then using the alignment matrix to produce a token-level alignment.
Initial Character-level Alignment
Character-level alignment is performed using a modified (weighted) version of Levenshtein, whereby certain characters are considered equivalent (e.g. accented and non-accented versions of characters, long s (ſ) and s). The alignment is also designed to avoid tokenisation and punctuation mismatches unless they are really necessary for a successful alignment:
• by default, the cost of a substitution is 1, whereas the cost of an insertion or a deletion is 0.8;
• the cost of a substitution of a reference white-space character with a non-white-space is prohibitive (1,000,000);
• the cost of a substitution of a reference non-white-space character with a white-space is 30;
• the cost of a substitution involving a punctuation mark (within ,.;-!?') is 20;
• the cost of the deletion of a white-space character in the reference is prohibitive;
• the cost of the insertion of a white-space character in the reference is 2.
Token-level alignment
The token-level alignment must necessarily be carried out with respect to the tokenisation of one of the sequences (there is not always a one-to-one mapping between reference and normalised tokens). We carry out tokenisation prior to character-level alignment using a very basic tokeniser lightly adapted to French (breaking on whitespace and around punctuation) and then use whitespace tokens to delimit tokens when token-aligning the two sequences. We can either take the tokenisation of the reference sequence or of the normalised sequence as the basis for alignment. We preserve information about token boundaries such that different segmentations will be penalised even if the non-whitespace characters are identical.
(1) Ref: surtout j'ai choisi davantage ses écrits
MT: sur tout ji choisi d'avantage ses escrits,
(2) Align: surtout||||sur tout j'||||j ai||||i choisi davantage||||d' avantage ses écrits||||escrits
For example, given a reference (Ref) and a predicted normalisation (MT) as shown in Example 1, the alignment in Example 2 is produced, where:
• ||| indicates that the reference and MT output do not match for that token;
• indicates that there is a token boundary introduced by the tokeniser in the aligned sequence of characters. Where there is also a space in the original sequence (before tokenisation), a double is indicated (case of over-merging);
• indicates that there is no token boundary to the right (case of over-splitting).
Symmetrised Accuracy Once aligned, the accuracy is the number of tokens for which the corresponding token is identical divided by the total number of tokens. We calculate a symmetrised accuracy, which is the average between the two accuracies: (i) the reference sentences are used as the basis for alignment and (ii) the normalised sentences are used as the basis for alignment. This is important because it helps to penalise very poor normalisations, such as those that can be produced by some MT-style models, where words can be hallucinated. If the accuracy is only computed according to the reference tokenisation, it is possible for all hallucinated words to be aligned to a single reference token and therefore penalised very little with respect to the amount of noise added. | 2022-08-08T09:51:14.749Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "1ab1c78e2fcaa2c356612c74bdadb0a7b6e26cd4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1ab1c78e2fcaa2c356612c74bdadb0a7b6e26cd4",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
256455249 | pes2o/s2orc | v3-fos-license | Generalized lichen spinulosus and secondary follicular mucinosis
Abbreviations: CD: cluster of differentiation; FM: follicular mucinosis; LAFM: lymphoma-associated follicular mucinosis; LS: lichen spinulosus
INTRODUCTION
Lichen spinulosus (LS) is a follicular keratotic disorder and a variant of keratosis pilaris. LS usually shows a localized distribution, but a rare, generalized variant exists in the setting of chronic diseases such as HIV and Crohn's disease. 1,2 Follicular mucinosis (FM) can occur in a primary idiopathic form, or as a secondary phenomenon due to inflammatory skin conditions such as eczema or lymphoma-associated follicular mucinosis (LAFM). 3 In this case report, we describe a healthy young adult who presented with generalized LS and associated incidental, secondary FM.
CASE REPORT
A 21-year-old female was referred with xerosis, generalized spiny skin lesions, and a provisional diagnosis of eczema that did not improve on a medium-strength topical corticosteroid. She did not report other medical ailments, medication, atopy, or allergies, and a personal or family history of similar skin lesions was absent.
Clinical examination revealed multiple minute digitate hyperkeratosis which were folliculocentric. The distribution symmetrically involved the face, neck, trunk, and upper limbs (Fig 1, A and B). Perilesional erythema, palmoplantar keratoderma, or signs of a nutritional deficiency were absent.
Histopathological assessment of hematoxylin and eosin stained sections from a 3-mm punch biopsy obtained from her back showed keratin plugs within hair follicle infundibula, sparse perifollicular lymphocytes, and cystic spaces in the outer root sheath of hair follicles and sebaceous glands (Fig 2). An Alcian blue-periodic acid-Schiff stain confirmed mucin within the spaces (Fig 3). Immunohistochemistry revealed cluster of differentiation (CD)3- and predominantly CD4-positive small T lymphocytes. CD8-positive T lymphocytes comprised a minority of the cells, while CD20-positive small B lymphocytes and CD30-positive cells were inconspicuous. T-cell receptor gene rearrangement testing failed for technical reasons and was not repeated.
The preferred diagnosis of LS and secondary FM was made using a diagnostic algorithm (Fig 4) from the article titled ''Multiple minute digitate hyperkeratosis (MMDH): A proposed algorithm for the digitate keratoses''. 4 Vitamin A levels were normal.
She was started on isotretinoin 0.5 mg/kg and topical emollients. After 3 weeks of isotretinoin therapy, there was a marked improvement in the texture of her skin, and there were fewer hyperkeratotic spines (Fig 1, C ). She continued to use isotretinoin for a 7-month course, and thereafter topical emollients were prescribed. Her condition spontaneously resolved 9 months after presenting to the dermatology clinic.
DISCUSSION
In 1883, Crocker described a disorder he named "lichen pilaris seu spinulosus." The currently accepted term is LS, and it predominantly occurs in the second decade of life. 5 The aetiology of LS is unknown, but it has been postulated that there is a genetic predisposition or a follicular reaction pattern. The lesions appear suddenly and persist for weeks to months, followed by spontaneous disappearance, with rare exceptions of persistent lesions. 2,5 The pathogenesis of LS involves a follicular hyperkeratotic plug that blocks the hair follicle. 2 The histopathology of LS, as in our case, is non-specific, showing a dilated hair follicle with keratin plugging and a perifollicular lymphocytic infiltrate. The FM was an unexpected finding.
Given the generalized distribution and folliculocentricity of the skin lesions in the current patient, the differential diagnosis of LS includes phrynoderma and keratosis pilaris atrophicans. However, the lack of clinical signs of a vitamin deficiency and normal vitamin A levels militate against phrynoderma. Keratosis pilaris atrophicans was excluded as there was no secondary scarring or alopecia in affected areas. Non-follicular and localized forms of multiple minute digitate hyperkeratosis were also excluded.
FM is a rare tissue reaction showing localized accumulation of mucin in sebaceous glands, and the outer root sheath of hair follicles. 6 The aetiology and pathogenesis of FM are unknown. It has been postulated that mucin production may be due to the subsequent stimulation of follicular keratinocytes by surrounding T-helper lymphocytes. 7 Primary FM has a benign, usually self-limiting course. It presents most commonly as a solitary lesion in the head and neck distribution of children and young adults. Another clinical variant of primary FM occurs in older patients and shows more widespread lesions that can last indefinitely. 8 Secondary FM has been associated with inflammatory disorders (eg, atopic dermatitis, discoid lupus erythematosus), drug eruptions (eg, imatinib), and numerous other disorders. 7,9 LAFM and especially follicular mycosis fungoides should be actively excluded. 10 Infrequently, cutaneous T-cell lymphoma other than mycosis fungoides may be associated with FM. 10 The distinction between FM as a reactive process vs FM associated with cutaneous T-cell lymphoma is often subjective. LAFM usually occurs in older male patients with multiple, more widely distributed lesions compared to other forms of FM. 3,9 Regarding histopathology, LAFM shows more inflammatory cells and more pronounced epidermotropism with atypical lymphocytes. 3,11 T-cell receptor gene rearrangement typically shows monoclonality in LAFM. 11 The CD4:CD8 ratio is raised in LAFM compared to other forms of FM. 12 An accurate diagnosis depends on the careful clinicopathological correlation of all the criteria. Every case of FM requires an individualized approach. The constellation of clinicopathological findings and the initial good response to treatment, followed by resolution of the skin lesions, strongly militates against the possibility of an underlying cutaneous T-cell lymphoma in our patient.
In conclusion, there is no specific treatment for LS or FM. Due to the generalized and symmetric distribution of this patient's hyperkeratotic lesions, the decision was made to use a systemic keratolytic agent such as low-dose (0.5 mg/kg) isotretinoin. The patient responded, and the lesions started improving after 3 weeks of treatment. As mentioned above, the risk assessment for T-cell lymphoma has limitations, and close follow-up of the patient will be performed with serial skin biopsies if the clinical picture recurs. | 2023-02-01T16:28:46.760Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "5213317e839096cc01fcfb9059752579c7c05e64",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jdcr.2023.01.017",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "556e26d903cd2b4a30a8a523248fb490b390db8f",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
233296645 | pes2o/s2orc | v3-fos-license | Knowledge Graph Anchored Information-Extraction for Domain-Specific Insights
The growing quantity and complexity of data pose challenges for humans to consume information and respond in a timely manner. For businesses in domains with rapidly changing rules and regulations, failure to identify changes can be costly. In contrast to expert analysis or the development of domain-specific ontology and taxonomies, we use a task-based approach for fulfilling specific information needs within a new domain. Specifically, we propose to extract task-based information from incoming instance data. A pipeline constructed of state of the art NLP technologies, including a bi-LSTM-CRF model for entity extraction, attention-based deep Semantic Role Labeling, and an automated verb-based relationship extractor, is used to automatically extract an instance level semantic structure. Each instance is then combined with a larger, domain-specific knowledge graph to produce new and timely insights. Preliminary results, validated manually, show the methodology to be effective for extracting specific information to complete end use-cases.
Introduction: The sheer growth in unstructured content is overwhelming the ability of businesses to respond effectively. For example, there are over 180,000 pages of regulation in the federal register and they are updated frequently. In the banking industry alone, the costs of staying compliant with (local to global) regulatory requirements are expected to exceed $100 billion annually by 2020 [1]. Staying on top of this depends on a combination of human and machine approaches. The goal of this study is to be able to extract and infer just enough to focus human attention on the right content. For example, given news of a regulatory change, can we understand just enough to infer what businesses might be impacted and who needs to be notified? At a basic level, this work is an effort to ease this information-consumption burden for specific tasks in a domain. We first construct a knowledge graph for the specific domain, in a semi-automated fashion, from an unstructured text corpus of the domain. With the help of a task-specific data model, we extract new information and provide it in an easy-to-consume manner on top of the constructed knowledge graph. The extracted information is also continuously updated in the knowledge graph, which helps to generate domain-related insights over a period of time. This is a second-level utility of the proposed approach.
Although we developed this approach for the regulatory domain, it is generic enough to be applied to various domain-specific tasks. Figure 1 gives a high level view.
Study Data
We assembled a dataset from three sources capturing the financial regulations and banking infrastructure of the United States. First, updates to U.S. federal regulations were downloaded from the U.S. Federal Register (articles) [2]. Second, we identified existing financial regulations by downloading an XML file of Title XII of the U.S. Code of Federal Regulations [3]. Finally, we used a database (CSV) detailing all U.S. financial infrastructure (e.g., active banks, holding companies, federal regulators, etc.), downloaded from the National Information Center: Federal Financial Institutions Examination Council (NIC) [4].
Data-model development
Here, we outline the development and application of a small, task-specific data-model within the context of monitoring financial regulatory change. We used the federal register articles to develop a domain-specific data-model that could identify and semantically relate a set of event-specific entities. The task targeted by the data-model developed for this study was to detect actors and events surrounding any change in a regulatory threshold, which we define as any increase or decrease in quantitative value that affects regulated banks. Data-model development was further informed by consultation with financial regulation subject matter experts. Task-specific entities in the data-model were manually identified and labelled within 131 articles. In total, we identified and extracted various instances of 7 entities from approximately 4500 sentences. The final number of labeled entity instances ranged from a minimum of 561 for regulated_activity_threshold, to a maximum of 6752 for regulatory_authority.
Data Extraction
We trained several NLP models to automatically process a new article to extract entities and relationships and fill their relative slots in the data-model, including: • Semantic Role Labeling (SRL): We used the attention-based deep model of Tan et al. [5]. This RNN-based model first identifies and disambiguates a predicate, and then classifies all other tokens based on their roles. The output of an SRL process for a single document consists of many predicates, each paired with multiple actors.
• Custom Entity Extraction: A bi-directional LSTM model was trained on a combined dataset that included both our labelled data and CoNLL 2017 shared task data for named entities in BIO format [6] (a minimal sketch of such a tagger follows this list). Combining datasets in this way has been shown to boost performance for datasets with limited annotation.
• Relationship Extraction: We used the methodology of Q. Hao et al. [7] to automatically extract relationships based on the set of entities extracted from a given article. This method is a verb-based algorithm that can extract multiple relationships for a given pair of entities in an unstructured text document. Here, we extracted multiple relationships between our entities (if available) to capture different associations between them.
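Below is a minimal PyTorch sketch of the bi-directional LSTM tagger from the second bullet; it scores a BIO tag per token, and omits any details of the trained model (such as pretrained embeddings or a CRF decoding layer) that are not specified above.

```python
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Per-token BIO tag scores from a bidirectional LSTM encoder."""

    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classify = nn.Linear(2 * hidden_dim, n_tags)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.classify(states)           # (batch, seq_len, n_tags)
```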
Pipeline for automated instance-level semantic structure extraction
The above algorithms were integrated into a single pipeline, which was used to automatically extract news of a threshold change event from an instance-level data article, as shown in Figure 2. First, an article was simultaneously processed to extract custom entities using the custom entity extractor, and actor information using SRL. A final, filtered instance entity list consisted of all entities which were extracted by both methods. Entities that were only extracted by either SRL or the custom extractor were discarded. Next, every possible pair of the remaining entities were input into the relationship extractor algorithm and assessed against the article to extract all relevant relationships and form triples. These were used to fill the slots in the data-model, producing a data-model instance that semantically represented the article.
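A compact sketch of that flow is shown below; the three extractor calls (run_srl, run_custom_ner, extract_relationships) and the data-model object are hypothetical names standing in for the models described earlier.

```python
import itertools

def extract_instance(article, data_model):
    srl_entities = run_srl(article)            # actors found by SRL
    ner_entities = run_custom_ner(article)     # custom entity extractor
    # Keep only entities found by *both* methods; discard the rest.
    entities = set(srl_entities) & set(ner_entities)

    triples = []
    for head, tail in itertools.combinations(entities, 2):
        for relation in extract_relationships(article, head, tail):
            triples.append((head, relation, tail))

    # Fill the data-model slots to obtain a semantic instance of the article.
    return data_model.fill_slots(triples)
```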
Domain knowledge graph construction Each data-model instance represents a semantic knowledge graph with limited scope due to the small size of our data-model and lack of rich training data for our extractors. To overcome this limitation, we wrote heuristics based on OpenIE 1 and ClauseIE [8] extraction triplets, thus extracting more nodes and relationships from each article. These were combined with the data model instance. Finally, the domain-specific NIC dataset was incorporated into the knowledge graph, mapping relationships between regulatory agency, bank, bank branch, bank holding company, and federal insurance agency entities. Additional bank information was included as node properties, including addresses, bank assets, holdings, etc.
Generating notifications & insights
Notifications for a user, depicted in the knowledge graph, were generated in two ways. First, using subscription rules written manually based on properties of the extracted information and a user's provided information needs. Second, using the wordnet 2 based semantic similarity of the role description and the information metadata. A key challenge is to reason about the domain in a limited way with mixed quality information, and this is an ongoing effort. Based on their roles and responsibilities, users receive a notification with each relevant federal register update. The information is also used to update the knowledge graph. Over a period of time, this updated information helps to develop insights into events in the domain. This can also be consumed directly by the user by querying the knowledge graph.
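The second route can be approximated with NLTK's WordNet interface, for example as below; this is only a sketch, and the similarity measure and threshold are our illustrative choices, not values from the system.

```python
from nltk.corpus import wordnet as wn

def max_similarity(term_a, term_b):
    """Best WordNet path similarity over all synset pairs (0 if none)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(term_a)
              for s2 in wn.synsets(term_b)]
    return max(scores, default=0.0)

def should_notify(role_terms, metadata_terms, threshold=0.5):
    # Notify when any role keyword is semantically close to the metadata.
    return any(max_similarity(r, m) >= threshold
               for r in role_terms for m in metadata_terms)
```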
Early results & discussion
Quantifying results in numerical terms is difficult for the problem we are addressing. In our use-case, low precision of the overall extraction is acceptable, as users are able to easily discard any superfluous information returned by a high-recall system. A measure called summarization [9], defined as the ratio between the size of the input document and the size of the output, has been designed to measure the reduction of human effort achieved through text processing. In our case, the summarization value is promising and we are currently evaluating it across our dataset.
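Stated as a formula (our rendering of the definition in [9]):

$$\mathrm{summarization} = \frac{|\text{input document}|}{|\text{output}|}$$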
Conclusion & future work
In this paper, we present a solution for semi-automatic consumption of continuous flows of information from large documents, with a focus on fulfilling certain domain-specific tasks and roles. This is based on the semi-automatic construction of a large domain knowledge graph from a domain text corpus, processing of new instance streams, and extracting task-based information using criteria set by users. This work is in its very early stages, with several opportunities for future direction. One area of key interest is expanding alert rules to include those automatically generated by machine learning of user browsing history. Another is overcoming the limited available knowledge graph training data using techniques such as REHession [10]. Finally, knowledge graph completion- and reasoning-based methods [11] can be applied to use-case-specific reasoning tasks to generate hidden insights. We hope this initial work will inspire further study of the problem. | 2021-04-20T01:15:53.553Z | 2021-04-18T00:00:00.000 | {
"year": 2021,
"sha1": "510691428d4ad19a1278b89121564d110722b2cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "510691428d4ad19a1278b89121564d110722b2cd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
59960508 | pes2o/s2orc | v3-fos-license | Adverse events in health and nursing care: patient safety from the professional's experience
The study aims to understand the significant aspects presented by nurses about the experience of having been responsible for a health adverse event. This is a qualitative study with a dialectical hermeneutic approach. The sample comprised 10 nurses who had had at least one experience of responsibility for an adverse event, and 4 nursing supervisors who were responsible for supervision when the event occurred. Analysis of the collected data allowed the construction of analysis categories related to human resource needs. Relevant aspects were identified regarding the need to strengthen the human resource system, with workload dimensioning, teamwork and personnel training emerging as important issues for nurses. The resulting data allow us to envision a pathway toward putting into practice interventions oriented at collaborating with a safe care system.
INTRODUCTION
One of the major goals of healthcare quality, a major concern for sanitary systems nowadays, is patient safety, as pointed out by the World Health Organization. 1 Several aspects related to this issue, such as the organizational context, diagnostic techniques, an environment and a culture of safety, and healthcare-based human resources, stand out as quite relevant issues to be addressed. These aspects are all links of a chain that may ultimately generate damages to patients receiving sanitary care. In a healthcare team, nursing professionals are very closely related and directly connected with patients. In this sense, the design of a nursing plan should presuppose that the specific risks surrounding each patient, as well as the organizational context in which health professionals work, are taken into account. 2 For this reason, given the relevance of the nursing human resource in sanitary services, the perception of professionals and the meanings they assign to their service become quite a significant study object. The perspective of those who render the service should draw the attention of researchers concerned about patient safety. 3 Aligned with these thoughts, some studies show a growing number of nurses committed to the creation of safer systems, taking over leading positions toward providing patients with risk-free quality care. 4 In Uruguay, authorities are constantly concerned about the safety of patients. Nonetheless, the country still lacks comprehensive research in this area. Hence, we can affirm that the generation of contextualized knowledge in this area allows for the exploration, comprehension and dissemination of results. Combined with other studies, such results will be crucial toward correct decision-making processes.
Bearing this in mind, the aim of the present study was to acknowledge the significant aspects brought about by nurses regarding their experience of being responsible for the occurrence of health adverse events.
METHODOLOGY
In face of the need to understand more deeply the experience of nursing professionals regarding adverse events, the authors of the present research decided to undertake a qualitative study, as this methodology allows researchers to acknowledge the perceptions, expectations and feelings of those undergoing such singular situations. 5 The study employed a dialectic hermeneutic approach by means of content analysis, 6 which takes into account the context in which nurses carry out their functions.
Hermeneutics is focused on the interpretation of historically located events, and considers historicity, tradition and authority as determining elements toward the comprehension of any given phenomenon. The employed interviews allowed for a better interpretation of the convergences and divergences produced by the statements of the subjects. 7 Such emphasis generated the understanding of their historical achievements, daily life and reality itself. 8 The dialectical hermeneutics established a critical attitude toward assessed objects, thus reflecting real correlations. As it comprehends and analyzes each part of the whole picture, such methodology allows reality to be seen as an integrated whole, and hence generates more concrete correlations between specific information units and the whole set of information, 9 thus turning patient safety and the human resources involved into special reference points. The contribution of the interpreter is a critical step toward the comprehension of the assessed event. 10 Therefore, sample subjects for this study were sought considering a totalizing, integrating perspective. The sample was composed of 10 nurses who carried out their services at second- and third-level healthcare institutions in Montevideo. These selected nurses had experienced at least one adverse event in the previous two years. Finally, four nurses in charge of supervising the nursing service at the time an adverse event occurred were also part of the study. The selection of subjects and their rich experiences related to this issue were based on a search criterion that added significant relevance to the study. The research defined its limit of participants when all the researchers' inquiries had been fully answered. 11 The data collection process was carried out by means of in-depth interviews that lasted for approximately 90 minutes each. The interviews were performed in a reserved space, making it easier for participants to open up. The study's guiding questions were defined in a way to extract the most effective data, trying to prevent any sort of previous orientation whatsoever. This process occurred between November and December 2012. All interviews were recorded in order to be fully transcribed later by the researcher. Each interview was identified by a letter and a sequenced number (N for nurse and S for supervisor), in a way to maintain the anonymity of the participants. The subjects agreed with the interviews, and the researcher guaranteed anonymity, privacy and confidentiality of the rendered information. All ethical issues regarding research with human subjects established by the country's decree number 379/008 12 were complied with, and participants were informed on the objectives and the reach of the study. All aspects related to the free and informed consent were agreed between both parties. The study was authorized by the Ethics Advising Committee of the Nursing and Health Technologies Faculty of the Catholic University of Uruguay under protocol number 008-2013.
In terms of the use of content analysis as a data analysis process, this method allows for the study of contents in a given context, as well as for the interpretation of resulting materials. 13 In this sense, data were analyzed in compliance with the history of the organization, classification order and final analysis, which generated common sense groups. The careful, thorough reading of each interview generated the following results: the apprehension of the global meaning of the experience of each subject; the organization of meaningful aspects into group categories, aiming at producing common sense clusters; the constitution of concrete categories, named in such a way that they could represent the addressed issue; the construction of each category; analysis of each category, seeking the real experience expressed by nurses and supervisors; and the discussion about the results in the light of the elected methodology and all produced evidence.
RESULTS
Comprehensive analysis of the interviews based on the dialectical hermeneutics allowed for the emergence of significant aspects related to the experiences of the nurses in face of their responsibility for the occurrence of a health adverse event, which led the present study to come up with a set of issues related to nursing human resources. Such an approach constituted the following categories: lack of personnel, workload, teamwork, and continuing education.
In the 'lack of personnel' category, subjects affirmed that the number of professionals working at the time the adverse event occurred was insufficient. Both nurses and supervisors clearly expressed that the lack of personnel directly influenced the emergence of adverse events, according to the following statements: [...] me and the supervising nurse were on our own in the ward that day, two of our colleagues did not show up and were not replaced. Then, I was the whole day in a rush and something would certainly go wrong at any time (N2).
[...] I knew something was going to happen. When I started my shift, there were people lacking, and we did not have any replacement. I tried to organize myself in order to help my colleague in the ward, which had more patients that day, but when I was finally able to get there, half of the shift had already gone, the assisting nurse was new in the job and she had administered an injection to the wrong patient (S1).
The 'workload' category is directly related to the previous one, as a result of personnel deficit. Each statement points out an overload of work at the time the mistaken action took place. The interviewed nurses and supervisors affirm that: [...] there were lots of work to be done, I felt totally alone, I was just not able to do everything at a time. I rushed up and down. It's not like that every day, it's a nice work environment here, but some shifts just surmount lots of work, because of the absences. Our colleagues are often absent and in the end those who come to work end up being overloaded, and sooner or later we'll make mistakes (N6).
[...] I just can't stand the idea of coming through it all once again, I felt terrible, but the reason is that I was utterly overloaded. I told them several times, I insisted that I would not bear it all for much longer, but… (N8).
[...] I believe it happened because she had lots of work to do, she had so many things to do and there were only two people in the whole ward. We don't have many patients, because our ward has only a few beds. The problem is that we lacked personnel (S2).
Analysis of the interviews showed that both supervisors and nurses affirmed that teamwork becomes a strong aspect when lack of personnel emerges. They also pointed out the negative relevance of workload. Moreover, they highlight that the team's communicational process, as well as the promptness of each nurse to help others in the work environment, positively affect the services rendered to patients. Hence, the 'teamwork' category emerged: [...] I would have made a thousand mistakes if it weren't for my team. Despite the inconveniences, it's such a pleasure to work with them. Above all, when a patient comes to your ward and your colleague from another department helps you out, or transfers the shift with everything all set. Teamwork is undoubtedly quite a support for us (N4).
[...] I'm committed to support direct care processes, it's a priority for me over the massive paperwork that usually overloads us. The teamwork produced by the technical personnel, managers and supervisors stands out as something valuable here. Problems come and go, but where there is a good communication process going on, there is always an opportunity to improve (S2).
The interviews show how clearly nurses and supervisors highlight the 'continuing education' category as a special factor to be taken into account. This category was expressed in the following way: [...] the fact is that I just did not know what to do. These things are quite rare in your career, nobody tells you what to do when things go wrong, or if they say you hardly incorporate it (N6).
[...] They organized a workshop on patient safety, but in my opinion there's so much more that needs to be done, things that you can be reminded of every single day. Then, when these things happen again, and I wish they don't, you are more prepared, more aware of what to do (N9).
DISCUSSION
Analysis of the interviews allowed for the construction of the aforementioned categories, and revealed some aspects to be interpreted and discussed in the light of other studies concerning this issue.
One of the major emerging issues was related to the lack of personnel. Such information matches the findings from other authors and reinforces the evidence that the lack of nursing professionals directly influences the safety of patients. 3,14 Obviously, the lack of personnel shapes the number of patients to be cared for per nurse. Such a standard reverberates in service and care management processes, and should be counted among the initiatives toward more effective care responsibility. 15 In connection with this issue, this study observed the excessive workload of the nurses at the time adverse events occurred. In several cases, daily problem-solving practices, such as multitask demands resulting from the lack of quantity and quality of resources — a characteristic that matches the findings from other authors — consequently brought about various shortcomings in the work processes. 16 As it has been shown, the implementation of comprehensive nursing processes generates very positive results toward the safety of patients, as they can assess and detect potential risks. 17 In face of the lack of numbers and quality of professionals, the excessive workloads and unfavorable conditions experienced by the nurses obviously restrict the implementation of an adequate healthcare management. Sometimes, the response to the lack of personnel is only focused on increasing the number of professionals, thus leaving aside the quality of the health care these professionals are able to render.
In spite of these facts, bearing in mind the previously mentioned difficulties, an important element should be taken into account by nursing managers: teamwork provides a powerful support to professionals. The communication among service personnel, as well as their communication with professionals from other services, stands out as a relevant mechanism toward coping with adverse events, and finally toward the diminishment of failures, a fact that also complies with the results of other studies on this issue. 18 These findings call for the establishment of a safety network, where the organizational environment and the openness to horizontal dialogues 19 involving managers, technicians and even patients and their families are envisioned as potential actions toward the generation of safer sanitary health environments. 20 Another important aspect observed by this study was related to the permanent training of human resources. Indicators show a shortage of both undergraduate courses and continuing education programs. The competences of nursing professionals directly affect the safety of patients and the generation of risk-free care processes, a consistent fact with the findings from other researchers who focused on this issue. 21 Professional and personal competences are doorways to safe care. 22 As such, it is very important that this aspect of the academic background of professionals be taken very seriously by those in charge of planning and designing care services.
This study agrees with other researchers who point out that continuing education processes and the effective qualification of professionals enable safer practices, thus preventing damages and minimizing risks derived from sanitary care. 23 In this sense, directors of healthcare services should encourage nurses to advance their knowledge and professional skills. 24 The results of this research allow us to envision relevant aspects of the nursing practice both concerning more complex dimensions — such as the increase in the number of personnel and the diminishment of workload in a sanitary system that lacks human resources within an unattractive national and international context — and at the same time other simpler, more easily implemented dimensions — such as the encouragement of teamwork and the academic and professional improvement of workers. Various examples and models designed to improve communication, responsibility, teamwork, and leadership focus on safety and quality improvement, among others, and may be taken as starting points toward ensuring that patients are the center of healthcare services. 25 The aspects addressed here stand out as part of a safety culture that should permeate institutions and sanitary processes, aiming at becoming essential requirements toward strongly preventing the emergence of adverse events and generating proactive learning processes that may generate good practices. The safety culture allows for the generation of virtuous cycles aimed at designing systems in which past mistakes are prevented from happening again.
Several aspects related to the safety of patients were addressed by the present study. However, this research does not intend to generalize its results, as it focuses on the in-depth examination of the singularities observed in the participants involved in an adverse event. In any case, the results originated from the gathered data enlighten a way toward broadening this issue and transforming theory into a practice that may contribute toward a safety culture in sanitary healthcare, bringing about several positive implications to the practice and the professional qualification of nurses.
FINAL CONSIDERATIONS
This study identified relevant aspects regarding the need for strengthening the human resource system of the nursing practice, as well as its selection process. The nurses highlighted the emergence of categories such as lack of personnel, workload, teamwork, and continuing education of professionals as quite significant aspects. Related data allow for the visualization of a pathway toward the practice of health interventions aimed at collaborating with a safer care system.
In order to contextualize the evidence, further studies should be carried out, so that new knowledge on such a complex issue — patient safety — is generated, aiming at solving complex problems. Concurrently, such initiatives also encourage each service to start developing the model addressed by this study in their daily practice, thus contributing to the quality of sanitary care. | 2018-12-31T11:32:07.411Z | 2015-06-01T00:00:00.000 | {
"year": 2015,
"sha1": "2649ff76dd4392beb006b1cb438a7f1e57c79734",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/tce/a/Fnb6LHm8zf3gc3TvGQByrJQ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2649ff76dd4392beb006b1cb438a7f1e57c79734",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Philosophy",
"Medicine"
]
} |
246481228 | pes2o/s2orc | v3-fos-license | The Ability of Silicon Fertilisation to Alleviate Salinity Stress in Rice is Critically Dependent on Cultivar
Silicon (Si) fertiliser can improve rice (Oryza sativa) tolerance to salinity. The rate of Si uptake and its associated benefits are known to differ between plant genotypes, but, to date, little research has been done on how the benefits, and hence the economic feasibility, of Si fertilisation vary between cultivars. In this study, a range of rice cultivars was grown both hydroponically and in soil, at different levels of Si and NaCl, to determine cultivar variation in the response to Si. There was significant variation in the effect of Si, such that Si alleviated salt-induced growth inhibition in some cultivars, while others were unaffected, or even negatively impacted. Thus, when assessing the benefits of Si supplementation in alleviating salt stress, it is essential to collect cultivar-specific data, including yield, since changes in biomass were not always correlated with those seen for yield. Root Si content was found to be more important than shoot Si in protecting rice against salinity stress, with a root Si level of 0.5–0.9% determined as having maximum stress alleviation by Si. A cost–benefit analysis indicated that Si fertilisation is beneficial in mild-stress, high-yield conditions but is not cost-effective in low-yield production systems. Supplementary Information: The online version contains supplementary material available at 10.1186/s12284-022-00555-7.
Introduction
Silicon (Si) has long been recognised as a beneficial element for many plant species, especially members of the Poaceae (Epstein 1994; Debona et al. 2017; Luyckx et al. 2017). Members of the Poaceae accumulate relatively large amounts of Si; in rice (Oryza sativa), for example, values as high as 10% Si by dry weight have been recorded (Epstein 1994). Much of this can be found in the form of silicon bodies, i.e. amorphous silica that is deposited in particular tissues, or in spines and other structures on the leaf surface (Hartley et al. 2015; Piperno 1988). Such high levels of accumulation and deposition suggest substantial benefits to plants from Si, but one consensus emerging from the literature is an absent, or only marginal, effect of Si on plant growth in optimal, non-stress conditions (Cooke and Leishman 2016; Coskun et al. 2019). In contrast, in a number of species Si has been linked to increased resistance to pests and diseases (reviewed in Debona et al. 2017; Singh et al. 2020; Van Bockhaven et al. 2013) and also to improved tolerance to abiotic stress, notably drought and salinity (reviewed in Thorne et al. 2020).
Salt stress affects approximately 20% of all arable land (FAO and ITPS 2015). Ample Si supply, which can be achieved using Si fertilisation, can reduce salt stress in crops (Thorne et al. 2020). In rice, Si fertilisation is associated with increased anti-oxidative enzyme activity, which reduces the oxidative damage that occurs during salt stress (Das et al. 2018; Yan et al. 2020). Furthermore, Si can reduce the osmotic stress induced by salinity, which is correlated with changes in root morphology and osmotic potential (Yan et al. 2020). Salt-induced depression of photosynthetic rates has also been shown to be partially reversed by Si (Farooq et al. 2015). Overall, these effects of Si are associated with improved growth and yield during salt stress (Ahmed et al. 2019).
The exact underlying mechanisms for the beneficial effects of Si during salinity stress are not clear but may be related to tissue-specific Si deposition. In the roots, Si is mostly found in endo- and exo-dermal tissues, where it could be integrated into the cell wall by cross-linking with other wall components such as hemicelluloses, pectins, lignins and phenolics (Sakai and Thom 1979; Fleck et al. 2015; He et al. 2015). The ensuing physical barrier will limit both ion and water permeability, forcing a relatively large proportion to move via the symplast where flux control is far greater. Alternatively, Si could promote suberisation and lignification of the Casparian strip, for example by altering transcript levels of relevant genes (e.g. Hinrichs et al. 2017). Barrier formation and strengthening of the Casparian strip has been shown to block the apoplastic 'bypass' flow of ions such as Na+ (Yeo et al. 1999; Gong et al. 2006; Flam-Shepherd et al. 2018; Yan et al. 2021) and Cl− (Shi et al. 2013) in the root, and could form a mechanistic explanation for the Si-induced reduction in the levels of harmful ions in the shoot.
These findings suggest that increased levels of Si fertilisation may provide a sustainable strategy to mitigate salinity-associated yield loss. However, the economic feasibility of such an approach is unclear and likely to depend critically on a large set of parameters (Singh et al. 2020; Thorne et al. 2020). Some of the most important ones would include the type and cost of Si fertiliser, quantitative data regarding the exact levels of Si that are required to maximise salt stress alleviation, the variety under cultivation, and the level of stress that is applied. We therefore studied a number of rice cultivars, including several that are widely cultivated, to analyse how their response to salinity, to Si supplementation, and the interaction between these factors, varied. Growing plants in both hydroponics and soil, critical values for root and shoot Si contents were determined and showed that cost-benefit ratios greatly vary according to growth conditions, rice cultivar, and production system. Costing models predict that Si fertilisation is beneficial in mild-stress, high-yield production systems but is not cost-effective in low-yield production systems.
Different Cultivars Show Variation in Response to Salt Stress and Silicon
To test how the benefits of Si addition for alleviating salt stress varied between cultivars, nineteen cultivars were grown hydroponically at low (0.07 mM) and high (1 mM) ambient Si, without (0 mM NaCl) and with (50 mM NaCl) salt stress ( Fig. 1a; cultivars listed in Additional file 1: Table S1).
A number of observations can be made on the outcomes of this experiment: first, there is little difference in the growth rate of cultivars under low vs high Si when salinity stress is absent, a phenomenon that has previously been reported (Yeo et al. 1999; Lekklar et al. 2019; Ahmed et al. 2019; Yan et al. 2021).
Second, raising external Si levels greatly improved plant resilience to salt stress, such that salinity-induced growth losses were approximately halved. Averaged across cultivars, growth rate reduction at low ambient Si was ~ 30%, whereas it was only ~ 15% at high ambient Si. Previous studies have focussed on overall biomass production rather than relative growth rates (RGR), but have nevertheless reported similar beneficial effects of Si (Flam-Shepherd et al. 2018; Lekklar et al. 2019; Ahmed et al. 2019).
Third, the salt sensitivity index (defined as salt-induced growth reduction with respect to control conditions) is not the same for the low and high Si treatments. For example, although the cultivar FL478 is traditionally defined as a salt tolerant variety (Walia et al. 2005), this is only evident at high external Si, whereas it shows considerable salt sensitivity at low Si. Likewise, Farooq et al. (2015) found that the growth of the selected salt-tolerant cultivar, KS-282, was more inhibited by salt treatment than the selected salt-sensitive cultivar, IRRI-6, when plants were grown without Si. Only when plants were supplemented with Si was the salt tolerance of KS-282 evident.
Fourth, a rise in ambient Si clearly benefits some cultivars more than others. Previous work that compared small numbers of cultivars with different salt tolerance is less clear: Farooq et al. (2015) found a stronger effect of Si on the salt-tolerant cultivar than the sensitive cultivar, but Yeo et al. (1999) found that the effect of Si was more pronounced for a salt-sensitive cultivar. We therefore compared a much larger number (nineteen) of cultivars. The percentage 'rescue' by Si (i.e. salt-induced growth inhibition at high Si relative to that at low Si) varied from − 24% to 106% (Fig. 1b), demonstrating that Si actually exacerbated salt damage in some cultivars (for example in IR64). However, in others it (almost) restored growth to that observed in non-stress conditions, as can be seen for lines GSOR402 and 267. The amount of rescue strongly correlated with salt sensitivity in a negative manner (r = − 0.68), but only in high Si conditions (Fig. 1c). When we used salt sensitivity values that were determined at low Si, a significant correlation was no longer observed (Fig. 1c inset). Thus, these results imply that the mitigating effect of Si on salinity stress is more pronounced in salt-tolerant lines.
Where Does Si Impact?
To assess whether the beneficial effects of Si depend on the levels deposited in roots, shoots or both, we analysed correlations between RGR and associated parameters on the one hand, and root or shoot tissue [Si] on the other. RGR was measured rather than yield as measuring yield requires soil-grown plants, from which it is not possible to obtain reliable root Si measurements. When grown in non-stress, control conditions, there was a consistent, substantial negative correlation between RGR and tissue Si. Although present when analysing root Si, this negative correlation was far stronger with shoot Si, and had r values of − 0.54 and − 0.69 respectively for low and high Si conditions (Additional file 1: Fig. S1a, b). When plants were grown in the presence of salt, even stronger (negative) correlations were observed between RGR and shoot Si with r values of − 0.73 and − 0.77 respectively for low and high Si conditions (Additional file 1: Fig. S1c, d). Thus, in rice at least, it appears that varieties that tend to accumulate Si in shoot tissue are relatively slow growing, and this phenomenon is insensitive to both ambient Si levels and (salinity) stress.
Theories of defence allocation on plants predict that Si accumulation should be greater in slower growing plant species and individuals (e.g. Massey et al. 2007). Such a trade-off between Si and growth was demonstrated across a range of crop species and their wild relatives: Simpson et al. (2017) found higher Si accumulation was associated with a lower growth rate, especially for larger plants. Other studies have likewise reported a negative correlation between Si accumulation and plant biomass (Johnson and Hartley 2018; de Tombeur et al. 2021). As yet, we have no mechanistic explanation for this observation; a substantial fraction of Si is translocated to the shoot by bulk flow through the xylem and perhaps transpiration fluxes are relatively large in slow growing plants. Alternatively, as Si accumulation involves the use of active efflux transporters (Ma et al. 2007; Ma and Yamaji 2015), there may be an energetic cost associated with high Si uptake (Simpson et al. 2017).
Fig. 1 a Response of different rice cultivars to salinity and Si. Plants were grown in hydroponics for 30 d at low (0.07 mM) and high (1 mM) ambient Si, with (50 mM NaCl) or without (0 mM NaCl) salt stress. Data show means (N = 3-5) with standard deviations. b Increasing external Si concentration reduces the growth loss caused by salinity. When external Si was raised from 0.07 to 1 mM, an overall reduction in growth loss was determined. However, the level of 'rescue' (salt-induced growth reduction at high Si relative to salt-induced growth reduction at low Si) varies greatly between rice varieties. c Si rescue is greater in salt-tolerant rice cultivars. Raising ambient levels of Si from 0.07 to 1 mM lowers the salt-induced growth penalty, but less so in salt-sensitive cultivars. Salt-sensitivity ranking was done at high external [Si]. Inset shows the large variation in Si rescue and the lack of correlation with salt sensitivity when cultivars are ranked for sensitivity at the low external Si condition.

Interestingly, the nature of these relationships drastically changed when the effect of Si on salt sensitivity was investigated, rather than on growth. Salt sensitivity, expressed as percent rescue (Fig. 1b), showed a significant positive correlation with root Si only (Fig. 2). Though this was rather weak for the low Si condition (r = 0.34), it substantially increased to an r value of 0.68 in plants grown in the high Si plus salt stress condition. These data strongly suggest that root Si rather than shoot Si is instrumental in improving salt tolerance in rice, but the involved mechanism(s) are not clear. Si deposits around the root exo- and endodermis can strengthen the Casparian strip barrier function. This is particularly relevant in young roots and regions where lateral roots emerge, because Casparian strips are not fully formed there, allowing considerable 'leakage' of Na+ and Cl− ions via the apoplast (e.g. Gong et al. 2006; Flam-Shepherd et al. 2018). Average shoot Na+ levels were greatly reduced after Si supplementation (Table 1), from around 2000 to 1200 µmole g DW−1, which corroborates a model where Si reduces ionic bypass flow in the root and as such mitigates ionic toxicity stress in the shoot (Yeo et al. 1999; Gong et al. 2006; Flam-Shepherd et al. 2018; Yan et al. 2021). Such a scenario is supported by data that show no or very little effect of Si on shoot Na+ in OM4900 and IR64 (Additional file 1: Table S2), cultivars that do not respond to Si. Thus, in such cultivars bypass flow may be inherently low, as was previously argued to be the case for the Si non-responsive Pokkali (Flam-Shepherd et al. 2018). However, we also identified cultivars with a large Si-induced reduction in shoot Na+ that nevertheless showed no, or a negative, response to Si supplementation (e.g. GSOR3 and GSOR108). In these cases, Si-independent salt sensitivity factors other than shoot Na+ may be more important, such as maintaining gas exchange or adequate vacuolar sequestration of Na+ and Cl− (e.g. Maathuis et al. 2014). Alternatively, the potential benefits of Si (e.g. lowered shoot Na+) may be negated by other Si effects such as increased transpiration causing water stress (Thorne et al. 2020) or adverse interaction with Na+ and Cl− transporters (Flam-Shepherd et al. 2018).
Critical Levels of Si for Maximum Stress Mitigation
The above data show that Si leads to an improvement of biomass production during salt stress, but the extent varies greatly between cultivars. To assess whether such variability is a function of cultivar-specific Si requirement, eight cultivars were tested on an expanded range of Si concentrations (0, 0.07, 0.4, 1 and 3 mM) and levels of salinity (0, 50 and 80 mM NaCl). This set included salt tolerant lines (GSOR3 and FL478), lines with medium tolerance (GSOR115, and the widely grown elites IR64 and OM4900), a well characterised drought tolerance trait donor (CSR28) and two salt sensitive, high yielding elite varieties (IR154 and IR74371).
The growth data in Fig. 3 show that the more saltsensitive lines struggle to survive in low Si, saline conditions, but, to various degrees, they can be rescued by Si supplementation. In contrast, for other cultivars such as OM4900 there is no or little effect of Si (as was also seen in Fig. 1), irrespective of the salinity level, while an intermediate response is seen in GSOR3 where the beneficial effects of Si are primarily manifest at 80 but not at 50 mM NaCl. These more detailed data show that where plants do respond, the beneficial effect of Si for mitigating salt stress typically levels off when the external Si concentration reaches ~ 0.4 mM under 50 mM NaCl and when it lies between 0.4 and 1 mM for 80 mM NaCl ( Fig. 3; Additional file 1: Table S3). However, this 'critical' value for maximum Si effect may be higher (~ 1 mM) in salt sensitive lines such as IR154 (at 80 mM) and IR74371 (at both 50 and 80 mM NaCl). To assess how the external Si and NaCl conditions impact on tissue Si, root and shoot Si contents were analysed and values are shown in Table 2. Salinisation per se induced a large raise in tissue Si levels. This is particularly evident for shoot Si and occurred at all external Si concentrations. Although some papers report a salinity-induced reduction in tissue Si (e.g. As of yet, we lack a mechanistic explanation for it. It does argue against Si translocation being (mostly) transpiration-driven; transpiration is greatly reduced in response to both salinisation (Sultana et al. 1999;Moradi and Ismail 2007) and Si fertilisation (Gao et al. 2006;Farooq et al. 2015) yet shoot Si levels dramatically increase (Table 2; Additional file 1: Table S3). Alternatively, changes in Si transporter activity may underpin the increase in Si accumulation, with Abdel-Haliem et al. (2017) reporting that salt stress increased Lsi1 and Lsi2 expression in Si supplemented plants, although expression was decreased in salt-stressed plants without Si.
Table 2 further shows that with 0.4 mM Si in the external medium, a level of Si where mitigation is maximum for most genotypes (see Fig. 3), the corresponding average level of root Si is 0.56% (value range of 0.47 to 0.76%; Additional file 1: Table S3). Thus, assuming the mitigating effect of Si is primarily root-based, it is tempting to conclude that ~ 0.56% root Si suffices to maximise its benefits. However, maximum Si efficacy in salt-sensitive lines such as IR154 and IR743 requires around 1 mM external Si (Fig. 3), which corresponds to a root level of around 0.73% (value range of 0.56 to 0.88%). In other words, salt-sensitive lines require ~ 30% more root Si for Si-induced mitigation of salt stress. In all, these findings suggest that (a) root Si is more important than shoot Si in protecting rice from salinity damage, (b) root Si levels of around 0.5 to 0.9% suffice to maximise the mitigating effects of Si and (c) salt-sensitive lines require 30-40% more root Si than tolerant lines to achieve these benefits. Thus, while previous studies predominantly focussed on the role of shoot Si where salt stress is concerned (Abdel-Haliem et al. 2017; Farooq et al. 2019; Lekklar et al. 2019), or in other cases did not determine tissue Si levels (Gong et al. 2006; Shi et al. 2013; Flam-Shepherd et al. 2018), further studies on roots may help reveal the mechanistic basis for the mitigating effect of Si.
The Impact of Si on Biomass and Yield in Soil Grown Plants
The above data give a useful foundation regarding the ambient (i.e. externally supplied) and internal tissue levels of Si that are necessary to relieve moderate or severe salinity stress. In an agronomic setting, it is important to determine the external levels of Si required to optimise growth improvement under salt stress, whilst insights into the 'critical values' will facilitate estimation of the amount of Si that needs to be replenished in order to prevent soil depletion of Si. To obtain estimates of these values in soil-like environments as opposed to hydroponic systems, and hence increase the practical relevance of our findings, a number of experiments were repeated using pot-grown plants. Furthermore, plants were grown to maturity to allow us to quantify Si impact not only on biomass but also on grain yield. Table 3 shows how biomass and yield were affected by Si and salinity after cultivation at 4 different salinities with electric conductivity (EC) of 0, 4, 6 and 8 dS m−1, and 3 different levels of 0, 90 and 130 kg ha−1 added Si. To normalise between growth experiments and cultivars, the data are expressed relative to the 'no Si added' control (absolute values for shoot and panicle biomass can be found in Additional file 1: Table S4). In general, salinity greatly suppressed plant vigour, and biomass changes in response to increased salinity largely reflected the data and findings obtained with our hydroponic system (cf. Fig. 3; Table 3). As in hydroponics, more growth reduction was recorded in pot-grown sensitive lines such as IR154 and IR743 and less so in tolerant lines like CSR28 and IR64.
One-way ANOVAs showed a significant positive impact of Si addition in limiting the salinity-induced growth reduction in the case of three cultivars: IR64 (at EC = 6 and 8 dS m−1), IR743 (at EC = 4 dS m−1) and …

Table 3 Effect of Si fertilisation on rice biomass and grain yield. Plants were soil grown at four levels of salinisation (EC of 0, 4, 6 and 8 dS m−1) and three levels of Si fertilisation (0, 90 and 130 kg ha−1); the effect of Si is expressed as percentage reduction, relative to the 'no Si added' condition. (Columns: Cultivar; EC (dS m−1); Shoot dry weight reduction (%); Grain weight reduction (%).)

For OM4900, its growth was actually reduced significantly by Si supplementation at all three levels of salinity (EC = 4, 6 and 8 dS m−1), although there was no effect of Si addition for this variety in hydroponics (Fig. 3). Statistical tests on changes in grain yield showed significant mitigation of salt-induced yield reductions in IR64 (EC = 4 dS m−1), in CSR28 (EC = 4 dS m−1), and IR154 (EC = 6 and 8 dS m−1). No discernible influence of Si supplementation was found in the IR743 and IR833 cultivars, but as was seen for plant biomass, Si had a detrimental effect on OM4900 yield (EC = 4 and EC = 8 dS m−1). Ahmed et al. (2019) found the beneficial effect of Si during salt stress in rice was similar for shoot dry weight and yield. We found that the impact of Si supplementation on mitigating salt stress was consistent across the two traits of biomass and yield for IR154, CSR28 and OM4900, but less so for IR64 (where yield rescue was only seen at EC = 4 dS m−1 while biomass rescue occurred at EC = 6 and 8 dS m−1), and not at all for IR74371, which showed biomass rescue but no effect of Si on yield. This suggests that it can be challenging to predict the beneficial effects of Si for rescuing yield under salt stress from measuring biomass alone; the most complete picture of the ability of Si to mitigate the impacts of salinity on the performance of a cultivar will come from measuring both biomass and yield.
The Economic Feasibility of Si Supplementation
Our data show that Si can positively impact on both biomass and yield production in several cultivars (e.g. IR64, IR154, IR743 and CSR28). At the same time, Si does not appear to affect either growth or yield in other cultivars such as IR833, whereas it can even have a negative influence, as seen with the OM4900 variety. These different responses are clearly going to impact on the utility and efficacy of applying Si as a mitigation for salt stress. For example, the above data strongly suggest that in the case of OM4900 cultivation, Si supplementation is likely to be counter-productive, and for cultivars like IR833, negative impacts are unlikely but the lack of measurable Si-induced growth promotion under salt stress would mean it was a waste of money.
For cultivars where Si did improve yield (IR64, IR74371 and IR154), Table 3 allows us to estimate yield improvement at the two Si supplementation levels. For these three cultivars, EC = 4 dS m−1 salinity caused on average a 43% drop in yield when no Si was added (background Si levels were equivalent to ~ 1 kg ha−1). This percentage reduced to ~ 30% and ~ 24% respectively when 90 or 130 kg ha−1 Si was applied. Thus, Si application at 130 kg ha−1 would generate a ~ 45% reduction in salt-induced yield loss relative to the no-Si condition. In contrast, the average yield reductions for EC = 6 dS m−1 and EC = 8 dS m−1 would be around 65% and 80% in the absence of added Si. These values would change to around 60% and 55% for EC = 6 dS m−1 when Si is supplied at 90 or 130 kg ha−1, whereas the equivalent values at EC = 8 dS m−1 would be 80% and 70% for 90 and 130 kg ha−1 respectively. Overall, these numbers show that Si rescue is substantial at low-level salt stress, but almost absent when stress is moderate or severe.
Using field conditions that included a limited amount of water stress, Flores et al. (2021) suggested that foliar applications of intermediate levels of Si may be economically viable for rice. Likewise, a literature inventory by Alvarez and Datnoff (2001) concluded that Si fertilisation would likely be economically viable in most rice-producing countries. However, neither of these studies was based on specific, experimentally imposed, stress conditions and/or assessed the impact of different rice cultivars. To assess the applicability of Si as a commercially viable approach, a generalised costing model has been developed (Additional file 2: 'Costing model'), based on a number of assumptions (see Suppl. data). Si fertiliser cost depends on its form; blast furnace slags have very low (2-5%) Si contents and can contain many other chemical components that can impact on plant growth. It is therefore not considered here. Likewise, rice straw is often used as a cheap form of Si fertiliser on many small holder farms, but this contains variable amounts of Si in addition to other chemical components and thus is not considered here. Na-, K- and Ca-silicates contain 20-25% Si and command prices of $500-1000 per tonne Si (e.g. https://www.alibaba.com/showroom/wollastonite-price.html). This equates to an extra production cost of $50-120 ha−1 when applying 90-130 kg ha−1 Si supplementation. The proportional impact of this cost varies according to production system.
In large (> 25 ha) farms in China or the USA, yields typically reach 7-8 t ha−1 in non-stressed conditions. These farms do not use Si fertilisation, but the ambient Si availability is unknown. Assuming yields of 7 t ha−1, production value of around $3600, and costs of around $2200 ha−1 (Zhang et al. 2019), such farms would generate profit margins of about $1400 ha−1. In this scenario, a salinity-induced yield reduction of 43% (see above) would lower sales income to $2050 (3600-1550), creating a loss of $150 ha−1. In the case of Si-responsive cultivars, Si supplementation would restore production value to $2750 and consequently profits of $495 ha−1; fertiliser costs would reduce this to $430-450 ha−1. Moderate (EC = 6 dS m−1) and severe (EC = 8 dS m−1) stress would further eat into earnings, generating a loss irrespective of the production system. It is important to point out that these calculations are made on the basis of yields rather than biomass; salt-induced biomass reductions are generally less severe than yield reductions (Table 3), and calculations based on biomass would therefore suggest apparent profitability at the EC = 6 and EC = 8 salinity levels in large-farm production conditions. However, much of the world's rice production takes place in small holder farms, with 400 million people in Asia alone involved in growing rice on farms smaller than 2 ha (IRRI 2016). Such small holder farms typically have lower yields (2-3 t ha−1) and the cost-benefit analysis is very different. For example, Pathok and Deka (2019) estimate Indian average production costs per hectare (assuming 3 t ha−1 yield) of around $450 against a paddy sales price of ~ $625 ha−1 (based on the governmental minimum support price), generating a farmer's income of ~ $175 ha−1. A salinity-induced yield reduction of 43% causes a net loss of around $30 ha−1, even in the presence of Si, and is clearly not sustainable. Using Fijian numbers of paddy sales price ($1930 ha−1) and production costs ($1700 ha−1; Bong 2017) results in a slightly higher farmer's income of around $230 ha−1. But in this case too, even mild salinity leads to an overall loss which, if a minimal extra cost of $50 is added for Si supply, amounts to $280 ha−1. In other countries where low-production, small holder-dominated rice cultivation prevails, very similar numbers populate cost-benefit analyses.
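To make the arithmetic explicit, here is a minimal sketch of this cost-benefit model using the large-farm figures quoted above; small differences from the quoted dollar values reflect rounding in the original text.

```python
def profit(revenue_ha, cost_ha, yield_loss_frac, si_cost_ha=0.0):
    """Per-hectare profit after a fractional salinity yield loss."""
    return revenue_ha * (1.0 - yield_loss_frac) - cost_ha - si_cost_ha

REVENUE, COST = 3600.0, 2200.0            # large farm, ~7 t/ha, no stress
print(profit(REVENUE, COST, 0.43))         # EC=4, no Si: -148.0 (a loss)
print(profit(REVENUE, COST, 0.24, 50.0))   # EC=4, Si at $50/ha:  486.0
print(profit(REVENUE, COST, 0.24, 120.0))  # EC=4, Si at $120/ha: 416.0
print(profit(REVENUE, COST, 0.55, 120.0))  # EC=6, Si: -700.0 (still a loss)
```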
Conclusions
The plant science literature has seen an explosion in the number of publications reporting the benefits of Si. This element appears to have positive properties that relate to all aspects of plant physiology, including abiotic stresses such as drought, heat, cold, flooding and metal toxicity, and biotic factors such as tolerance to pathogens and herbivory (see Singh et al. (2020), Thakral et al. (2021), and Thorne et al. (2020) for recent reviews). The mitigating effects of Si with respect to salt stress have been studied for decades, especially in rice (Matoh et al. 1986; Yeo et al. 1999; Gong et al. 2006). Most of these studies typically focused on the impact of Si on biomass in a specific cultivar (Gong et al. 2006; Farooq et al. 2019; Lekklar et al. 2019), whereas field studies frequently involve application of unrealistically high levels of Si supplementation (Mauad et al. 2016; Ullah et al. 2018; de Tombeur et al. 2021).
Work from this study shows that there is great variability in the benefits of Si addition, when ambient Si levels are low, for the alleviation of salinity stress, with rice varieties that are negatively impacted, those that do not respond, and others that show positive effects. Furthermore, the data suggest that Si efficacy is greater in more salt tolerant varieties. Thus, it is imperative that cultivar-specific data are collected in studies aiming to assess the benefits of Si supplementation in alleviating salt stress. Our results also suggest that changes in biomass are not necessarily good predictors of yield when determining the effects of Si fertilisation, so data on both parameters may be needed. In terms of practical applications, it would be very useful for such studies to include evaluations of the economic feasibility of Si supplementation, especially with reference to differing cultivation and production systems. The relatively simple cost-benefit model presented here is based on greenhouse studies and a small set of basic assumptions that can easily be adjusted for various economic parameters. Clearly, the actual financial gains and losses will be sensitive to multiple edaphic and climatological factors and will require data from specific cultivars, preferably in the form of field trials. In contrast, more general trends revealed by our modelling are less likely to depend on local conditions and include the notion that Si application is likely to be more profitable in high production systems and also at lower levels of salinisation.
Plant Growth Using Hydroponics
Rice seeds were germinated in sand. After 7 d, plants were transferred to standard Yoshida hydroponic medium and grown for another 10 d. Subsequently, plant weights were recorded and plants were exposed to hydroponic standard medium (control) or media that were supplemented with 50 or 80 mM NaCl to induce salinity stress. Various levels of Si (0, 0.07, 0.4, 1, or 3 mM) were applied by adding Na-silicate. Where appropriate, Na levels were normalised to 3 mM using NaCl. The hydroponic medium was renewed once per week and treatments lasted for 30 d, after which total plant, shoot and root fresh weights were determined. Plants were cultivated in a greenhouse with the following conditions: a 12 h photoperiod which consisted of natural daylight augmented with artificial light to 600-1000 µmol m−2 s−1. Day and night temperatures were 28 and 24 °C respectively. Relative growth rates (RGRs) were calculated as RGR = (ln W2 − ln W1)/(t2 − t1), where W1 and W2 are the initial and final fresh weights (g) and t1 and t2 the corresponding times (d). At least 3 biological repeats were carried out.
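A worked instance of the RGR formula, under the interpretation given above:

```python
import math

def rgr(w1_g, w2_g, t1_d, t2_d):
    """Relative growth rate in g g^-1 d^-1."""
    return (math.log(w2_g) - math.log(w1_g)) / (t2_d - t1_d)

# e.g. a plant growing from 2.0 g to 12.0 g over the 30 d treatment:
print(rgr(2.0, 12.0, 0, 30))  # ~0.0597 g g^-1 d^-1
```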
Plant Growth in Soil
To simulate rice cultivation in soil, seeds were germinated in sand and after 7 d, six seedlings were transferred to a 10 L box which contained 7 kg of substrate of the following composition: 75% John Innes compost #2, 15% Coarse Vermiculite, 10% Sand (16:30 Silica Sand). Soil salinisation at 0, 4, 6 and 8 dS m −1 electric conductivity (EC) was achieved by adding 0, 400, 600 or 800 mL of a 50 mM NaCl solution in 8 instalments (twice a week) per box. Silicon fertilisation at 0, 90 and 130 kg ha −1 was achieved by adding 0, 720 or 1040 mg Si per box (800 cm 2 surface) in the form of Na-silicate. XRF measurements (see below) showed low background Si content of around 0.1 mM (soil water basis), equivalent to around 1 kg ha −1 . Silicon was applied in two doses, after one week and 5 weeks. Plants were grown in a greenhouse with 12 h day/night temperatures of 22 and 28 °C, ambient relative humidity and lighting with a minimal level of 500 µmoles m −2 s −1 for 6 months after which all shoot tissue was removed by cutting at the root:shoot junction for determination of plant and panicle weights. Three biological repeats were carried out.
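The per-box Si doses follow directly from the field rates and the 800 cm² box surface; a quick check of the stated 720 and 1040 mg values:

```python
def dose_mg_per_box(rate_kg_per_ha, box_area_cm2=800.0):
    ha_fraction = box_area_cm2 / 1e8           # 1 ha = 1e8 cm^2
    return rate_kg_per_ha * ha_fraction * 1e6  # kg -> mg

print(dose_mg_per_box(90))   # 720.0 mg, as stated
print(dose_mg_per_box(130))  # 1040.0 mg, as stated
```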
Flame Photometry Sample Preparation and Analysis
Shoots and roots of plants were separated and DW obtained after 48 h drying at 80 °C. Tissue was extracted for 48 h using 5 mL of CaCl 2 (20 mM). Extract Na + content was determined using a Sherwood 410 flame Photometer (Cambridge UK).
Tissue Silicon Measurements
Silicon contents were measured by portable X-ray fluorescence spectroscopy (P-XRF) using the method of Reidinger et al. (2012). Dried leaf and root material was ball-milled (Retsch MM400 Mixer mill, Haan, Germany) for 3 min at a frequency of 20 Hz. Ground material was pressed at 10 tons into pellets using a manual hydraulic press with a 13 mm die (Specac, Orpington, UK). Si analysis (% Si dry weight) was performed using a Nitron XL3t900 GOLDD XRF analyser (Thermo Scientific, Winchester, UK). For XRF calibration, silicon-spiked synthetic methyl cellulose (Sigma-Aldrich, product no. 274429) was used. To avoid signal loss by air absorption, the analyses were performed under a helium atmosphere (Reidinger et al. 2012).
Statistical Analyses
All experiments consisted of at least 3 biological repeats and data are presented as means with standard deviation. To assess the effect of Si on biomass and yield of hydroponically and soil-grown plants, one-way ANOVAs were performed (significance at p < 0.05). | 2022-02-03T14:41:52.168Z | 2022-02-02T00:00:00.000 | {
"year": 2022,
"sha1": "c631c949d558f97977f84bfa489415168b7e177c",
"oa_license": "CCBY",
"oa_url": "https://thericejournal.springeropen.com/track/pdf/10.1186/s12284-022-00555-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "440eea47673043daaf3352a44611dbae1b6d191f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251604884 | pes2o/s2orc | v3-fos-license | The economic-administrative role of geographic information systems in rural tourism and exhaustive local community development in African marginalized communities
Purpose – The purpose of this study was to examine the latent part of geographic information systems in inclusive sustainable rural tourism, community-based natural resource management (CBNRM) and community development and empowerment in Southern Africa, Africa generally and many rural areas elsewhere worldwide.
Design/methodology/approach – The viewpoint utilizes literature and document reviews to assess African and worldwide agricultural, environmental and tourism resources management scenarios. It thus liaises with CBNRM and geographic information systems in sustainable tourism and local community development applications.
Findings – This review viewpoint uncovers a better potential synergetic relationship between tourism and rural (agricultural) activities that geographic information systems, along a concept of CBNRM, can amplify. Hence, it has poised a need for a decent and integrated tourism strategy to develop and empower the pertinent communities in many rural and marginalized areas within the continent.
Originality/value – Many rural communities in Southern Africa and Africa broadly dwell in low-income areas. Such milieus are rich in natural biodiversity, including tourism destination areas. Geographic information systems, sustainable tourism and CBNRM can form a gestalt of local community development projects within their environs.
Introduction
Impact evaluation and simulation are increasingly becoming an integral part of tourism development (Ramaano, 2021a). This is consonant with Singh (2015), who maintains that geographic information systems (GISs) have been implemented in various fields such as forestry, urban development and planning, geography and environmental considerations.
Admittedly, GIS has been only limitedly explored in tourism, a factor implying its imminent turnaround. Tourism success revolves around effective outlining and development, which can be further magnified by applying a GIS. That being the case, Jovanović (2016) further emphasized the fundamentals of planning, advancing research and retailing within any tourism enterprise, and posited that GIS has an indispensable role in such prerequisite obligations. African impoverished countries can redeem themselves within such inferred horizons (Ramaano, 2021g, 2022b). This is more consistent with Fagerholm, Oteros-Rozas, and Raymond (2016), who appraise the linkages within ecosystem services, land use and well-being in an agroforestry scene utilizing public participation GIS (PPGIS) as the latent features of beneficial agro products. Accordant with the African rural environs conundrum, PPGIS vouches for the empowerment and incorporation of marginalized populations, who have limited voice in the public platform, with geographic technology teaching and cooperation as its central theme. It applies and provides digital maps, satellite imagery, sketch maps and numerous other spatial and visual instruments to improve geographic engagement and knowledge on a social level. Similarly, African rural communities can also lean on the ideals of the open sourcing of Quantum GIS (QGIS) utilities to combat impoverishment and improve diverse entrepreneurship productions and their socio-economic conditions. With that, Albuquerque, Costa, and Martins (2018) assert that GIS are instruments that permit a more dependable decision-making process, indeed in reaching legislators and directors in tourism expansion and implementing blended touristic knowledge; they are compelling means for the improvement of destination retailing procedures. GIS such as QGIS, participatory GIS (PGIS) and remote sensing have been viewed as having a potentially significant role in checking environmental conditions, assessing the locations for envisaged and planned developments, defining contradictory values and modeling relationships. Hence their potential significance in tourism management (Martin, Curtis, Fraser, & Sharp, 2002; Briedenhann & Wickens, 2004; Mearns, 2012), and in agricultural administration and land use suitability conditioning (Wilson, 1999; Malone et al., 1998; Musakwa, 2018) in African rural areas.
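As a hypothetical illustration of such a suitability assessment, the open-source geopandas stack (mirroring operations available interactively in QGIS) can overlay agricultural land with buffers around candidate tourism sites. The file names, land-use classes, CRS choice and the 1 km buffer distance are illustrative assumptions only, not data from any study cited here.

```python
import geopandas as gpd

land_use = gpd.read_file("land_use.shp")    # polygons with a 'class' column
sites = gpd.read_file("tourism_sites.shp")  # candidate destination points

# Work in a metre-based CRS so buffers are in metres (UTM 36S is assumed).
land_use = land_use.to_crs(epsg=32736)
sites = sites.to_crs(epsg=32736)

# Keep agricultural land within 1 km of a candidate tourism site: a simple
# proxy for agritourism suitability.
farmland = land_use[land_use["class"] == "agriculture"]
near_sites = gpd.GeoDataFrame(geometry=sites.buffer(1000), crs=sites.crs)
suitable = gpd.overlay(farmland, near_sites, how="intersection")
suitable.to_file("agritourism_candidates.shp")
```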
Systematic assessment of ecological effects is often hampered by a lack of information (Manel, Williams, & Ormerod, 2001; Kettenring & Adams, 2011). Thus, Huddleston et al. (2003) assert that even though most rural mountain communities partake in agricultural activities, their livelihood systems are mixed; hence they endorsed the GIS-based analysis of mountain climates and residents. GISs are instruments and techniques for data manipulation, integration, imaging and analysis. GISs have since proved an essential solution in these regards and obligations (Bahaire, 1999; Fagerholm et al., 2016). To this end, Orimoloye et al. (2019) urged the application of remote sensing and GIS in wetland modification surveying and geography dynamics and its imports on the iSimangaliso Wetland Park, South Africa. Therefore, it suffices to realize that both agriculture and wetlands are significant facets of the provisioning utility of ecosystem services, essential livelihood items in rural areas and vital subsistence hubs against deprivation in marginalized rural communities. They are indeed crucial cores for agritourism and ecotourism developments. Agritourism involves any farming-based operation that brings guests to farms. It affords secondary revenue for agriculturists (Phillip, Hunter, & Blackstock, 2010), thus enabling an additional source of income to afford adaptation technologies amid the climate change era. Meanwhile, ecotourism entails responsible travel to the natural environment, protecting nature and empowering the local communities (Keyser, 2002). Hence, GIS has been used in culinary mapping and tourism development in South Africa's Karoo region (Du Rand, Booysen, & Atkison, 2016).
CBNRM enlists local people to guard their land, water, animals and plants and to use these natural reserves to amplify their current and future livelihood opportunities. It therefore facilitates every participating member of the society having a duty in their subsistence culturally, economically and spiritually. Community-based natural resource management (CBNRM) is thus a scheme to work synchronically to defend communities' natural resources and simultaneously produce long-lasting value to the community (DEAT, 2003). GISs such as PGIS mimic the foremost essence of CBNRM in participatory rationales and can jointly synergize livelihood efforts within African rural communities. Quan, Oudwater, Pender, and Martin (2001) and McCall and Minang (2005) highlighted the position of participatory mapping and PGIS relevance within participatory spatial planning for CBNRM and sustainability in emerging economies.
1.1 The potential for sustainable rural tourism and community-based natural resource management (CBNRM) activities within African marginalized communities
CBNRM is a path that allows for land and natural resources management. It holds the potential to contribute resolutions to some of the complications encountered within communal lands. It underpins communities' rights to resources, successful farming, food supply, job production and small businesses. It thus promotes broad equality, diversity and inclusion in its core themes. In Southern Africa, and Africa generally, many people live with and depend on natural resources. Ecotourism and agricultural enterprises are much-desired targets for land reform capitalizations in rural African locales. Weiner and Harris (2003) highlighted the imperativeness of community-integrated GIS for land reform in South Africa for natural resources and land use administration. Therefore, a point of this narration is that tourism (agro-tourism) could assist horticulture, forestry and agriculture. Henceforth, community-based tourism (CBT) and CBNRM endeavors (Ramaano, 2021g) provide a secondary and spare source of revenue in many rural areas across the continent, enabling the procurement of farm implements and improving the socio-economic status of the local people.
With that, agritourism and sustainable tourism can create synergies with each other. To that end, more outlets for diverse local entrepreneurship can emerge. The aforesaid is consonant with Rogerson (2006) on tourism pathways in small towns and rural areas of South Africa.
A triumphant CBNRM can induce diverse incomes and promote sustainability in African rural communities within community-based tourism, spurring and improving wildlife tourism dynamics (Mbaiwa & Mogende, 2022), indeed in support of agritourism, ecotourism and agroforestry (Ramaano, 2022b). Likewise, PGIS implementations furnish instruments that certify deprived classes to create a case for honor, collaboration and political route (Linebaugh & Rediker, 2000; Kwaku Kyem, 2001). Hence, CBNRM has become a vital component of PGIS, thereby promoting community development while combatting climate change dilemmas. This capacitates rural, poverty-stricken and marginalized African communities, especially women-empowerment tourism and hospitality-linked projects such as selling traditional baskets, brooms, artwork and pottery, among others, drawn from indigenous wealth and biodiversity; henceforth, for benchmarking with other rural communities worldwide.
Climate change implications on agriculture and tourism in African rural communities
Climate change is caused by high greenhouse gas (GHG) emissions into the environment. It contributes to global warming (UNEP and IEA, 1995) and makes human ecosystems vulnerable (Haines, Kovats, Campbell-Lendrum, & Corvalán, 2006). Suryabhagavan (2017) posited that climate change is an alarming crisis in agricultural and livestock activities in Ethiopia; hence, the study endorses the critical role of GIS-based depiction of climate variance and depletion in Ethiopia over three decades. Likewise, in response to climate change dilemmas, Onyancha and Onyango (2020) appraised web-based science for information and communication technologies for agriculture in sub-Saharan Africa over about 27 years to model climate change matters, disparities and their critical mitigation measures. That being the case, Sifolo and Henama (2017) remind us that in Africa, tourism is a trigger for other corporations such as agriculture and is confronted by menacing climate change variability. Therefore, Hoogendoorn and Fitchett (2017) hold that where migration is not viable, heightened concerns of advancing temperatures in the Western Cape Province of South Africa will negatively affect wine farming and tourism transactions. Such citations emphasized the need for improved monitoring and mitigation techniques associated with a blend of GISs to enhance productivity and engender sustainability. Charles, Reneth, Shakespear, and Virginia (2014), with a case study of the marginalized rural communities of Zimbabwe, posit that the ongoing climate change threat to tourism and agriculture has necessitated coping and mitigation strategies for the locals. Henceforth, adaptation strategies mostly dwelled on timing, diversifying and managing crops against predicted harsh seasons. This mandates the availability of empowering technology, as inferred earlier with the application of GIS precisions within the disadvantaged African rural communities. Similarly, in the case of Eastern Africa, Orindi and Murray (2005) opined that prolonged droughts, floods and rapid climate changes have dismal effects on economic activities and tourism ambitions alike, along with mining, subsistence agribusiness and fisheries; hence a call for mitigation measures to negate dire consequences. Akin to the aforesaid, Radhouane (2013) postulates on climate change impacts on North African countries and some Tunisian financial sectors, asserting that rising temperatures linked with climate transformation will reduce the land sites appropriate for agriculture and negatively affect tourism.
Ozor, Umunakwe, Ani, and Nnadi (2015) evaluated the consequences of climate change among rural agriculturists in Imo State, Nigeria; accordingly, rural people reportedly widen their subsistence strategies and adapt to climate change. Furthermore, Denton (2002) and Milne (2005) professed rural women's respect and management in climate transformation policies and approaches amid an altering climate and a doubtful destiny, thus along with the integrity of political ecology, ecofeminism, equality and climate change threats. It is crucial to conceive that rural women face the effects of climate change within their everyday exercises, such as formal agricultural actions, fuel wood collection and water assemblage, among other routines globally. The participatory guidance and empowering policies of PGIS and CBNRM can assist in alleviating women's burden in this regard. Nicholls and Amelung (2015) evinced that many rural localities, previously counting on primary activities such as forestry, fishing, agribusiness and mining, including in the Nordic regions, have shifted their concentration to tourism ambitions. It thus appears to be a global issue, and climate change is attached to such adaptation due to its sudden influence on the expected yield. Likewise, worldwide climate change imports demand political economy transformation strategies for poverty and subsistence, as exemplified by pastoral mountain neighborhoods in Nepal (Gentle & Maraseni, 2012).
The use of GIS and PGIS in sustainable tourism strategy and activities
A GIS is a computer-based instrument for mapping and analyzing characteristic effects on land. A GIS is a computerized database management system for the capture, storage, retrieval, handling, use, analysis and display of spatial (i.e. locationally defined) data (Knippers, Stoter, & Kraak, 2006). There are examples (a brief GIS overlay sketch follows this list): (1) Conservation biologists and preservationists might be worried about the effects of slash-and-burn exercises on the populations of certain spiders. Imperatively, conservation biologists and agroforesters share common sentiments, both custodians of two integral forms of sustainable tourism within rural settings in eco-tourism and agro-tourism (Ramaano, 2008, 2021g, 2022b). Significant GIS data about conservation areas and eco-tourism, agricultural areas and agro-tourism are fundamental for tourism strategy and local community development in African rural milieus and elsewhere relevant; (2) Natural hazard analysts and specialists might need to probe the high-risk places of annual monsoon-linked swamping by considering rainfall forms and terrain features. Safety is a priority for any tourist associated with various types of tourism. GIS information about the safety of some areas can form a crucial part of the tourism strategy for local community development in marginalized African rural environs; (3) Forest managers and administrators might be interested in optimizing timber yield by employing soil data and actual tree dispersions, despite many challenges, such as the need to maintain tree diversity. Quite significantly, forests are the central element of a variety of tourism. Decent expertise in forest management enables tourism stakeholders and planners to establish tourism ventures and initiatives (Donohoe & Needham, 2006; Ramaano, 2021g, h), thus supporting the discovery and management of tourism areas such as heritage, agro-tourism and adventure tourism within rural and remote areas. The latter can broadly help to strengthen sustainable tourism strategy and rural community development activities, hence influencing the development and management of natural and water-based tourism initiatives and resources toward pro-poor tourism and sustainability (Knippers et al., 2006; Ramaano, 2021g, 2022c; Rogerson, 2006).
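As a concrete illustration of use cases (1) and (2) above, the following minimal sketch (Python with geopandas) overlays hypothetical conservation and flood-risk layers to shortlist candidate ecotourism sites near villages; all file names and attribute names are illustrative assumptions, not data from this viewpoint.

```python
# A minimal sketch of a GIS overlay analysis for tourism-site screening.
import geopandas as gpd

# Hypothetical input layers; paths and attribute names are illustrative only.
conservation = gpd.read_file("conservation_areas.shp")
flood_risk = gpd.read_file("flood_risk_zones.shp")
villages = gpd.read_file("villages.shp")

# Keep only high-risk flood polygons (assumes a 'risk' attribute exists).
high_risk = flood_risk[flood_risk["risk"] == "high"]

# Candidate zones: conservation land minus high flood-risk areas.
candidates = gpd.overlay(conservation, high_risk, how="difference")

# Rank candidates by proximity to villages (community access for CBT/CBNRM).
candidates["dist_to_village"] = candidates.geometry.apply(
    lambda geom: villages.distance(geom).min()
)
print(candidates.sort_values("dist_to_village").head())
```

The same overlay-and-rank pattern extends directly to the forestry example: substitute soil and tree-stand layers for the flood-risk layer.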
2.1 The prospective role of quantum GIS (QGIS) in agriculture, agronomy and tourism for community empowerment in the African continent's rural and marginalized areas
Sherman, Sutton, Blazek, and Luthman (2004) indicated that QGIS is an open-source GIS, launched in May 2002 as a project on SourceForge. Therefore, Jaya and Fajar (2019) remind us that it can operate as a combined data bank, managing spatial and non-spatial data in a blended database. To that end, QGIS functions as cross-platform desktop GIS software. Akin to the African scenarios, it designates a free alternative to more costly GIS programs (Flenniken, Stuglik, & Iannone, 2020; Ramaano, 2021g). Thus, many impoverished and marginalized African communities can capitalize on this platform to enhance their agricultural production. To that account, Thorp, Hunsaker, French, Bautista, and Bronson (2015) expressed that the advancement of sensors contributing geospatial information on crop and soil circumstances has been a prime advance for accurate agriculture. Thus, the Geospatial Simulation (GeoSim) plug-in for QGIS was significant in enhancing the crop management of cottonseed and maize productions, as revealed by their study (Raj et al., 2021; Ramaano, 2022b). Hence, the same could be fundamental for many African countries' agriculture and agronomy endeavors while boosting the socio-economic statuses of the respective communities (Nawar, Corstanje, Halcro, Mulla, & Mouazen, 2017). As predicated earlier, the specified could work hand in hand with CBNRM and farm tourism initiatives for the maximum benefit of the locals.
2.2 Participatory geographical information systems (PGISs) value in sustainable rural tourism
PGIS is a participatory approach to spatial planning and spatial data and communication directions. PGIS integrates participatory learning and action (PLA) techniques with GIS. Universally, people bring social change by collaborating to formulate notions and results (Rambaldi, 2005). Kwaku Kyem (2001) specified that PGIS is a vital facet of CBNRM in Africa, while Quan et al. (2001) urge its imperative role in developing countries. Similarly, PGIS was utilized as an empowerment approach to natural resources management and tourism in Indonesia (Corbett & Keller, 2004). Spatial information technologies, entailing GIS, global positioning systems (GPS), remote sensing software and open access to spatial data and imagery, empower those who apply them. Generally, assorted access can benefit the advantaged people and less help the impoverished communities and local people, hence perpetuating the marginalization of those already marginalized. However, PGIS is a concept for approaches that seek to alter the latter by endorsing participatory rural appraisal (PRA) and spatial data technologies; minority cadres, traditionally omitted from spatial decision-making actions, can thereby be capacitated. Rambaldi (2005) detailed how local people can be empowered; hence, it is significant that they utilize the technologies to build their own maps and employ them for their research. These maps and models can vary from the ground and paper maps of PRA in their greater spatial precision, permanence, assurances and credibility with government personnel. Rambaldi purported that such maps are utilized as interactive routes for spatial discovery, data exchange and affirming decision making. Thus, for resource usage planning and advocacy activities, they can be very beneficial for the development and maintenance of different local tourism routes. Equality is one of the critical norms of sustainable rural tourism development (Xiang, Isbister, & Okumus, 2015); in conjunction with the diverse roles of GISs, this paper also backs the influential part of women in tourism agendas as livelihood alternatives for local communities. Villagers all around the world can use various PGIS models, including defending indigenous lands and resource rights; management and resolution of disputes over natural resources; collaborative resource utilization planning and management; intangible cultural heritage conservation and identity constructions among indigenous rural communities; equity advancements with regards to ethnicity, culture, gender and environmental justice; hazard minimization, for instance through community safety monitoring and audits; and peri-urban planning, research and climate alteration adaptation (Omara-Ojungu, 1992; Rogerson, 2002, 2006; Ramaano, 2021a, g). The mentioned could prove essential in tourism as a livelihood strategy for marginalized remote communities (Rambaldi, 2005; Rambaldi, Kyem, McCall, & Weiner, 2006; Ramaano, 2021g, 2022b). Various models of PGIS have been authenticated in countries as broad as Australia, Brazil, Bhutan, Cambodia, Cameroon and Canada, amongst others.
There are hundreds of non-documented cases where technology intermediaries (mostly NGOs) affirm community-based organizations or indigenous peoples in employing geographic information technology and systems (GITS) to account for their spatial planning requirements. Politics forms part of the constraints toward the flourishing of tourism businesses in rural areas. Therefore, land ownership and demarcation of land use are central to healthy tourism initiatives (Rogerson, 2002; Ryan, 2002), thus affecting the interest in private sector investment (Rogerson & Rogerson, 2020). That being so, GISs have a potentially substantial role for local communities and private sectors when dealing with the discovery, mapping and management of indigenous land to enhance sustainable tourism endeavors within African marginalized rural populations and rural areas elsewhere. According to Hasse and Milne (2005), tourism researchers and administrators continue to engage with how to deliver sustainable practices consonant with tourism development. Henceforward, community cooperation and stakeholder interplay are increasingly recognized as fundamental to accomplishing aspired ends.
2.3 Remote sensing in holistic agriculture, tourism and rural livelihoods activities
Pervez and Brown (2010) designate that authentic geospatial information on the area of irrigated land enhances acquaintance with agricultural water use, local land cover operations, preservation or shortage of water resources, and constituents of the hydrologic appropriation; hence the essence of remote sensing and GIS-based soil erosion estimation from a farming watershed (Patil, Sharma, & Tignath, 2015). Thus, through empowerment strategies and available technological grounds, such can be beneficial to the
impoverished African rural farmers. Analogously, agritourism offers a secondary revenue opportunity for agriculturalists to afford farm implements. There exist platforms for affording technologies to increase productivity and battle dry conditions, drought and climate change shocks. To this account, Ozdogan, Yang, Allez and Cervantes (2010) urged us on the prospects and usages of remote sensing in agrarian productivities. Thus, remote sensing can reinforce farming determinations in homelands at threat of food insecurity, including remote rural African communities (Becker-Reshef et al., 2020). Therefore, refined sensing instruments are vital for sketching soil management zones for variable-rate fertilization (Nawar et al., 2017), ensuring accuracy preservation for environmental sustainability and agriculture, also in conjunction with GIS and GPS (Berry, Delgado, Khosla, & Pierce, 2003; Delavarpour, Koparan, Nowatzki, Bajwa, & Sun, 2021); hence consequential for the Big GIS analytics model for agribusiness supply chains (Sharma, Kamble, Gunasekaran, Kumar, & Kumar, 2020). Through satellite remote sensing imagery and GIS in the Maasai Mara Game Reserve and Nairobi National Park, the ecosystems' assortments, representatives and influential connections have been recognized, using the Maasai Mara ecosystem wildlife sanctuary in East Africa (Ndegwa, Mundia, & Murayama, 2009; Magige, Jepkosgei, & Onywere, 2020). Similarly, Gcaba and Dlodlo (2016) recommend Internet of Things usage and remote sensing for South African tourism developments. Accordingly, remote sensing is essential in the tourism policy strategy for local community development in African rural societies (Newsome, Moore, & Dowling, 2012) and in ecotourism initiatives worldwide (Salam, Lindsay, & Beveridge, 2000). Therefore, this paper sustains a raison d'être of the implied significance of integrative GIS in sustainable tourism and comprehensive community development in Musina Municipality, South Africa, and a fulcrum of GIS in sustainable rural tourism and local community empowerment; henceforth, for a natural resources management appraisal for marginalized African communities.
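To make the remote-sensing discussion concrete, here is a minimal sketch of an NDVI computation (a standard vegetation index used to monitor cropland and ecosystem condition) using Python's rasterio; the file name, band ordering and threshold are assumptions for illustration only.

```python
# A minimal sketch of a standard remote-sensing computation: NDVI from
# red and near-infrared bands of a satellite scene.
import numpy as np
import rasterio

with rasterio.open("scene.tif") as src:   # hypothetical scene file
    red = src.read(1).astype("float64")   # assumed band order: 1 = red
    nir = src.read(2).astype("float64")   # assumed band order: 2 = NIR

# NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)

print("mean NDVI:", ndvi.mean())
# Simple threshold to flag actively vegetated pixels (threshold is illustrative).
print("fraction of vegetated pixels:", (ndvi > 0.4).mean())
```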
Policy implications
While it was evinced that CBNRM can offer the best means of managing communities' native resources and that various forms of GIS can complement such enterprises, there is nonetheless a pressing need for a holistic approach within African rural and marginalized populations to allow comprehensive socio-economic development. Analogously, Shoval and Ahas (2016) appraise GPS and kindred tracking technologies in tourism analysis. Thus, GPS generally affords significant information for GIS recordings, and GPS data were rated at the top of big data in tourism studies (Li, Xu, Tang, Wang, & Li, 2018). Henceforth, in line with precision agriculture, GPS and GIS are crucial in monitoring diverse parameters in farming systems (Koch & Khosla, 2003; Satyanarayana & Mazaruddin, 2013; Yousefi & Razdari, 2015).
Suffice it to note that unsustainable agriculture contributes to both climate change and underground water vulnerability (UNEP, 2010), thus reducing productivity and agritourism potential. Consistent with development policy rationales, sustainable agriculture delivers a probable resolution to foster farming systems that provide for a growing population amid changing environmental circumstances (Wittman, 2009). Analogously, sustainable tourism concerns the whole tourism adventure and considers social, economic and environmental issues, therefore enhancing tourists' occasions and managing host communities' needs (Eagles, McCool, & Haynes, 2002; Ryan, 2002). There exists an obligation within national, regional and local government entities to unite in one common goal of sustainable agriculture, natural resource management, sustainable tourism and community development through the amplification of technology and diverse initiative platforms (Mkwizu, 2020; Ramaano, 2021a, b, c, d, e, f, g and h). To that end, attention to participatory tourism planning would heighten tourism expansion and population advancement in rural expanses. It will be essential to support an inclusive tourism policy that endeavors to operationalize local resources and rectify past management accounts, and also to maintain equity, equality and empowerment of every member of the respective communities within their domiciles. Ultimately, a booming interspersed tourism development policy could prove paramount for African marginalized, impoverished and remote communities.
Theoretical and practical implications
Figure 1 is based on the broad recommendations of this viewpoint and its core theme of sustainable tourism fundamentals (Ritchie & Crouch, 2003; Landorf, 2009; Ramaano, 2021a, b, c and d). It exhibits that factors like policy and regulations, concerning the effectiveness of role players such as researchers and media, can reinforce tourism prospects and community development in rural areas, hence along with the integrity of CBNRM and community-based tourism enterprises (DEAT, 2003). A formulation of a tourism strategy ought to implement significant socio-economic, technical and environmental efforts. This is significant for tourism marketing and the establishment of new routes.
Holistic tourism marketing tactics that support diverse marketing outlets, such as social marketing, can stimulate creative tourism economies. The cited could work favorably with eco-marketing, green marketing and green tourism within ecotourism standards. Therefore, green marketing concerns the technique of creating and publicizing products based on their environmental benefits (Peattie, 2001; Cherian & Jacob, 2012; Dangelico & Vocalelli, 2017). Furthermore, green tourism entails environmentally friendly exercises with mixed priorities and purposes critical for more natural income and sustainable products (Williams & Ponsford, 2009; Higgins-Desbiolles, Carnicelli, Krolikowski, Wijesinghe, & Boluk, 2019). Likewise, Albuquerque, Costa, and Martins (2018) recommended geographical information systems for tourism marketing purposes with a case study of tourism in the Aveiro region, Portugal; thus, various tourism marketing strategies can benefit from GISs. Furthermore, synergies between a sustainable tourism strategic partnership and the prioritization of GISs (geographic information systems), QGIS (Quantum GIS), PGIS (participatory GIS) and remote sensing in the discovery and monitoring of tourism, CBNRM and community development initiatives could be essential. Ultimately, the adoption of a sustainable tourism strategy rests on awareness, benefits and attitudes (Butler, 1999). Depending on how the locals view sustainable tourism, an adopted strategy could boost local economic development, reduce poverty and alleviate environmental degradation. Admittedly, tourism-based initiatives could encourage sustainable community development and sustainability, broadly allowing rural areas' development within the African continent and also serving as a benchmark globally (Debarbieux et al., 2014). Indeed, within the portrayals of gastronomy and agro-tourism, it is known that farmers involve themselves in tourism activities as a secondary means of income generation (Azimi et al., 2012; Ramaano, 2021a, b, c, d, e, f, g and h).
Limitations and further study implications
Any decent tourism strategy within rural communities should not distance itself from the evident relationship between agricultural, farming and tourism activities. Suffice it to say that African rural areas are generally attached to agriculture and tourism in their usual nature (Okech, Haghiri, & George, 2015). Hence, this is crucial to rural communities elsewhere (Holland, Burian, & Dixey, 2003; Ramaano, 2021g, 2022b), thus for preparing, designing and adopting a possible inclusive tourism strategy to advance livelihoods. As such, in most cases, agricultural development has had a slight edge compared to tourism development. Therefore, in this regard, strategic tourism management that encompasses the value of agriculture, with a notion of a synergetic relationship rather than a juxtaposition and competitive analysis, can be a prime goal for communities in remote and marginalized domiciles. Hence, synergy amid CBNRM, GISs, sustainable tourism, community development and empowerment is imminent within marginalized and rural areas (Ramaano, 2022b). The limited approach and methodology of literature and document analysis can be extended further. Thus, there is more room for active incorporation of remote sensing and other earth observation methodologies in studies on tourism, agriculture and the sustainability of rural community livelihoods, and henceforth to monitor the performance of current and potential inclusive tourism strategies in rural milieus for tangible economic developments (Navalgund, Jayaraman, & Roy, 2007). Hence, Figure 2 displays the model for synergetic conceivable economic-administrative roles of incorporated GIS exercises in rural-tourism spin-offs and livelihood advances in marginalized rural communities. It demonstrates that constituents like public governance and pertinent leadership, tourism and agricultural business and relevant stakeholders, jointly with residents' socio-economic quality, are complementary. Indeed, there is evidence of planned GIS utilization worldwide, including in some African neighborhoods. Nonetheless, this analysis dwelled on the premise that GIS has not been utilized to its full potential in African rural communities, theoretically and practically reflecting its further latent significance in hoisting livelihoods in places of marginality (Kashaigili, 2010; Mahajan, 2014; von Braun & Gatzweiler, 2014), hereafter significantly in biodiversity- and tourism-agriculture-orientated societies. Henceforward, tourists, tourism exercises and welfare are potential receptive bases for latent economic-management roles of diverse GIS activities in the deprived and marginalized rural African communities. Thus, an inclusive tourism-orientated rural community development strategy has to implement significant socio-economic, technological and environmental efforts in such vicinities (Ramaano, 2022a; Florido-Benítez, 2021).
[Figure 1 caption, fragment: "... PGIS, Remote Sensing, and GPS Synergies." Source(s): Author's own compilation; adapted from Ramaano (2021d).] Figure 2. Model for synergetic potential economic-administrative positions of integrated GIS exercises in tourism-orientated and integrated livelihoods advancements. | 2022-08-17T15:07:37.067Z | 2022-08-16T00:00:00.000 | {
"year": 2022,
"sha1": "b229f1947dc76fcfe19b3cb39ea02d565181f184",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/AGJSR-04-2022-0020/full/pdf?title=the-economic-administrative-role-of-geographic-information-systems-in-rural-tourism-and-exhaustive-local-community-development-in-african-marginalized-communities",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "aa2d1f9bf9bbb440205b7db132f0c4bee311ec7a",
"s2fieldsofstudy": [
"Geography",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": []
} |
7105415 | pes2o/s2orc | v3-fos-license | Over-expression of Id-1 induces cell proliferation in hepatocellular carcinoma through inactivation of p16INK4a/RB pathway
Inhibitors of differentiation and DNA binding-1 (Id-1) have been demonstrated to oppose Ets-mediated activation of p16INK4a. As p16INK4a protein is inactivated in hepatocellular carcinoma (HCC), we aimed to investigate the role of Id-1 in regulating p16INK4a expression during the development of HCC in HCC patients and by direct ectopic Id-1 introduction into the PLC/PRF/5 HCC cell line. Sixty-two HCC samples were recruited for evaluation of Id-1 and proliferating cell nuclear antigen (PCNA) protein expression. The messenger RNA (mRNA) expression of Id-1 and p16INK4a was detected by quantitative reverse transcription–polymerase chain reaction. For in vitro Id-1 transfection, five Id-1 transfected clones were isolated and the effect of ectopic Id-1 introduction was investigated by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay, flow cytometry, immunostaining and western blot. Our results showed that Id-1 was over-expressed in HCC specimens at both the mRNA and protein levels. Over-expression of Id-1 protein was correlated with PCNA (r = 0.334, P = 0.033). HCC samples showing low Id-1 protein expression had a lower Id-1 mRNA level (340.2 versus 1467%, P = 0.039) and higher p16INK4a expression (195 versus −78.6%, P = 0.039) than samples with high Id-1 protein expression. In the PLC/PRF/5 HCC cell line study, ectopic Id-1 expression resulted in proliferation of HCC cells and an increased percentage of S phase cells and PCNA expression. The results showed that over-expression of Id-1 induces cell proliferation in HCC through inactivation of the p16INK4a/retinoblastoma pathway. In conclusion, the results provide an insight into the role of Id-1 in the functional inactivation of p16INK4a in HCC.
Introduction
Inhibitor of differentiation and DNA binding (Id) proteins are transcription factors that belong to a group of helix–loop–helix proteins lacking the DNA binding domain. Therefore, these proteins act as dominant inhibitors of basic helix–loop–helix transcription factors by forming transcriptionally inactive heterodimers. Four Id genes (Id-1 through Id-4) are important for cell fate decisions of growth and differentiation. Their expression is typically high in actively proliferating cells and is down-regulated as a prerequisite for exit from the cell cycle and differentiation (1–3). The Id family member Id-1 has been implicated in regulating cellular life span, immortalization and delayed senescence in mammalian cells (4–6). Over-expression of Id-1 has been reported in several types of primary tumors including breast (7), pancreatic (8), prostate (9), cervical (10) and colorectal adenocarcinoma (11). Previous findings showed that ectopic expression of Id-1 induced aggressiveness and metastasis in breast cancer cells (7), and up-regulation of Id-1 has been correlated with tumor stage in squamous cell carcinoma (12). Most recently, over-expression of Id-1 protein has been correlated with patients' poor clinical outcome and mitotic index in several human cancers (10,11). This evidence strongly supports that Id-1 plays an important role not only in tumorigenesis, but also in tumor progression. However, Id-1 expression in hepatocellular carcinoma (HCC) has not been studied and its role remains unknown.
Recently, Id-1 has been demonstrated to oppose Ets-mediated activation of p16INK4a via Ras-Raf-MEK signaling (13). The p16INK4a/retinoblastoma (RB) pathway has been shown to be down-regulated in various human tumors including HCC, either through loss of p16INK4a or RB function, or through deregulated expression of cyclin D or cdk4 (14–16). Several mechanisms of inactivation of the p16INK4a/RB pathway have been proposed, including promoter methylation, protein sequestration and post-translational modification (17). However, little is known about the direct transcriptional control of genes within the p16INK4a/RB family and their role in HCC tumorigenesis. Therefore, we first examined Id-1 expression at the messenger RNA (mRNA) and protein levels in HCC and then investigated whether Id-1 may play a role in regulating p16INK4a expression during the development of HCC. For immunostaining, a standard avidin–biotin peroxidase technique (DAKO, Carpinteria, CA) was applied. Briefly, biotinylated goat anti-mouse Ig or goat anti-rabbit Ig and avidin–biotin peroxidase complex were applied for 30 min each, with 15-min washes in phosphate-buffered saline. The reaction was finally developed with the Dako Liquid DAB Substrate-chromogen System (DAKO).
Cytoplasmic expression of Id-1 and nuclear staining of PCNA were determined by two independent observers who semi-quantitatively assessed the percentage of stained tumor cells as well as the staining intensity. The percentage of positive cells was rated as follows: 2 points, 11–50% positive tumor cells; 3 points, 51–80% positive cells; and 4 points, >80% positive cells. Staining intensity was rated as follows (18): 1 point, weak intensity; 2 points, moderate intensity; and 3 points, strong intensity. Points for percentage of positive cells and staining intensity were added, and specimens were attributed to four groups according to their overall scores: negative, ≤10% of cells stained positive, regardless of intensity; weak expression, 3 points; moderate expression, 4–5 points; and strong expression, 6–7 points. Negative to weak Id-1 expression was graded as group 1, which represented low Id-1 expression, whereas moderate to strong Id-1 expression was graded as group 2, which represented high Id-1 expression. Expression of PCNA was graded in the same way.
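A plain re-implementation of this scoring scheme is sketched below (Python); it simply encodes the point ranges stated above and is not the authors' code.

```python
# A minimal sketch of the immunostaining scoring scheme described above.
def expression_group(percent_positive: float, intensity: int) -> str:
    """Return the expression grade from % positive cells and intensity (1-3)."""
    if percent_positive <= 10:
        return "negative"           # regardless of intensity
    if percent_positive <= 50:
        pct_points = 2
    elif percent_positive <= 80:
        pct_points = 3
    else:
        pct_points = 4
    score = pct_points + intensity  # overall score, 3-7
    if score == 3:
        return "weak"
    if score in (4, 5):
        return "moderate"
    return "strong"                 # 6-7 points

# Group 1 (low expression) = negative/weak; group 2 (high) = moderate/strong.
print(expression_group(65, 2))  # -> "moderate"
```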
mRNA levels of Id-1 and p16INK4a in HCC by quantitative reverse transcription–polymerase chain reaction (RT–PCR)
The liver specimens were stored at −80°C until total RNA extraction. Total RNA was extracted using the RNeasy Midi Kit (Qiagen, GmbH, Germany) and the quality of the total RNA was checked by spectrophotometry (DU-65, Beckman, Germany). About 0.5 µg of total RNA from each sample was used for the reverse transcription reaction. TaqMan Reverse Transcription Reagents (Applied Biosystems, Foster City, CA) were used according to the manufacturer's instructions (25°C for 10 min, 48°C for 30 min, 95°C for 5 min). Reverse transcription product (1 µl) was used to perform real-time quantitative polymerase chain reaction (PCR) in a reaction volume of 50 µl (TaqMan PCR Core Reagent Kit, Applied Biosystems) on the ABI PRISM 7700 Sequence Detection System (Applied Biosystems). Probes and primers for Id-1 and p16INK4a were designed with the Primer Express software (Applied Biosystems) according to the criteria for real-time PCR. The sequences are listed in Table I. The TaqMan Ribosomal RNA Control Reagent [18S RNA probe (VIC) and primers; PE Applied Biosystems] was used as internal control in the same PCR plate well to normalize the amplification of the target genes. The PCR protocol followed the manufacturer's recommendation [50°C for 2 min, 95°C for 10 min, then 50 cycles of (95°C for 15 s, 60°C for 1 min)]. All samples were tested in triplicate, and the readings from each sample and its internal control were used to calculate the gene expression level. After normalization with the internal control, the gene expression levels in HCC were calculated as the percentage of the levels in normal liver tissue and non-tumor tissue.
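The paper does not state the exact quantification formula, so purely as an illustration, the following sketch applies one common approach (the comparative Ct method) to express a target gene, normalized to the 18S internal control, as a percentage of the normal-liver level.

```python
# A minimal sketch of comparative-Ct normalization for real-time PCR readings.
# This is an illustrative assumption, not the authors' exact calculation.
def relative_expression(ct_target, ct_18s, ct_target_normal, ct_18s_normal):
    """Return target expression as a percentage of the normal-liver level."""
    delta_ct_sample = ct_target - ct_18s                # normalize to 18S
    delta_ct_normal = ct_target_normal - ct_18s_normal  # calibrator sample
    ddct = delta_ct_sample - delta_ct_normal
    return 100.0 * 2.0 ** (-ddct)                       # 100% = normal liver

# Example with hypothetical Ct values: a tumor sample whose target amplifies
# earlier (relative to 18S) than in normal liver shows expression > 100%.
print(relative_expression(ct_target=24.0, ct_18s=12.0,
                          ct_target_normal=27.5, ct_18s_normal=12.2))
```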
Cell line transfection
PLC/PRF/5 was obtained from the Japanese Cancer Research Bank (Tokyo, Japan). The cells were transfected with 2 µg of plasmid DNA of either pcDNA3.1(±) containing the entire coding sequence of Id-1 (kindly provided by Prof. Y.C. Wong of the University of Hong Kong) or the expression vector pcDNA3.1(±) alone, using FuGENE 6 according to the manufacturer's protocol (Boehringer, Mannheim, GmbH, Germany). After 48 h, the medium was replaced with fresh Dulbecco's modified Eagle's medium with Geneticin (G418) at 1 mg/ml. After 2 weeks of clonal selection, all clones were grown in the presence of G418 at 0.4 mg/ml to ensure stable transfection. Isolated clones were expanded into 25 cm² flasks. All the transfected cells used in this experiment were in early passages (passages 4–8).
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay
PLC/PRF/5 cells were seeded into 96-well plates in medium containing 10% fetal bovine serum (FBS), and serum-free medium replaced the FBS-containing medium 24 h after plating. After 48 h, MTT dye at a concentration of 5 mg/ml (Sigma, St Louis, MO) was added every day, and the plates were incubated for 12 h in a moist chamber at 37°C. Optical density was determined by eluting the dye with dimethyl sulfoxide (Sigma), and the absorbance was measured at 570 nm. Three independent experiments were performed.
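As a minimal illustration of how such readings are typically reduced, the sketch below converts replicate OD570 values into growth relative to control wells; the blank correction and all numbers are hypothetical.

```python
# A minimal sketch of reducing raw MTT absorbance readings (570 nm)
# to relative growth, averaged over replicate wells.
import numpy as np

def relative_growth(od_wells, od_blank, od_control_wells):
    """Growth of treated/transfected wells relative to control wells (%)."""
    signal = np.mean(od_wells) - od_blank
    control = np.mean(od_control_wells) - od_blank
    return 100.0 * signal / control

clone3 = [0.82, 0.79, 0.85]   # hypothetical OD570 triplicate, Id-1 clone
vector = [0.51, 0.48, 0.50]   # hypothetical PLC/PRF/5 + empty vector
print(relative_growth(clone3, od_blank=0.05, od_control_wells=vector))
```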
Cell cycle analysis
Cells (5 × 10⁵) were trypsinized and washed once in PBS. They were then fixed in cold 70% ethanol and stored at 4°C. Before testing, the ethanol was removed and the cells were resuspended in PBS. The fixed cells were then washed with PBS, treated with RNase (1 mg/ml) and stained with propidium iodide (50 µg/ml) for 30 min at 37°C. Cell cycle analysis was performed on an EPICS profile analyzer and analyzed using the ModFit LT 2.0 software (Coulter Electronics, Hialeah, FL).
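Dedicated software such as ModFit deconvolves the overlapping G1, S and G2/M populations by model fitting; purely as an illustration of the underlying idea, the crude sketch below assigns cell-cycle fractions from a synthetic propidium-iodide DNA histogram using fixed gates.

```python
# A crude sketch of cell-cycle fractions from a DNA-content histogram.
# Fixed gating is only illustrative; real analysis fits overlapping peaks.
import numpy as np

rng = np.random.default_rng(0)
dna_content = np.concatenate([
    rng.normal(loc=1.0, scale=0.05, size=5000),   # G1 peak (2N DNA)
    rng.uniform(1.15, 1.85, size=1500),           # S phase (between 2N and 4N)
    rng.normal(loc=2.0, scale=0.08, size=800),    # G2/M peak (4N DNA)
])

g1 = np.mean(dna_content < 1.15)
s = np.mean((dna_content >= 1.15) & (dna_content < 1.85))
g2m = np.mean(dna_content >= 1.85)
print(f"G1: {g1:.1%}, S: {s:.1%}, G2/M: {g2m:.1%}")
```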
Statistical analysis
Continuous variables were expressed as median and range. The Mann–Whitney U test was used for statistical comparison. The Pearson test was used for bivariate correlation comparison. Significance was defined as P < 0.05. Calculations were made with the help of SPSS software (SPSS, Chicago, IL).
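The same tests are available outside SPSS; for instance, a minimal SciPy sketch (with made-up placeholder numbers) reads:

```python
# A minimal sketch of the stated statistical tests using SciPy.
from scipy import stats

id1_scores_group1 = [2, 3, 3, 2, 3]       # hypothetical overall scores
id1_scores_group2 = [5, 6, 4, 7, 5, 6]

# Mann-Whitney U test for comparing two independent groups.
u_stat, p_value = stats.mannwhitneyu(id1_scores_group1, id1_scores_group2,
                                     alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat}, P={p_value:.3f}")

# Pearson correlation for bivariate comparison (e.g., Id-1 versus PCNA).
id1 = [3, 5, 6, 4, 7, 5]
pcna = [2, 4, 6, 3, 6, 5]
r, p = stats.pearsonr(id1, pcna)
print(f"Pearson: r={r:.3f}, P={p:.3f}")
```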
Results
Expression of Id-1 protein in normal, non-tumor and tumor tissue from human HCC by immunostaining
In order to determine the significance of Id-1 expression in HCC, we evaluated 62 samples of HCC and their corresponding non-tumor tissues by immunostaining. In normal liver, no cytoplasmic expression of Id-1 in hepatocytes was observed (Figure 1A). However, in non-tumor liver (cirrhotic liver or chronic hepatitis), absent to weak cytoplasmic expression of Id-1 was observed (Figure 1B). In HCC samples, cytoplasmic expression of Id-1 was not found in eight cases (12.9%) (Figure 1C), weak in 16 cases (25.8%) (Figure 1D), moderate in 25 cases (40.3%) and strong in 13 cases (21.0%) (Figure 1E). The margin between non-tumor liver and tumor is shown in Figure 1F. Twenty-four cases were graded as group 1, which represented low Id-1 expression, whereas 38 cases were graded as group 2, which represented high Id-1 expression.
mRNA levels of Id-1 and p16INK4a in normal, non-tumor and tumor tissues by quantitative RT–PCR
The mRNA level of Id-1 and its correlation with p16INK4a were examined by quantitative RT–PCR. The Id-1 mRNA expression in non-tumor and tumor tissues showed higher levels when compared with normal liver from healthy liver transplant donors. For non-tumor liver, the median level was 192% (range 12.8–562%) of the normal liver level (100%) (Figure 2B).
Correlation of Id-1 and PCNA in HCC tissues
Id-1 was suggested to play an important role in proliferation.
To determine whether over-expression of Id-1 correlates with increased proliferation in HCC, we evaluated the expression of Id-1 and the proliferative marker PCNA by immunostaining. All cases showed PCNA immunoreactivity, which was strong in nine cases (15%), moderate in 31 cases (50%) and weak in 22 cases (35%). Id-1 protein was found to correlate significantly and positively with PCNA expression (r = 0.334, P = 0.033) (Figure 3A and B).
Correlation of Id-1 and PCNA in HCC cell line
The effect of Id-1 expression on proliferation was also studied by directly transfecting Id-1 or pcDNA3.1 into PLC/PRF/5.
After clonal selection, five clones were isolated. From the in vitro results, clone 3 showed high Id-1 expression compared with PLC/PRF/5 harboring the empty vector (Figure 3C and D). Clone 3 also showed increased PCNA expression compared with PLC/PRF/5 harboring the empty vector (Figure 3E and F).
The effect of ectopic Id-1 introduction on HCC cell growth
The effect of FBS on Id-1 expression in PLC/PRF/5 is shown by western blot. In the absence of FBS in the culture medium, the level of Id-1 protein was barely detectable compared with its level in the presence of FBS (Figure 4A). As shown in Figure 4A, in the absence of FBS, all five clones showed different levels of Id-1 expression. After transfection of Id-1, PLC/PRF/5 exhibited a relatively different morphology, with a flatter structure compared with the parental cells. We evaluated the effect of ectopic Id-1 introduction by MTT assay.
Introduction of Id-1 resulted in increased cell growth compared with the parental cells and PLC/PRF/5 harboring the empty vector (Figure 5). The increase in cell growth correlated with the level of Id-1 expression.
Effect of Id-1 introduction on cell cycle distribution
Next, we studied whether Id-1-induced cell growth was a result of its ability to initiate DNA synthesis in the HCC cell line in FBS-free medium. Cell cycle analysis showed that 18% of cells were in S phase in PLC/PRF/5 harboring pcDNA3.1 (control), whereas the percentage of S phase cells significantly increased (25.1–37%) in the Id-1 transfectants. There was no significant change in the percentage of cells in G2 phase (Figure 6).
Effect of Id-1 expression on the p16INK4a/RB pathway
We have shown that expression of Id-1 was negatively correlated with p16INK4a in HCC samples and positively correlated with increased growth of PLC/PRF/5 in cell culture. To examine whether the increased proliferation occurred through inactivation of the p16INK4a/RB pathway, we evaluated the expression levels of p16INK4a, CDK4 and RB in Id-1-expressing clones and in PLC/PRF/5 harboring pcDNA3.1. As shown in Figure 4B, the level of p16INK4a was weak and barely detectable in the five Id-1 transfectants and was inversely proportional to ectopic Id-1 expression. This in vitro result confirmed our data from the clinical samples that over-expression of Id-1 correlated with decreased p16INK4a expression. The phosphorylated form of CDK4 (upper band in Figure 4C) was also found in all five Id-1 transfectants and in parental cells in the presence of FBS in the culture medium, but not in parental cells in the absence of FBS. Moreover, phosphorylated RB protein expression was also found in all five Id-1 transfectants (Figure 4D).
Discussion
In the present study, we demonstrated for the first time the deregulation of Id-1 expression at both the mRNA and protein levels in human HCC, and the level of immunohistologically detected protein correlated with the level of Id-1 mRNA. Some reports have found a difference in Id-1 expression level between the mRNA level by RT–PCR and the protein level by western blot in ovarian cancer tissues, and the discrepancy is due to strong expression of Id-1 in tumor vascular endothelia (19). From our immunostaining results, we observed that nearly all cases showed positive vascular endothelial staining in both non-tumor and tumor tissue. Some cases with absent Id-1 expression in tumor cells also showed strong staining of Id-1 in tumor vascular endothelia. This result is consistent with the previous finding that Id-1 is also strongly expressed in vascular smooth muscle cells (20). However, there was no significant increase in Id-1 expression in tumor vascular endothelia. Evaluation of the whole patient series revealed several significant associations with histopathological and other features of the tumors. Of these, the correlations with mitotic index and p16INK4a were highly significant. We found a positive and significant correlation between the protein expression of Id-1 and PCNA, which indirectly suggested a role of Id-1 in the proliferation of HCC cells. This was further confirmed by our in vitro cell culture model. Increased PCNA expression in clone 3, which had the highest ectopic Id-1 expression, was also observed when compared with the control in the absence of FBS. Our data further confirm the function of Id-1 as a promoter of proliferation in cancers (21–24). From the RT–PCR results, it was also noted that the Id-1 mRNA level in non-tumor liver is relatively high compared with normal liver, but low compared with tumor tissue. Since most of our non-tumor cases were cirrhotic, and cirrhotic liver has a higher proliferative rate than normal liver (25), this accounts for the enhanced Id-1 expression.
p16INK4a was found to be frequently inactivated in HCC through promoter methylation (16,26–28). However, few reports have demonstrated the direct transcriptional inactivation of p16INK4a in HCC. Since the transcriptional regulator Id-1 has recently been identified as a repressor of p16INK4a transcription (13,29,30), we sought to determine whether transcriptional inactivation of p16INK4a by Id-1 might play a role in the initiation and progression of HCC. Recent reports showed that a high level of Id-1 expression is correlated with the loss of p16INK4a in early stage melanoma (31), but is positively correlated with the expression of p16INK4a in breast cancer samples (32). In our study, we found that high Id-1 expression was significantly correlated with decreased p16INK4a expression, a result similar to that in melanoma. Hypermethylation of the p16INK4a promoter was suggested to be one of the late events in hepatocarcinogenesis for tumor progression and metastases (33). With reference to the hypothesis by Polsky et al. (31), hepatocarcinogenesis might also occur via multiple steps that entail reversible Id-1 transcriptional inactivation of p16INK4a in the early growth phase, which allows bypass of cellular senescence, followed by acquired epigenetic changes in cells, such as promoter methylation, at a later stage. Further study on this subject is required. Taken together, these results suggested that inactivation of p16INK4a in HCC might occur, in part, through transcriptional control by Id-1.
In order to determine whether Id-1 plays a role in the proliferation of HCC through inactivation of p16INK4a, we transfected PLC/PRF/5 with Id-1. Five Id-1 transfectants were isolated and showed differential Id-1 expression. All five clones showed FBS-independent proliferation accompanied by increased PCNA expression and an increased percentage of cells progressing from G1 to S phase. Our in vitro results were consistent with the previous findings that ectopic Id-1 expression stimulated DNA synthesis from G1 to S phase (7,21,24,28) and resulted in down-regulation of p16INK4a in the Id-1 transfectants. This result further supported that p16INK4a inactivation was due to transcriptional control by Id-1. Other than repressing p16INK4a, ectopic Id-1 expression induced RB phosphorylation in human keratinocytes (28). One of the functions of p16INK4a is to inhibit cyclin-dependent kinases such as CDK4, resulting in prevention of RB phosphorylation. In our study, we found that increased expression of phosphorylated CDK4 and RB was observed only in the five Id-1 transfectants but not in the control. This showed that down-regulation of p16INK4a was associated with increased expression of phosphorylated CDK4 and RB but not with total CDK4 and RB levels. RB phosphorylation is proposed to regulate cell cycle progression from G1 to S phase through the cyclin D-CDK4/6 complex. Without the inhibition by p16INK4a, CDK4 becomes phosphorylated and can prevent the binding of E2F to RB, resulting in G1 to S phase progression through RB phosphorylation (34). In our study, all Id-1 transfectants showed a variable degree of G1 to S phase conversion. Therefore, these results suggested that the effect of Id-1 on the proliferation of HCC cells might be caused by the decreased p16INK4a, which in turn inactivated RB.
In summary, our in vitro and in vivo data provided evidence for the first time of the over-expression of Id-1 and its role in HCC. Over-expression of Id-1 plays a role in HCC cell proliferation. Id-1 induced HCC proliferation through inactivation of the p16INK4a/pRB pathway, as shown by the decreased p16INK4a expression and activation of CDK4 and RB in the five Id-1 transfectants. Our results provide an insight into the role of Id-1 in the functional inactivation of p16INK4a in HCC.
"year": 2003,
"sha1": "c34577c90763f8356ed943a54770d0398e9e57f5",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/carcin/article-pdf/24/11/1729/7085966/bgg145.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fc07337ebf02685317a2e138faf5bf147bfeae6f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
249642730 | pes2o/s2orc | v3-fos-license | Four-field Hamiltonian fluid closures of the one-dimensional Vlasov-Poisson equation
We consider a reduced dynamics for the first four fluid moments of the one-dimensional Vlasov-Poisson equation, namely, the fluid density, fluid velocity, pressure and heat flux. This dynamics depends on an equation of state to close the system. This equation of state (closure) connects the fifth-order moment, related to the kurtosis in velocity of the Vlasov distribution, with the first four moments. By solving the Jacobi identity, we derive an equation of state which ensures that the resulting reduced fluid model is Hamiltonian. We show that this Hamiltonian closure allows symmetric homogeneous equilibria of the reduced fluid model to be stable.
I. INTRODUCTION
In order to simulate the dynamics of a plasma, there is a variety of models which are used according to the type of question and the level of detail in the description of the plasma. Most of these models can be categorized as kinetic or fluid, depending on whether the dynamical field variables are functions of the phase-space coordinates (x, v) of the particles or just of the configuration-space coordinate x. Compared to kinetic models, fluid models have the significant advantage of being defined in a dimensionally reduced space, which makes them particularly desirable from a computational viewpoint. The central question is how to define these fluid models from a parent kinetic model. There is a plethora of methods to do this, some better suited than others depending on the specific problem at hand. For instance, some reductions rely on an assumption on the shape of the distribution function (Refs. 1-5), or introduce suitably designed dissipative terms (Refs. 6-8). Here we follow a different route by requiring that the reduced fluid model preserves an important dynamical property of the parent model, namely its Hamiltonian structure (Refs. 9-11). Rather than being an additional constraint on the reduction, we will see that this requirement provides a way to perform the reduction and to define the closure precisely. The distribution function $f(x,v,t)$ obeys the one-dimensional Vlasov equation,
$$\partial_t f = -v\,\partial_x f - \frac{e}{m}\,\widetilde{E}\,\partial_v f,$$
where $\widetilde{E}$ is the fluctuating part of the electric field $E$, whose dynamics is given by
$$\partial_t E = -4\pi \widetilde{\jmath},$$
and $j = e\int v f\,\mathrm{d}v$ is the current density. We assume periodic boundary conditions in $x$ with period $2L_x$, so that the fluctuating part of a field $g$ is defined as
$$\widetilde{g} = g - \frac{1}{2L_x}\int_{-L_x}^{L_x} g\,\mathrm{d}x.$$
We consider a fluid description obtained by using the first four fluid moments of the distribution function, more precisely, the density $\rho(x,t)$, the fluid velocity $u(x,t)$, the pressure $P(x,t)$ and the heat flux $q(x,t)$ defined by
$$\rho = m\int f\,\mathrm{d}v,\quad u = \frac{m}{\rho}\int v f\,\mathrm{d}v,\quad P = m\int (v-u)^2 f\,\mathrm{d}v,\quad q = m\int (v-u)^3 f\,\mathrm{d}v.$$
From the Vlasov-Poisson equation, we obtain the equations of motion for these moments and for the electric field:
$$\partial_t \rho = -\partial_x(\rho u),\quad \partial_t u = -u\,\partial_x u - \frac{1}{\rho}\partial_x P + \frac{e}{m}E,\quad \partial_t P = -u\,\partial_x P - 3P\,\partial_x u - \partial_x q,$$
$$\partial_t q = -u\,\partial_x q - 4q\,\partial_x u - \partial_x R + \frac{3P}{\rho}\partial_x P,\quad \partial_t E = -4\pi \widetilde{\jmath}, \qquad (1)$$
where
$$R = m\int (v-u)^4 f\,\mathrm{d}v,$$
which is related to the kurtosis (in velocity) of the distribution function $f$. Here and in what follows, $\partial_x$ and $\partial_v$ denote the partial derivatives of a function of $x$ and $v$ with respect to $x$ and $v$, respectively. In order to close the set of equations of motion, we need an equation of state of the form $R = R(\rho, u, P, q)$.
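As a numerical illustration of these moment definitions (with the mass factor set to one), the sketch below computes ρ, u, P, q and R for a sampled Maxwellian on a velocity grid and checks the Gaussian relation R = 3P²/ρ quoted below; the grid bounds and parameters are arbitrary.

```python
# A minimal sketch of the fluid moments of a sampled distribution f(v).
import numpy as np

v = np.linspace(-10.0, 10.0, 4001)      # velocity grid
sigma2, u0 = 1.7, 0.3                   # arbitrary Maxwellian parameters
f = np.exp(-0.5 * (v - u0) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

rho = np.trapz(f, v)                    # density
u = np.trapz(v * f, v) / rho            # fluid velocity
P = np.trapz((v - u) ** 2 * f, v)       # pressure
q = np.trapz((v - u) ** 3 * f, v)       # heat flux
R = np.trapz((v - u) ** 4 * f, v)       # fourth-order moment

print(rho, u, P, q)                     # ~ 1, 0.3, 1.7, 0 for this Maxwellian
print(R, 3 * P ** 2 / rho)              # the two agree for a Gaussian
```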
An example of closure is obtained by assuming a Gaussian distribution for f (see Ref. 3), which leads to $R = 3P^2/\rho$, independent of u and q. One of the main problems of the Gaussian closure is that the resulting model breaks the original Hamiltonian structure of the parent model, the Vlasov-Poisson equation (Ref. 12). As a consequence, this closure introduces unphysical dissipation.
Based on the preservation of the Hamiltonian structure, another closure based on dimensional analysis was proposed in Ref. 10, namely
$$R = \frac{P^2}{\rho} + \frac{4q^2}{P}, \qquad (2)$$
which, in terms of the reduced moments defined below, reads $S_4 = S_2^2 + S_3^2/S_2$. We notice that this closure depends explicitly on the asymmetries of the distribution function, measured by q, and is still independent of the fluid velocity u. However, this closure has a fundamental drawback, which is that homogeneous equilibria are all unstable. In order to see this, we linearize the equations of motion around one of such equilibria with $q_0 = 0$, $u_0 = 0$ and $E_0 = 0$, i.e., $\rho = \rho_0 + \delta\rho$, $u = \delta u$, $P = P_0 + \delta P$, $q = \delta q$ and $E = \delta E$. The linearized equations of motion for $\delta X = (\delta\rho, \delta u, \delta P, \delta q, \delta E)$ in Fourier space, i.e., for $\delta X \propto \mathrm{e}^{ikx}$, reduce to $\delta\dot{X} = A\,\delta X$, where $A$ is a $5\times 5$ matrix depending on $k$ and on the equilibrium quantities $\rho_0$ and $P_0$. The matrix $A$ does not have purely imaginary eigenvalues for sufficiently small wavenumbers $k$, from which we conclude that all equilibria with $q_0 = 0$ are unstable.
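To illustrate this instability numerically, the sketch below assembles the linearized matrix A for the closure (2) from the moment equations (1) as written above, in normalized units where ρ₀ = P₀ = 1 and the plasma frequency is unity; the normalization and the explicit entries are our reconstruction, not necessarily the authors' exact matrix.

```python
# A minimal sketch of the linear stability check for the closure (2).
import numpy as np

def linearized_matrix(k):
    """Linearization of the four-moment system (1) with R = P^2/rho + 4q^2/P,
    around rho0 = P0 = 1, u0 = q0 = E0 = 0, in units where the plasma
    frequency and thermal speed are unity. State: (drho, du, dP, dq, dE)."""
    ik = 1j * k
    return np.array([
        [0,   -ik,     0,   0,  0],   # continuity
        [0,     0,   -ik,   0,  1],   # momentum (force term (e/m) dE)
        [0, -3*ik,     0, -ik,  0],   # pressure
        [ik,    0,    ik,   0,  0],   # heat flux, with dR = -drho + 2 dP
        [0,    -1,     0,   0,  0],   # fluctuating electric field
    ], dtype=complex)

for k in (0.5, 1.0, 2.0):
    growth = np.linalg.eigvals(linearized_matrix(k)).real.max()
    print(f"k = {k}: max growth rate = {growth:.3f}")
```

With these conventions, the maximal growth rate is positive for k < 1 (wavelengths longer than the Debye length) and vanishes, up to round-off, for larger k, consistent with the instability of such equilibria in a sufficiently long periodic box.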
Here we are looking for a closure which combines two important properties of the Vlasov-Poisson equation, namely, the stability of symmetric homogeneous equilibria, and its Hamiltonian structure.
We do not assume any particular form for the distribution function. Instead, we solve the Jacobi identity in order to determine all possible R(ρ, u, P, q) for which this identity is satisfied. As a result, we unveil a one-parameter family of Hamiltonian fluid closures. We show that for these closures, the associated Poisson bracket has two Casimir invariants of the entropy type, i.e., two observables C of the form $C = \int \mathrm{d}x\, \rho\, \Gamma(\rho, P, q)$. These Casimir invariants provide normal variables in which the closure, in parametric form, is found to be polynomial. We then examine numerically some properties of the resulting Hamiltonian model in two cases: plasma oscillations and the two-stream instability.
II. DERIVATION OF THE FOUR-FIELD HAMILTONIAN CLOSURE
The one-dimensional Vlasov-Poisson equation has a Hamiltonian structure (Ref. 13; see also Refs. 14 and 15 for a review), i.e., the equations of motion can be recast using a Hamiltonian and a Poisson bracket, $\partial_t f = \{f, H\}$ and $\partial_t E = \{E, H\}$, where
$$H[f,E] = \int\!\!\int \frac{m v^2}{2}\, f\,\mathrm{d}x\,\mathrm{d}v + \int \frac{E^2}{8\pi}\,\mathrm{d}x. \qquad (4)$$
The Poisson bracket between two scalar functionals of f and E is given by
$$\{F,G\} = \int\!\!\int \frac{f}{m}\left(\partial_x \frac{\delta F}{\delta f}\,\partial_v \frac{\delta G}{\delta f} - \partial_v \frac{\delta F}{\delta f}\,\partial_x \frac{\delta G}{\delta f}\right)\mathrm{d}x\,\mathrm{d}v + \frac{4\pi e}{m}\int\!\!\int f\left(\widetilde{\frac{\delta G}{\delta E}}\,\partial_v \frac{\delta F}{\delta f} - \widetilde{\frac{\delta F}{\delta E}}\,\partial_v \frac{\delta G}{\delta f}\right)\mathrm{d}x\,\mathrm{d}v, \qquad (5)$$
where $\frac{\delta F}{\delta f}$ and $\frac{\delta F}{\delta E}$ denote the functional derivatives of F with respect to f and E, respectively. In particular, this bracket satisfies the Jacobi identity, i.e.,
$$\{F,\{G,H\}\} + \{G,\{H,F\}\} + \{H,\{F,G\}\} = 0$$
for all functionals F, G and H.
Remark: Gauss's law is derived from a Casimir invariant of the bracket (5), namely $C = \partial_x E - 4\pi e\int f\,\mathrm{d}v$. Here we consider a neutral plasma, i.e., such that the value of this Casimir invariant is $-4\pi e\, n_0$, where $n_0$ is the density of the neutralizing background; this yields Gauss's law, $\partial_x E = 4\pi e\left(\int f\,\mathrm{d}v - n_0\right)$.
Regardless of the truncation, Eq. (1) can be recast in the Poisson-bracket form $\partial_t X = \{X, H\}$ (see Ref. 10 for more details), where $X = (\rho, u, S_2 = P/\rho^3, S_3 = 2q/\rho^4, E)$, and the bracket is built from the $2\times 2$ matrices $\alpha = \partial_x \gamma$ and $\beta$, whose entries involve $S_4 = R/\rho^5$ and $S_5$, an arbitrary function of $\rho$, $u$, $S_2$ and $S_3$. As a consequence, since the bracket is antisymmetric, the models all conserve energy regardless of the closures $S_4 = S_4(\rho, u, S_2, S_3)$ and $S_5 = S_5(\rho, u, S_2, S_3)$. We notice that $\beta = \gamma + \gamma^T$. This allows us to rewrite the Poisson bracket in a more antisymmetric way, yielding the bracket (6). The Jacobi identity for the bracket (6) leads to the constraints (7) on the matrix $\gamma$, which must hold for all i, j, k, m (with implicit summation over the repeated index n).
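The finite-dimensional analogue of such constraints is easy to check symbolically. Purely as a hedged illustration of the kind of computation involved (on a known Hamiltonian example, the rigid-body bracket, rather than the γ matrix of the text), the sympy sketch below verifies the Jacobi identity $\sum_l (J_{il}\,\partial_l J_{jk} + J_{jl}\,\partial_l J_{ki} + J_{kl}\,\partial_l J_{ij}) = 0$ for a Poisson matrix J(X).

```python
# A minimal sketch of a symbolic Jacobi-identity check for a Poisson matrix.
import sympy as sp
from sympy import LeviCivita

x = sp.symbols("x1:4")  # (x1, x2, x3)

# Rigid-body (so(3)) Poisson matrix: J_ij = sum_k eps_ijk x_k.
J = sp.Matrix(3, 3, lambda i, j: sum(LeviCivita(i, j, k) * x[k]
                                     for k in range(3)))

def jacobiator(J, x, i, j, k):
    """Left-hand side of the Jacobi identity for a Poisson matrix J(X)."""
    return sp.simplify(sum(
        J[i, l] * sp.diff(J[j, k], x[l])
        + J[j, l] * sp.diff(J[k, i], x[l])
        + J[k, l] * sp.diff(J[i, j], x[l])
        for l in range(len(x))))

# All components vanish, so this J defines a genuine Poisson bracket.
print(all(jacobiator(J, x, i, j, k) == 0
          for i in range(3) for j in range(3) for k in range(3)))
```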
A. Explicit expression for the Hamiltonian closure
In Ref. 10, it was shown that in order for the bracket (6) to be Hamiltonian, the closures $S_4$ and $S_5$ need to be of the form $S_4 = S_4(S_2, S_3)$ and $S_5 = S_5(S_2, S_3)$, i.e., they do not depend on $\rho$ and $u$. The conditions (7) then boil down to three constraints. Equivalently, a necessary and sufficient condition is that the closure function $S_4$ satisfies two coupled nonlinear partial differential equations, Eqs. (8). From these equations, we readily check that the Gaussian closure $S_4 = 3S_2^2$ is not a solution, which means that the Gaussian closure is not Hamiltonian. In addition, we check that the solution given by Eq. (2), corresponding to the dimensional analysis of Ref. 10, i.e., $S_4 = S_2^2 + S_3^2/S_2$, is the simplest solution. However, this is not an adequate solution, since all homogeneous equilibria are then unstable, as pointed out above.

To solve Eqs. (8), we start by looking for solutions close to symmetric distributions, i.e., of the form $S_4 = f_0(S_2) + f_1(S_2)\,S_3^2 + O(S_3^4)$. We insert this expansion in Eqs. (8) and consider their leading behavior near $S_3 = 0$. This leads to a set of two coupled ordinary differential equations; by combining them, we obtain a single ordinary differential equation for $f_0$. Near $S_2 = 0$, we look for solutions of the type $f_0(S_2) \sim S_2^{\alpha}$. A possible solution is obviously the one obtained using the dimensional analysis of Ref. 10, i.e., $f_0(S_2) = S_2^2$. In addition, there is a less trivial family of solutions for $\alpha = 5/3$. More generally, we look for solutions which can be expanded in Puiseux series. We show that the only possible solutions are $f_0(S_2) = S_2^2$ and $f_0(S_2) = k\,S_2^{5/3}$, for any value of $k$. For practical purposes, we define $\kappa = 5k/9$. We notice that, contrary to the solution provided by dimensional analysis, the second solution comes as a family parameterized by $\kappa$.

The interesting feature is that this family extends to a Hamiltonian closure for arbitrarily large values of $S_3$. Indeed, we look for a solution which can be expanded as $S_4 = \sum_{n \geq 0} f_n(S_2)\, S_3^{2n}$. Inserting this ansatz in Eqs. (8) leads to a recurrence relation for the coefficients $f_n(S_2)$, together with an additional constraint that each $f_n$ has to satisfy for all $n \geq 1$. The first few terms can be written down explicitly, and the expressions of the higher-order terms of the series expansion of $S_4$ can be obtained using Mathematica; we have found that the Jacobi identity is satisfied up to orders $S_3^{2 n_{\max}}$ for the values of $n_{\max}$ we have tested. This led us to conjecture that the limit $n_{\max} \to \infty$ corresponds to a Hamiltonian closure. We notice that the closure is singular at a specific value of $S_2$, for all $n \geq 0$. Therefore, we have a scaling relationship for $S_4$. In particular, one interesting feature is that the first order of the closure does not depend on $\rho$, i.e., $R(\rho, P, 0) = \rho^5 S_4(P/\rho^3, 0)$ is a function of $P$ alone.

Remark: The closure $S_4(S_2, S_3)$ expresses a relation between the kurtosis and the skewness of the distribution function.
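The $\rho$-independence of the leading-order term is quick to verify symbolically; a sketch with SymPy (our own check, with an arbitrary prefactor):

```python
import sympy as sp

rho, P = sp.symbols('rho P', positive=True)
S2 = P / rho**3

# Leading-order closure term S4 ~ S2^(5/3): R = rho^5 * S2^(5/3)
R_53 = sp.simplify(rho**5 * S2**sp.Rational(5, 3))
print(R_53)                      # -> P**(5/3): no rho dependence
assert sp.diff(R_53, rho) == 0   # rho-independent, as claimed

# By contrast, the Gaussian-like term S2^2 retains a rho dependence:
R_gauss = sp.simplify(rho**5 * S2**2)
print(R_gauss)                   # -> P**2/rho
assert sp.diff(R_gauss, rho) != 0
```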
B. Casimir invariants
A very interesting property of the noncanonical Poisson bracket (6) is that it possesses a number of Casimir invariants, i.e., observables $C$ such that $\{C, F\} = 0$ for any other observable $F$. First, we look for Casimir invariants of the entropy type, i.e., of the form $C = \int \mathrm{d}x\; \rho\, \Gamma$. The function $\Gamma$ satisfies the conditions (12), which hold for all $j$, $k$ and $n$ in $\{2, 3\}$ (with implicit summation over the repeated index $i$). We assume that we have $K$ solutions, denoted $\Gamma_k$ for $k = 2, \ldots, K$. Using the property $\beta = \gamma + \gamma^T$, we prove that the above conditions are equivalent to the conditions (13), which hold for all $n$, $k$ and $l$.
Using series expansions, we found two solutions of Eq. (13), whose first elements involve the functions $h_n(S_2)$ and $g_n(S_2)$. The functions $h_n$ and $g_n$ for $n \geq 1$ are determined from recurrence relations, both obtained from Eq. (12) with $j = 3$ and $n = 3$.
These Casimir invariants allow us to define particularly relevant variables, referred to as normal variables, in which the Hamiltonian system is greatly simplified. We perform a local change of variables from $(S_2, S_3)$ to $(\Gamma_2, \Gamma_3)$. The bracket (6) then becomes the bracket (14), where $\tilde{\beta}$ is a symmetric matrix whose elements are defined with an implicit summation over repeated indices. From Eq. (13), we deduce that the matrix $\tilde{\beta}$ is constant. As a consequence, the bracket (14) always satisfies the Jacobi identity.
Therefore the existence of two Casimir invariants of the entropy type for the bracket (6) is sufficient to ensure that it is a Poisson bracket. Note that we use the terminology Casimir invariant also for a bracket which is a priori not of the Poisson type. Using the expressions at $S_3 = 0$, the matrix $\tilde{\beta}$ takes a very simple form. In addition, the existence of two Casimir invariants of the entropy type ensures the existence of a third Casimir invariant, $C_3$, whose expansion is given as a series with coefficients defined for $n \geq 0$. We notice that three Casimir invariants similar to $C_1$, $C_2$ and $C_3$ (but, of course, different) have been found for the Hamiltonian closure obtained using dimensional analysis (see Ref. 10).
C. Parametric expression for the Hamiltonian closure
There is another significant advantage to working with the normal variables $\Gamma_i$. What is not fully satisfactory with the variables $S_i$ is that the closure is given as a relatively complex expansion, and consequently we were not able to check the Jacobi identity at all orders in the expansion. The origin of this complication is the search for an explicit closure function $S_4(S_2, S_3)$, not the search for a Hamiltonian closure per se. Here instead we look for a parametric expression of the closure, and we consider the normal variables as parameters of the closure. More precisely, we consider an arbitrary change of coordinates from variables $\Gamma_i$ to variables $S_i$, from which the closure functions are obtained. We start with the bracket (14); requiring that it reproduces the bracket (6) yields conditions that hold for all $i$, $j$ and $n$. The first set of equations (15) defines parametrically the functions $S_3$, $S_4$ and $S_5$: once the function $S_2$ is specified, all of the other functions $S_i$ are uniquely determined by these equations. By inverting the equations $\Gamma_i = \Gamma_i(S_2, S_3)$, or by solving one of the constraints (16), we obtain an explicit expression, Eq. (17), for $S_2(\Gamma_2, \Gamma_3)$. Inserting this expression in the parametric equations for $S_3$, $S_4$ and $S_5$ leads to the expressions (18). We notice that the closure is no longer given as an infinite series. In particular, the functions $S_n$ for $n = 2, 3, 4, 5$ are polynomials in the two variables $\Gamma_2$ and $\Gamma_3$: the degree in $\Gamma_3$ is $n$ and the degree in $\Gamma_2$ is $n+1$. Using Mathematica (Ref. 23), we have checked that the constraints (16) are satisfied. For $S_2$ to be positive, a necessary and sufficient condition is that $0 < \Gamma_2 < \kappa$, or, if $\Gamma_2 > \kappa$, that $\Gamma_3$ is suitably bounded. This means that $S_2$ can take arbitrarily large values, provided that $S_3$ is not too large. We notice that the point $(S_2 = \kappa, S_3 = 0)$ in Fig. 1 is obtained for $\Gamma_2 = \kappa$ regardless of the value of $\Gamma_3$.
In Fig. 2, we have represented the closure function $S_4$ given parametrically by Eqs. (18) for a selected range of parameters $(\Gamma_2, \Gamma_3)$. The surface becomes more complicated, with more branches, as the range of $(\Gamma_2, \Gamma_3)$ is extended (see the Mathematica code available at Ref. 17). We notice that there is a central brighter patch where there is a single value of $S_4$ for a given $(S_2, S_3)$. It corresponds to the explicit closure $S_4 = S_4(S_2, S_3)$ depicted in Fig. 1.
D. Equations of motion
In the variables $(\rho, u, \Gamma_2, \Gamma_3, E)$, the Poisson bracket (14) and the Hamiltonian take explicit forms in which $S_2$ is given by Eq. (17). The equations of motion are given by $\dot{F} = \{F, H\}$.

Remark 1: In the case of an external time-dependent electric field $E_0(x, t)$, the closure is identical. First we need to autonomize the bracket. For the Vlasov-Poisson equation, the variables are the fields $f(x, v, t)$ and $E_1(x, t)$, together with $t$ and $K$ ($K$ being the canonically conjugate variable to time $t$), such that the total electric field is $E = E_0 + E_1$.
The Hamiltonian and the Poisson bracket are extended accordingly, both for the Vlasov-Poisson equation and for the reduced fluid equations. The equations of motion are obtained by replacing $E$ with $E_0 + E_1$ in the Vlasov equation and in the momentum equation, and by replacing $E$ with $E_1$ in the Ampère equation.
Remark 2: By rescaling the parameters $\Gamma_2$ and $\Gamma_3$, and by rescaling the density $\rho$, the parameter $\kappa$ can be scaled out of the equations. As a consequence, the one-parameter family of Hamiltonian closures can be seen as a unique Hamiltonian model, and the parameter $\kappa$ is now contained in the initial condition.
E. Stability of the symmetric and homogeneous equilibria
We have found a one-parameter family of closures which fulfill the first requirement, namely, that the resulting models are Hamiltonian. The second requirement is the stability of the equilibria with $q_0 = 0$. The linearized equations of motion reduce to Eq. (3) with a modified matrix $A$. From the dispersion relation, we define a frequency $\omega_0$, where $\omega_p = \sqrt{4\pi e^2 \rho_0/m}$ is the plasma frequency. The eigenvalues of $A$ are all purely imaginary if $\omega_0^2 > \omega_{BG}^2(k)$, where $\omega_{BG}(k)$ is given by the Bohm-Gross dispersion relation, $\omega_{BG}^2(k) = \omega_p^2 + 3 k^2 P_0/(m\rho_0)$. The non-zero eigenvalues of $A$ follow from the dispersion relation. Therefore the homogeneous equilibria are stable for $\omega_0^2 > \omega_{BG}^2$, which is equivalent to requiring that $S_2 < S_2^{(c)}$, or $\Gamma_2 < \kappa$. In terms of the parameters of the equilibrium, this means that the pressure $P_0$ is such that $P_0^{1/3}/\rho_0 < \kappa$. A crucial factor is that the closure $R(\rho, P, q = 0)$ does not depend on $\rho$; in this case, this condition is necessary and sufficient for stability. We recall that $R(\rho, P, 0) = \rho^5 S_4(P/\rho^3, 0)$. The fractional exponent 5/3 in the closure comes from the requirement that $R$ does not depend on $\rho$, ensuring the stability of the equilibria. More general cases ensuring stability would be those for which, at $q = 0$, $\partial R/\partial P > 3P/\rho$ for all $\rho > 0$ and $P > 0$. However, these conditions do not ensure that the resulting model is Hamiltonian. As expected, the requirement that the model is Hamiltonian is more stringent than requiring that homogeneous equilibria are stable.
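As an illustration of the stability criterion, a sketch in Python (our own normalizations; the coefficient 3 in the Bohm-Gross relation is the standard one-dimensional value, and the threshold is taken verbatim from the condition $P_0^{1/3}/\rho_0 < \kappa$ stated above):

```python
import numpy as np

def plasma_frequency(rho0, e=1.0, m=1.0):
    """omega_p = sqrt(4 pi e^2 rho0 / m) (Gaussian units, rho0 a density)."""
    return np.sqrt(4.0 * np.pi * e**2 * rho0 / m)

def bohm_gross(k, rho0, P0, e=1.0, m=1.0):
    """Standard 1D Bohm-Gross relation: omega^2 = omega_p^2 + 3 k^2 P0/(m rho0)."""
    return np.sqrt(plasma_frequency(rho0, e, m)**2 + 3.0 * k**2 * P0 / (m * rho0))

def equilibrium_is_stable(rho0, P0, kappa):
    """Stability of a symmetric homogeneous equilibrium (q0 = 0),
    per the criterion quoted in the text: P0^(1/3)/rho0 < kappa."""
    return P0**(1.0 / 3.0) / rho0 < kappa

print(equilibrium_is_stable(rho0=1.0, P0=0.5, kappa=1.0))  # True:  0.5^(1/3) < 1
print(equilibrium_is_stable(rho0=1.0, P0=2.0, kappa=1.0))  # False: 2^(1/3) > 1
```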
III. NUMERICAL APPLICATIONS
The objective of this section is not to offer a detailed comparison between the numerical implementation of the Hamiltonian fluid model and that of the parent kinetic model. The objective is more modest, since we limit ourselves to a couple of illustrations of the properties of the resulting Hamiltonian model: plasma oscillations and the two-stream instability.
A. Plasma oscillations
We consider an initial distribution function built from a skew-normal distribution. In the linear regime, the fluid model and the kinetic model lead to the same dispersion relation. For the skew-normal equilibrium, the initial values of $S_2$ and $S_3$ follow from the central moments of the distribution. We consider the fluid model with $\kappa = 1$. Given the initial values of $S_2$ and $S_3$, we compute the initial values for $\Gamma_2$ and $\Gamma_3$. We represent the values of $E(x, t)$ in Fig. 3, obtained with the fluid and the kinetic model. We notice some qualitative similarities between the kinetic and the fluid model, such as plasma oscillations. However, as expected, the fluid model does not capture the damping of the field (clearly visible for $L_x/\lambda_D = 2\pi$), which is a purely kinetic effect. For larger values of $L_x$, i.e., $L_x/\lambda_D = 3\pi$, the damping is reduced as expected, and the agreement between the kinetic and the fluid simulations is improved.
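To make the initialization step concrete, the fluid fields follow from the central moments of the skew-normal distribution; a sketch (our own normalization: $m = 1$, $\rho_0 = 1$, and $q$ taken as half the third central moment, so that $S_3 = 2q/\rho^4$ reduces to the third central moment itself):

```python
from scipy.stats import skewnorm

def fluid_initial_values(a, loc=0.0, scale=1.0):
    """Central moments of a skew-normal f(v) -> initial fluid fields.
    Convention: m = 1, rho0 = 1, q = (1/2) * third central moment,
    so S2 = P/rho^3 = P and S3 = 2q/rho^4 = mu3."""
    mean, var, skew = skewnorm.stats(a, loc=loc, scale=scale, moments='mvs')
    mu3 = float(skew) * float(var) ** 1.5      # third central moment
    rho0, u0, P0, q0 = 1.0, float(mean), float(var), 0.5 * mu3
    S2, S3 = P0 / rho0**3, 2.0 * q0 / rho0**4
    return rho0, u0, P0, q0, S2, S3

print(fluid_initial_values(a=4.0))  # nonzero S3: an asymmetric equilibrium
```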
B. Two-stream instability
Next, we consider the two-stream instability. The growth rate obtained with the fluid model is compared with that of the kinetic model, i.e., a growth rate of $0.25924553\,\omega_p$ (which has been corrected for the effects of the spatial grid). We notice that both models display some similar features, such as the oscillations at the beginning. Also, the slopes of the higher-order modes correspond rather well, despite the fact that these modes have higher amplitudes in the fluid model.
The main discrepancy between the two models occurs when the amplitude of the field saturates, which is when the kinetic effects are predominant; these cannot be described by the fluid model. In addition, all wavenumbers are unstable in the Hamiltonian fluid model, while only the fundamental mode is unstable in the kinetic model (the higher harmonics are driven by the fundamental mode through nonlinear couplings). For both models, the initial electric field has the same amplitude. Nonetheless, the amplitude of the fundamental mode is slightly larger in the fluid model than in the kinetic model (cf. the blue curves on the left panel of Fig. 4). This is due to differences in how the initial condition projects onto the modes of each system. In both cases, a linear analysis produces mode amplitudes that are in excellent agreement with the numerical results.
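A standard way to extract the quoted growth rate from either simulation is a log-linear fit of a field mode's amplitude over the exponential phase; a generic sketch (synthetic data stands in for the simulation output):

```python
import numpy as np

def growth_rate(t, amplitude):
    """Fit log|E_k|(t) = gamma*t + c over the supplied window; return gamma."""
    gamma, _ = np.polyfit(t, np.log(np.abs(amplitude)), 1)
    return gamma

# Synthetic exponential phase with gamma = 0.259 (in units of omega_p = 1):
t = np.linspace(5.0, 20.0, 200)
E = 1e-6 * np.exp(0.259 * t) * (1.0 + 0.01 * np.cos(3.0 * t))  # small ripple
print(growth_rate(t, E))  # ~0.259
```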
DATA AVAILABILITY
Data sharing is not applicable to this article as no new data were created or analyzed in this study. | 2022-06-15T01:15:58.596Z | 2022-06-14T00:00:00.000 | {
"year": 2022,
"sha1": "e57e84db9a27664c727ceab34396141097ad26f8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1304f67a8bee6a1e1c493e83d1c86acabab9ed2",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
209274980 | pes2o/s2orc | v3-fos-license | Treatment of Idiopathic Membranous Nephropathy (IMN)
We present a 59-year-old patient with type 2 diabetes mellitus and massive nephrotic syndrome, both clinical (anasarca) and biochemical. The renal biopsy showed a membranous nephropathy (MN). Blood analysis was positive for antibodies against the M-type phospholipase A2 receptor (anti-PLA2R) at a very high titer. Given the diagnosis of idiopathic membranous nephropathy (IMN), treatment was started with a modified Ponticelli regimen, with no response, and the patient required periodic ultrafiltration sessions. Rituximab induces nephrotic syndrome (NS) remission in two-thirds of patients with IMN, even after other treatments have failed. We proposed treatment with rituximab based on published evidence. In IMN, the presence of anti-PLA2R antibodies is considered highly specific for idiopathic forms, but it has not been shown to be associated with a particular clinical profile. Assessing circulating anti-PLA2R autoantibodies and proteinuria may help in monitoring disease activity and guiding personalized rituximab therapy in nephrotic patients with IMN.
Introduction
MN is a disease characterized by the deposition of immune complexes at the subepithelial level. Its most frequent clinical presentation is NS, and it is today the leading cause of NS in Caucasian adults [1].
In recent years, it has been discovered that IMN has an immunological basis. The data in favour of this alteration of the immune system are the findings on electron microscopy of renal biopsies: granular deposits of immunoglobulin G (mainly IgG4) and C3 along the glomerular basement membrane, and electron-dense immune complex deposits in the subepithelium that entail complement activation. MN may also be secondary to infections, tumors, autoimmune diseases, and the use of different drugs [1]. PLA2R has been identified as the target antigen of autoantibodies in IMN patients, and these autoantibodies are known as anti-PLA2R. PLA2R is a type I transmembrane glycoprotein belonging to the C-type lectin family. More recently, anti-PLA2R antibodies have been found to be immunoglobulins of the IgG4 subclass [2]. These antibodies are present in 60-80% of IMN patients prior to immunosuppressive therapy. However, in secondary forms of MN these antibodies are much less prevalent. No anti-PLA2R has been observed in other pathological conditions or in healthy individuals. In recent years, several researchers have reported the appearance of anti-PLA2R antibodies in patients with secondary MN, so more data are needed before concluding with certainty that, when these antibodies are found, there is no need to investigate an underlying cause of secondary MN [2,3].
Recently, various studies have described that about 70% of IMN cases are associated with the presence of anti-PLA2R. The antibody titer at diagnosis is related to the likelihood of spontaneous remission (SR) and response to treatment. However, it has not been demonstrated that, in patients with IMN, the presence of anti-PLA2R antibodies is associated with a particular clinical profile at disease presentation, or that it implies differences in clinical course, response to treatment, or long-term prognosis. On the other hand, although most studies agree that the presence of anti-PLA2R antibodies is highly specific for IMN, there are reported cases in which the presence of these antibodies coincides with other possible etiologies, and about 30% of patients with IMN are anti-PLA2R negative. In this last group of patients, antibodies against other podocyte antigens have been described, whose clinical correlation is still being investigated; there is therefore greater uncertainty about the possible identification of secondary etiologies over time. However, since most studies have been cross-sectional, little information is available about the diagnosis of possible etiologies responsible for MN over time in both anti-PLA2R-positive and -negative patients [4].
In IMN, the disease appears to develop through the binding of an autoantibody to a podocyte antigen located on the subepithelial side of the podocyte. For this reason, various immunosuppressive treatments are currently available [2].
A very high percentage of patients (more than 40% in many series) develop spontaneous remission of the disease without any type of treatment, while another considerable percentage (around 30-40%) develops progressive renal failure accompanied by nephrotic proteinuria [2]. Ponticelli in 1989 showed that combined treatment with cytotoxics produced partial or complete remission of proteinuria in a significant proportion of patients with membranous nephropathy [5]. However, the literature is controversial regarding the effectiveness of the scheme described by Ponticelli et al. [6].
The most important predictors of risk for a progressive decline in renal function are persistent severe proteinuria for at least 3 months, a reduced creatinine clearance at presentation, and a decline in creatinine clearance during the period over which proteinuria is assessed [7].
Resistant patients are defined as those with moderate or high risk disease who fail an adequate trial of treatment with both cyclophosphamide-based and calcineurin inhibitor-based regimens [8].
A trial of rituximab can be considered after a careful evaluation of the potential risks and benefits of further immunosuppression. Weak evidence suggests that a clinically relevant response to rituximab may be less likely in patients with a creatinine clearance below 75 mL/min per 1.73 m² [9].
In this case, we present a 59-year-old patient with type 2 diabetes mellitus and massive nephrotic syndrome, both clinical (anasarca) and biochemical. The renal biopsy showed a membranous nephropathy (MN). Anti-PLA2R antibodies were positive at a very high titer (366 RU/mL). Given the diagnosis of IMN, treatment was started with a modified Ponticelli regimen, with no response, and the patient required periodic ultrafiltration sessions. We proposed treatment with rituximab based on published evidence.
Clinical case
A 59-year-old male with a history of long-standing type 2 DM with retinopathy and diabetic neuropathy, OSAS and intrinsic asthma, with normal blood pressure and renal function. One month prior to admission to our department (October 2018), he reported pretibial, malleolar and scrotal edema of morning onset, worsening throughout the day, and dyspnea on moderate exertion. He initially consulted his primary care physician and was treated with furosemide. In the absence of a response, he was referred to the hospital.
Kidney ultrasound showed normal-sized kidneys, with good corticomedullary differentiation, without dilation of the urinary tract. A percutaneous renal biopsy was performed, showing renal parenchyma corresponding to the cortical zone and including 28 glomeruli. There was diffuse and global thickening of the glomerular basement membranes by subepithelial deposits ("spikes"), with focal mesangial extension not associated with mesangial cell proliferation but with floccular-capsular adhesions; no endocapillary proliferation or glomerulitis; absence of karyorrhexis; irregular thickening of Bowman's capsule; subepithelial immune complex deposits (IgG); glomerular sclerosis (10%); tubular atrophy and interstitial fibrosis (moderate); arteriosclerosis (moderate); hyaline arteriolosclerosis; and vascular changes associated with hypertension (Figures 1-6). Depletive treatment was started with high-dose IV furosemide and IV albumin, with poor response, requiring periodic ultrafiltration sessions. Treatment was started with a modified Ponticelli regimen: prednisone at a dose of 0.5 mg/kg/day and cyclophosphamide 125 mg/day initially and then 100 mg/day, adjusted for renal function. An angiotensin-converting enzyme inhibitor was added. After 2 months of treatment, proteinuria had not changed, kidney function remained normal, and the patient continued to require ultrafiltration sessions. We therefore decided to administer rituximab, and will continue to monitor renal function, proteinuria and anti-PLA2R antibodies.
Discussion
MN is the leading cause of NS in adults [1]. From a clinical perspective, it is classified as idiopathic (IMN) or secondary, depending on whether or not it is possible to identify a responsible etiology. In the absence of clinical or biochemical data indicating a specific etiology, distinguishing between the two forms can be difficult using only the data provided by the renal biopsy [2].
MN in adults is most often idiopathic (approximately 75% of cases) but can be caused by a variety of drugs, infections, and underlying diseases. These include gold, penicillamine, systemic lupus erythematosus, malignancy, and hepatitis B and C virus infection [2].
It is often not possible to distinguish idiopathic from secondary MN on clinical grounds alone, even though serologic studies (e.g., antinuclear antibodies, hepatitis B serology) and a history of drug exposure or cancer may reveal a potential cause. However, there are certain findings on electron microscopy and immunofluorescence that suggest secondary disease. In patients with secondary MN, cessation of the offending drug or effective treatment of the underlying disease is usually associated with improvement in the nephrotic syndrome [10].
In view of the potential toxicity of the drugs used to treat IMN, with or without the nephrotic syndrome, the decision to initiate therapy is based, in part, upon an understanding of the natural history of untreated patients, with and without features of the nephrotic syndrome at presentation [11]:
• Spontaneous complete remission of proteinuria occurs in 5-30% at 5 years.
• The occurrence of end-stage renal disease in untreated patients is approximately 14% at 5 years, 35% at 10 years, and 41% at 15 years.
Risk factors for progressive idiopathic MN: in view of the often benign clinical course, immunosuppressive agents should be considered only in those with idiopathic MN who are most at risk for progressive disease or who have severe symptomatic nephrotic syndrome. Both histologic and clinical findings may be important in risk assessment.
• Clinical findings associated with a higher risk of developing end-stage renal disease include older age at onset (particularly greater than 50 years), male sex, nephrotic-range proteinuria (particularly if protein excretion exceeds 8-10 g/day), and an increased serum creatinine at presentation [12].
In contrast to these adverse risk factors, female sex, younger age (children and young adults), non-nephrotic-range proteinuria, a progressive decline in protein excretion, and presentation with normal renal function have been associated with a relatively benign course [7]. In addition, patients of Asian ancestry seem to have a better long-term prognosis than those of other ancestries.
• Histologic findings are frequently regarded as important predictors of outcome, as the risk of progression is increased in patients with glomerular scarring (segmental sclerosis) and correlates more closely with the severity of the tubulointerstitial disease than with the degree of glomerular injury [12,13]. This observation is typical of most glomerular diseases.
Importance of attaining remission: attainment of a complete remission (whether spontaneous or not) is associated with good long-term outcomes. In contrast, little is known about the prognosis in patients with a partial remission [14].
A complete remission was defined as protein excretion below 0.3 g/day, while a partial remission was defined as protein excretion below 3.5 g/day plus a 50% or greater reduction in protein excretion from the peak value. Renal failure was defined as a creatinine clearance ≤15 mL/min, initiation of dialysis, or renal transplantation. The following findings were associated with better renal survival on multivariate analysis that took into account clinical and laboratory data:
• higher initial creatinine clearance and lower proteinuria at presentation,
• lower mean arterial blood pressure over the observation period,
• attainment of complete or partial remission of proteinuria.
Based upon this model, patients are divided into low-, moderate-, and high-risk subsets with varying degrees of risk for progression to more advanced kidney insufficiency (defined as a creatinine clearance ≤60 mL/min per 1.73 m²) over 5 years:
• Low risk: proteinuria remains less than 4 g/day and creatinine clearance remains normal for a 6-month follow-up period. Such patients have a less than 8% risk of developing chronic renal insufficiency over 5 years.
• Moderate risk: proteinuria is between 4 and 8 g/day and persists for more than 6 months. Creatinine clearance is normal or near normal and remains stable over 6 months of observation. Chronic renal insufficiency develops over 5 years in approximately 50% of these patients.
• High risk: proteinuria is greater than 8 g/day and persists for 3 months, and/or renal function is either below normal (and considered due to MN) or decreases during the observation period. Approximately 75% of such patients are at risk of progression to chronic renal insufficiency over 5 years. (A schematic encoding of these criteria is sketched after this list.)
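The three risk categories above amount to a decision rule on proteinuria, its persistence, and creatinine clearance. A minimal sketch of that rule in Python (illustrative only: the thresholds are those quoted above, the function name is ours, and no such rule replaces clinical assessment):

```python
def mn_risk_category(proteinuria_g_day, months_persistent, ccr_normal_and_stable):
    """Schematic low/moderate/high risk rule for idiopathic MN,
    following the thresholds quoted in the text."""
    if proteinuria_g_day > 8 and (months_persistent >= 3 or not ccr_normal_and_stable):
        return "high"        # ~75% progress to chronic renal insufficiency over 5 years
    if 4 <= proteinuria_g_day <= 8 and months_persistent > 6 and ccr_normal_and_stable:
        return "moderate"    # ~50% over 5 years
    if proteinuria_g_day < 4 and months_persistent >= 6 and ccr_normal_and_stable:
        return "low"         # <8% over 5 years
    return "indeterminate"   # does not fit a category; reassess over time

print(mn_risk_category(10.5, 3, ccr_normal_and_stable=False))  # -> 'high'
```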
Nonimmunosuppressive therapies
Given the high rate of gradual spontaneous improvement in patients with IMN, only selected patients with more severe or progressive disease should receive immunosuppressive therapy [1].
In contrast, almost all patients are candidates for more general therapies for nephrotic syndrome, such as angiotensin inhibition, lipid lowering, and, in selected patients, anticoagulation. Other aspects of therapy include diuretics to control edema and maintenance of adequate nutrition.
Proteinuria goal: the optimal proteinuria goal in patients with chronic kidney disease is less than 1000 mg/day. However, this goal is often not attainable in patients with IMN.
Goal blood pressure: the goal blood pressure in patients with MN is the same as in other patients with proteinuric chronic kidney disease (125/75 mmHg). Attainment of this goal can slow the progression of proteinuric chronic kidney disease and can provide cardiovascular protection, since chronic kidney disease is associated with a marked increase in cardiovascular risk. The data supporting these recommendations are presented separately.
Attainment of the blood pressure goal in patients with MN usually requires more than angiotensin inhibition alone. Correction of volume overload is of particular importance and usually requires loop diuretics. Diuretics should be pushed until the blood pressure goal is reached or the patient has attained "dry weight" which, in the presence of persistent hypertension, is defined as the weight at which further fluid removal leads to symptoms (fatigue, orthostatic hypotension) or to decreased tissue perfusion as evidenced by an otherwise unexplained elevation in the blood urea nitrogen and/or serum creatinine concentration.
A low-salt diet is an important component of antihypertensive therapy (especially when using angiotensin inhibitors) and edema control in patients with MN. In addition, a high-salt diet can increase proteinuria, and in some individuals, a high-salt diet rather than increased immunologic activity should be considered as an underlying cause of worsening proteinuria.
Lipid lowering: hyperlipidemia, with often dramatic elevations in the serum cholesterol concentration, is commonly present in patients with nephrotic syndrome. The mainstay of therapy for such hypercholesterolemia is statins.
Immunosuppressive therapies
Indications for and choice of therapy: since many patients with mild to moderate disease undergo spontaneous remission and immunosuppressive agents have appreciable toxicity, the decision to treat must be based upon the probability that the patient will have progressive disease (defined as an otherwise unexplained elevation in serum creatinine, or persistent high-grade or increasing proteinuria, in patients at moderate to high risk for progression) [15].
The treatment regimen must be based upon the risk of progressive disease. First-line immunosuppressive therapy consists of cytotoxic drugs (usually cyclophosphamide) plus glucocorticoids, or a calcineurin inhibitor with low-dose or no glucocorticoids (a regimen based upon cytotoxic drugs is preferred in some high-risk patients with a declining glomerular filtration rate due to MN and an estimated glomerular filtration rate above 30 mL/min/1.73 m²). Patients who do not respond to one regimen are usually treated with the other, and those with resistant disease may be treated with rituximab.
Our case is that of a patient at high risk of progression.
High risk for progression: high-risk patients with idiopathic MN are defined as those with protein excretion exceeding 8 g/day that persists for more than 3 months, and/or renal function that is either below normal (and considered due to MN) or decreases during the observation period, despite maximum nonimmunosuppressive therapy. These patients are also likely to have prominent nephrotic symptoms or signs, such as marked hypoalbuminemia and edema. Approximately 75% of such patients progress to worsened renal insufficiency over 5 years.
Rituximab has been used in patients with idiopathic membranous nephropathy who have failed previous treatment with other immunosuppressive regimens.
Rituximab may have benefit among patients with a moderate risk of progression who have not previously received immunosuppressive therapy [8,16]. In one unblinded 12-month trial, the rate of complete or partial remission was higher among patients treated with rituximab (65 versus 34%). These findings are consistent with observational studies that demonstrate a maximal reduction in proteinuria at 18-24 months after treatment with rituximab. Anti-PLA2R antibodies, which were present in 73% of patients at baseline, disappeared in a greater proportion of patients receiving rituximab (50 versus 12%). Serious adverse events were similar between the two groups.
Resistant disease: the optimal approach to moderate- or high-risk patients with stable renal function who fail treatment with both cyclophosphamide- and calcineurin inhibitor-based regimens is not known. We prefer a trial of rituximab in such patients, although only limited data are available suggesting efficacy.
Several observational (nonrandomized) studies in patients with resistant idiopathic MN have reported outcomes following the administration of rituximab. Rituximab therapy is generally well tolerated; adverse effects are minor and consist primarily of infusion reactions. Anti-PLA2R-positive patients with lower titers had significantly greater remission rates compared with patients who had higher titers.
Rituximab may provide benefit to patients who failed prior immunosuppressive therapy, especially those with relatively preserved renal function. Four weekly doses of rituximab (375 mg/m²) appear to have the same effect on proteinuria reduction as a regimen of 1 g every 2 weeks.
We suggest the somewhat simpler and cheaper regimen of a 1 g dose given intravenously and repeated 2 weeks later. Patients who continue to have significant proteinuria may have this dose repeated at 6 months.
PLA2R is a transmembrane receptor that is highly expressed in glomerular podocytes and has been identified as a major antigen in human idiopathic MN.
The anti-PLA2R autoantibody-negative patients may be in the midst of a spontaneous or treatment-induced remission.
Monitoring serum anti-PLA2R antibodies may allow a more accurate assessment of the immunological response to rituximab (and possibly other therapies) than is provided by measurement of proteinuria alone [17,18].
Conclusions
MN is among the most common causes of the nephrotic syndrome in nondiabetic adults, accounting for up to one-third of biopsy diagnoses.
A significant percentage (15-50% of cases) of patients with IMN develop progressive chronic kidney disease.
Rituximab induces NS remission in two-thirds of patients with IMN, even after other treatments have failed. The rate of complete or partial remission was higher among patients treated with rituximab.
Therefore, assessing circulating anti-PLA2R autoantibodies and proteinuria may help in monitoring disease activity and guiding personalized rituximab therapy in nephrotic patients with IMN. Monitoring serum anti-PLA2R antibodies may allow a more accurate assessment of the immunological response to rituximab (and possibly other therapies) than is provided by measurement of proteinuria alone. | 2019-11-07T15:30:11.681Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "6ad0798c259d456dd3ddef303c697dd97be52154",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/67754",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "c128e2f3e9b0838cc1c59f7ca5ce2ed65fbfc3db",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44871992 | pes2o/s2orc | v3-fos-license | A Low-Grade Fibromyxoid Sarcoma of the Internal Abdominal Oblique Muscle
A low-grade fibromyxoid sarcoma (LGFMS) is a rare tumor, with a benign histologic appearance but malignant behavior. This report describes a 74-year-old man with an internal abdominal oblique muscle mass. The tumor appeared as a well-defined ovoid mass on computed tomography, with mild uptake on fluorine-18-fluorodeoxyglucose positron-emission tomography images. Radical resection with wide safe margins was performed. Histologically, the tumor was composed of spindle-shaped cells in a whorled growth pattern, with alternating fibrous and myxoid stroma. MUC4 expression, a highly sensitive and specific immunohistochemical marker for LGFMS, was detected. Therefore, we diagnosed the tumor as LGFMS. At the 3-month follow-up, there was no sign of recurrence or metastasis. We report the first case of LGFMS arising from the internal abdominal oblique muscle.
Background
Soft tissue tumors are uncommon, accounting for only approximately 1% of cancers in adults, and they are often difficult to diagnose [1]. Owing to the small numbers of these neoplasms, it is difficult to perform systematic research and to develop optimal approaches for the treatment and diagnosis of these patients. Unfortunately, many patients still undergo improper initial diagnosis and treatment.
A low-grade fibromyxoid sarcoma (LGFMS) is a rare variant of spindle cell tumor that is composed of collagen-rich and myxoid parts [2]. Owing to its variable morphology, LGFMS can be difficult to distinguish from benign mesenchymal tumors and other low-grade sarcomas. Clinically, LGFMS develops mainly in the subcutaneous or superficial soft tissue overlying the muscles of the trunk or proximal extremities in middle-aged adults. LGFMS sometimes recurs locally and metastasizes distantly [3]. Recently, immunohistochemistry has been playing a key role in the diagnosis of LGFMS. The tumor is identified by MUC4 staining, which can help to distinguish this tumor type from its histologic mimics [4].
This report describes a 74-year-old man with LGFMS of the right internal abdominal oblique muscle.
Case Presentation
The patient was a 74-year-old man who had an abdominal wall mass identified on abdominal ultrasonography during a routine examination (Figure 1(a)). There was no previous history of a significant injury to his abdomen. On physical examination, the elastic firm mass measured approximately 20 × 20 mm without tenderness. Computed tomography (CT) revealed a low-density mass in the right internal abdominal oblique muscle (Figure 1(b)). On contrast-enhanced CT, the mass was mildly and nonhomogeneously enhanced (Figure 1(c)). The mass was not detectable on a CT image acquired 5 years previously. Fluorine-18-fluorodeoxyglucose (FDG) positron-emission tomography (PET) imaging demonstrated low FDG uptake in the mass in the right internal abdominal oblique muscle. The maximum standardized uptake value (SUVmax) of the tumor was 1.4 (Figure 1(d)). FDG-PET imaging did not reveal any other distant metastases. For diagnosis and treatment, en bloc wide resection of the tumor was performed. The resected specimen contained a mass with a pseudocapsule. On gross examination, the cut surface of the tumor revealed that the lesion was pale white and glistening (Figure 2(a)).
Histopathological examination demonstrated that the tumor was contained within a thin fibrous capsule and was well demarcated from the surrounding muscle and soft tissue. The tumor cells were spindle-shaped, fibroblast-like cells within a whorled collagenous stroma, with sporadic myxoid areas. There were also sporadic areas of increased cellularity, and the tumor cells were occasionally multinucleated or stellate in shape. The nuclei of the tumor cells were mildly pleomorphic and hyperchromatic, but these features were not diagnostic of unequivocal malignancy (Figures 2(b) and 2(c)). On immunohistochemical examination, the tumor cells were negative for desmin, S100, smooth muscle actin, CD34, and CD117, and were positive for MUC4 (Figure 2(d)). The tumor was diagnosed as LGFMS.
The patient had not experienced either local recurrence or distant metastasis at the final follow-up 3 months after surgery.
Discussion
LGFMS occurs most commonly in the deep soft tissues of the proximal extremities and trunk. Other sites include the chest wall, hip, inguinal region, axilla, retroperitoneum, mesentery, pelvis, and maxilla [5-8]. To our knowledge, this is the first report of an LGFMS occurring in the internal abdominal oblique muscle.
There are a few reports of LGFMS being positive on FDG-PET, with SUVs of the masses ranging from 1.8 to 4.0 [9-11]. The SUVmax in our case was 1.4, and the tumor was smaller than in previously reported cases. Williams et al. [9] reported that FDG-PET could be useful to demonstrate sites of possible metastasis and to direct biopsy for rare soft tissue sarcomas, but is of uncertain negative predictive value for small tumors. Maretty et al. [12] reported that small tumors were PET negative on the initial scan and so were not removed until 3 months later, after metastases were observed. They concluded that PET-CT should be used with caution in patients with LGFMS.
In our case, on microscopic examination, the tumor showed alternating fibrous and myxoid areas and had spindle-shaped and stellate fibroblast-like tumor cells in this myxoid background, evident on hematoxylin and eosin (H&E) staining. Although the tumor cell nuclei were mildly pleomorphic and hyperchromatic, the malignant nature of the tumor was not readily discernible. Our first differential diagnoses were "intramuscular myxoma," "nodular fasciitis," and "cellular myxoma," but none of them fit the histological features of the tumor. However, on immunohistochemical analysis, the tumor was unexpectedly positive for MUC4, but negative for desmin, S100, smooth muscle actin, CD34, and CD117. Our diagnostic possibilities were thus narrowed to LGFMS. Doyle et al. reported that MUC4 was a highly sensitive and specific immunohistochemical marker for LGFMS [4]. They reported that all 49 LGFMS cases (100%) showed cytoplasmic staining for MUC4, and all other tumor types were negative for MUC4, other than 6 (30%) monophasic synovial sarcomas. Among other soft tissue tumors, MUC4 is a sensitive and useful marker for identifying only sclerosing epithelioid fibrosarcoma, which has similarities to LGFMS [13]. When we reviewed the H&E slides of the tumor, we found the features to be consistent with LGFMS.
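In standard screening terms, the counts quoted from Doyle et al. translate directly into sensitivity and specificity; a small sketch (our own arithmetic: the 20-case denominator for monophasic synovial sarcomas is inferred from "6 (30%)", and the specificity shown is computed within that mimic group only):

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# All 49 LGFMS cases stained positive for MUC4:
print(sensitivity(tp=49, fn=0))   # 1.0, i.e. "highly sensitive"

# 6 of an inferred 20 monophasic synovial sarcomas also stained positive:
print(specificity(tn=14, fp=6))   # 0.7 within that mimic group
```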
Although patients with LGFMS are often misdiagnosed with benign tumors such as fibromatosis and neurofibroma, adequate surgical excision of the tumor is undoubtedly necessary because of the frequent recurrence of LGFMS. The local recurrence rate proved to be clearly lower in specimens with adequate margins [3,5]. A recent study reports that the rate of metastasis in LGFMS is 45% [3], although earlier studies claimed it rarely had metastatic potential. The treatment of metastatic LGFMS is difficult and may include multiagent chemotherapy and repeated, selective surgery of operable metastases [12]. Moreover, Evans reported that clinical and histological responses were quite poor for distant metastases and nonresectable lesions [3]. In our case, the resected specimen had a wide margin and the tumor was small. The patient was followed up with clinical physical examinations and CT every 3 months.
Conclusion
In conclusion, this is a case of LGFMS that formed in the internal abdominal oblique muscle of a 74-year-old man. Although LGFMS can be difficult to distinguish from a benign tumor on clinical examination, it should be correctly diagnosed on histological and immunohistochemical examinations in order to ensure adequate treatment. | 2018-04-03T05:53:21.594Z | 2016-05-10T00:00:00.000 | {
"year": 2016,
"sha1": "314dcbfef01e03b7a3e42d87e6451540d8ee5197",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/cris/2016/8524030.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0f71b0b961b30b7d94c1bd878d10d604402a1d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103852195 | pes2o/s2orc | v3-fos-license | The use of dual reductants in gold nanoparticle syntheses
We present novel syntheses of gold nanoparticle colloids using two reducing agents simultaneously as dual reductants. These colloids were characterised by absorption spectroscopy, hydrodynamic distribution data and transmission electron microscope images, and compared to those produced by single reducing agents. The use of tannic acid and sodium borohydride together has resulted in small spherical particles and a brown coloured colloid, and has demonstrated how the size of the reducing/stabilising agent can directly influence the size of the resulting nanoparticles. Further, the simultaneous use of tannic acid and ascorbic acid has given purple/blue and blue/grey shades resulting from quasi-spherical and anisotropic particles respectively. This illustrates how dual reductants can be utilised as a means to control particle shape.
Introduction
Gold nanoparticles are well known and utilised for their bright and attractive colours. Since the 17th century they have been used to give red and purple shades to glass, 1 and today their controllable colour is utilised in sensor, imaging and design applications. 2-6 The observed colour results from localised surface plasmon resonance (LSPR): the resonant oscillation of nanoparticle conduction electrons upon interaction with incoming electromagnetic radiation. 7 This gives rise to intense absorptions in the visible spectrum. The energy of these absorptions, and hence the colour of the nanogold, can be controlled by modifying the size and shape of the particles. For example, spherical gold nanoparticles 20 nm in diameter absorb light with a wavelength of approximately 520 nm, giving a red coloured colloid, and increasing the size of the particles results in the LSPR band shifting to longer wavelengths and becoming broader. 8 Correspondingly, the colloidal colour changes through purple to grey for very large particles and aggregates. Further, nanorods possess two size dimensions and so give rise to two LSPR absorptions. When synthesised by a seeding growth approach using cetyltrimethylammonium bromide (CTAB), nanorods with aspect (longitudinal to transverse) ratios of 13 and 18 give blue coloured colloids. 9 However, CTAB is known to be toxic and so new environmentally safe methods to extend the colour range are desirable. 10 Gold nanoparticles are commonly synthesised by the chemical reduction method, in which Au³⁺ ions are reduced by a suitable agent. Common reducing agents include trisodium citrate, 11-13 tannic acid, 14,15 sodium borohydride, 16-18 ascorbic acid, 19-21 and amine molecules. 22,23 The addition of a stabilising agent such as polyvinylpyrrolidone (PVP) is often required. 24,25 Here we present two novel nanogold syntheses in which two reducing agents are used simultaneously as dual reductants. To the best of our knowledge these syntheses have not been reported in the literature before now. We demonstrate that when tannic acid and sodium borohydride are used together, a brown coloured colloid results from small spherical particles. The use of tannic acid and ascorbic acid together gives purple/blue and blue/grey shades resulting from quasi-spherical and anisotropic particles respectively. No structure directing agents such as CTAB are used. These reproducible syntheses, each consisting of two simple steps, were carried out in aqueous media at room temperature. They provide advantages over other common methods utilised to obtain gold nanoparticles of these types.
Previously reported two-reductant syntheses have utilised a block copolymer, which is a very mild reducing agent and acts mostly as a stabilising agent to control particle size. 26 The addition of trisodium citrate as a second reducing agent in that report increased the concentration of spherical particles; anisotropic particles were not formed. Our synthesis of brown nanoparticles has extended this approach by using the size of the stabilising/reducing agent to directly control the size of the nanoparticles. However, our synthesis of blue and grey nanoparticles is completely distinct from this, in that we utilise two highly effective reducing agents to significantly alter the mechanism of nanoparticle formation from when either reducing agent is used alone, resulting in different sized and shaped particles, including anisotropic particles. To the best of our knowledge, we report for the first time the use of small molecule dual reductants with very similar reducing properties, added in very quick succession, to control nanoparticle shape and produce anisotropic particles. This is achieved without the use of a toxic growth directing agent such as CTAB, instead utilising the non-toxic, biodegradable and biocompatible chemicals tannic acid and ascorbic acid.
Brown nanogold synthesis
Sample A was synthesised by adding HAuCl₄ (20 μL, 3.36 wt% Au) to distilled water (30 mL) in a glass beaker, while stirring at room temperature. Tannic acid (200 μL, 1 wt%) was then added. Sodium borohydride (1200 μL, 0.05 mol dm⁻³) was added 30 s later (before any colour change resulting from Au³⁺ reduction by tannic acid). A colour change was observed within 10 s of the addition of sodium borohydride and the solution was allowed to stir for a further 45 min. The final pH of the sample was 8.0. The colour of the colloid was observed to change overnight, and to continue to change slightly over the next five to six days. Therefore, characterisations were carried out seven days after synthesis. The colloid was stable for at least six months.
To act as a comparison, gold colloids made using only sodium borohydride (sample 1) and only tannic acid (sample 2) were also synthesised. HAuCl₄ (20 μL, 3.36 wt% Au) was added to distilled water (30 mL) in a glass beaker, while stirring at room temperature. Sodium borohydride (sample 1, 1200 μL, 0.05 mol dm⁻³) or tannic acid (sample 2, 200 μL, 1 wt%) was added and the solution allowed to stir for a further 45 min.
For further comparison, a colloid made using sodium borohydride and polyvinylpyrrolidone (PVP) as the stabilising surfactant was synthesised (sample 3). HAuCl₄ (20 μL, 3.36 wt% Au) was added to distilled water (30 mL) in a glass beaker, while stirring at room temperature. PVP (100 μL, 2 wt%) was added, followed by sodium borohydride (1200 μL, 0.05 mol dm⁻³). The solution was allowed to stir for a further 45 min.
Blue nanogold synthesis
HAuCl₄ (10 μL, 3.36 wt% Au) was added to distilled water (10 mL) in a glass vial, while stirring at room temperature. Subsequently a volume of tannic acid (1 wt%) was added (50 μL, 100 μL and 500 μL for samples B, C and D respectively). Ascorbic acid (50 μL, 5 wt%) was added 30 s later (before any colour change resulting from Au³⁺ reduction by tannic acid). A colour change was observed within 10 s of the addition of ascorbic acid and the solution was allowed to stir for a further 15 min. The final pH of all the samples was 2.4. All characterisations were performed on the day of synthesis.
To act as a comparison, gold colloids made using only tannic acid (sample 4) and only ascorbic acid (sample 5) were also synthesised. HAuCl₄ (10 μL, 3.36 wt% Au) was added to distilled water (10 mL) in a glass vial, while stirring at room temperature. Tannic acid (sample 4, 100 μL, 1 wt%) or ascorbic acid (sample 5, 50 μL, 5 wt%) was added and the solution allowed to stir for a further 45 min.
Characterisation
Absorption spectra of gold colloids were recorded with an Agilent 8453 spectrophotometer using disposable plastic cuvettes with a path length of 1 cm. Backgrounds were run using distilled water. Transmission electron microscopy (TEM) images were obtained using a JEOL 2010 electron microscope. Colloid samples were drop-cast onto 200 mesh copper grids and allowed to air dry. A JEOL EC-52000IC ion cleaner was used to plasma treat each sample for 15 min at ∼300 V to remove organic substances. Particle diameters were determined using the software ImageJ. Hydrodynamic radius measurements were obtained using a Zetasizer Nano ZS instrument, using disposable plastic cuvettes with a path length of 1 cm. A 4 mW He-Ne laser was employed at a wavelength of 633 nm and a backscattered light detection angle of 173°.
Results and discussion
The syntheses presented here have utilised two reducing agents simultaneously to produce gold nanoparticles with interesting sizes and/or shapes. First, the use of tannic acid and sodium borohydride simultaneously has resulted in a colloid with optical properties that differed from when either reducing agent was used alone (Fig. 1(a) and (b)). When sodium borohydride was used alone to reduce Au³⁺ (sample 1), a pale pink solution resulted with an LSPR peak centred at 519 nm in the absorption spectrum. This compares well to literature results where 5.5 ± 0.2 nm spherical particles resulted in a red/pink solution with a λmax value of 515 nm. 18 The use of tannic acid alone in sample 2 gave a red coloured solution and an LSPR band at 528 nm. This is consistent with literature reports in which similar LSPR bands have resulted from spherical particles with average diameters ranging from 8 nm to 25 nm. 14,15 The reducing agent concentrations in these samples were chosen such that complete reduction of Au³⁺ ions was achieved.
The use of these two reducing agents together gave a brown coloured colloid (sample A). The absorption spectrum of sample A has shown a strong absorption in the ultraviolet region that decays into the visible region, with superimposed LSPR bands at 430 and 490 nm (Fig. 1(a)). This is consistent with the colour observed and is indicative of the presence of very small gold nanoparticles. 27 The two peaks suggest two dominating particle sizes, but because they were not well defined there are likely a significant number of other nanoparticle sizes also present around these size ranges. The hydrodynamic diameter distribution data has shown two peaks centred at 2.6 nm and 7.0 nm and so supports the presence of two main particle sizes (Fig. 1(c)). Because larger particles absorb light of a lower energy, 8 this suggests that ∼2.6 nm particles were responsible for the absorption band at 430 nm, and ∼7.0 nm particles for that at 490 nm, collectively resulting in the brown colour.
TEM analysis of sample A has revealed a large number of small, spherical nanoparticles, as predicted from the absorption spectrum (Fig. 1(d) and (e)). The majority of the particles had diameters between 1 nm and 8 nm; however, some larger particles were also observed (Fig. 1(f)). Although care was taken to ensure the TEM images obtained were representative of the entire sample, the size analysis did not confirm the two main sizes of nanoparticles that the absorption and hydrodynamic diameter data have suggested.
Tannic acid is an effective reducing agent and is capable of reducing Au³⁺ at room temperature and forming a high concentration of nanoparticles; however, in this case it is its role as a stabilising agent that is of more importance. When tannic acid was replaced in the synthesis by PVP (sample 3), the brown colour of the resulting colloid was visually similar to that of sample A, as reflected by the fact that both colloids gave a band in the absorption spectra at 490 nm (Fig. 2). Because PVP cannot act as a reductant at room temperature but only as a stabiliser, the similarity between the colloids suggests that tannic acid played a mostly stabilising role in the formation of the sample A colloid. However, the absorption spectrum for sample 3 did not possess the peak at 430 nm that the sample A colloid did, and therefore the presence of the nanoparticles that have resulted in this peak (∼2.6 nm in diameter) must be due to the addition of tannic acid. It could be that these ∼2.6 nm nanoparticles have formed as a result of a favourable nanoparticle-tannic acid ratio. These nanoparticles are on the same size scale as the stabilising tannic acid molecules (∼2.5 nm from one side of a tannic acid molecule to the other). This suggests there might be a favourable number of tannic acid molecules per nanoparticle, and if it is beneficial to maximise nanoparticle coverage to maximise stabilisation, then this would drive the formation of nanoparticles of a certain size. If a tannic acid molecule is approximated as a circle with an area of ∼4.9 nm², then it would require four molecules to completely coat a nanoparticle with a diameter of 2.5 nm (surface area = 19.6 nm²). If this is the tannic acid to gold ratio that is most favoured, then this would drive the formation of 2.5 nm diameter gold nanoparticles, explaining the presence of the peak at 2.6 nm in the hydrodynamic diameter distribution. Nanoparticles of this size fall into the range known to be most effective for catalysis (typically less than 5 nm). 28-31 This concept of using the size of the tannic acid stabilising agent to directly control the size of the resulting particle is a novel approach and has significant potential in catalysis applications. Therefore, further research into optimising this process would be beneficial for this field of research.
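The footprint argument above is easy to reproduce numerically; a sketch using the approximations stated in the text (tannic acid as a disc of area ∼4.9 nm², complete monolayer coverage; the function name is ours):

```python
import math

TANNIC_ACID_FOOTPRINT_NM2 = 4.9   # disc approximation used in the text

def molecules_to_coat(diameter_nm, footprint_nm2=TANNIC_ACID_FOOTPRINT_NM2):
    """Number of stabiliser molecules needed to tile a sphere's surface."""
    surface_area = math.pi * diameter_nm**2   # sphere surface area = pi * d^2
    return surface_area / footprint_nm2

for d in (2.5, 5.0, 7.0):
    print(f"{d} nm particle: area {math.pi * d**2:.1f} nm^2, "
          f"~{molecules_to_coat(d):.1f} tannic acid molecules")
# 2.5 nm -> 19.6 nm^2 and ~4.0 molecules, matching the estimate above.
```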
The absorbance intensity in the spectrum of sample 3 was much lower and the colloid much paler than for sample A, even though the gold concentration was identical and the sodium borohydride added in excess. It is likely that a large number of the nanoparticles in sample 3 were too small to undergo LSPR and so did not contribute to the colour and were not observed via absorption spectroscopy. The synthesis of sample A is therefore advantageous in applications where a strong brown colour is desirable, for example in certain design applications.
This dual reductant synthesis of nanogold has similarities with that given in the literature, in that one of the reducing agents plays a significant role in particle stabilisation. 26 However, where the second reductant simply increases the concentration of nanoparticles in the reported synthesis, the use of tannic acid and sodium borohydride together here has resulted in an overall decrease in the nanoparticle size. This control over size is an important advancement. Further, the choice of reducing agents in dual reductant syntheses can completely change the mechanism of nanoparticle formation, as shown below.
Tannic acid and ascorbic acid are both effective reducing agents, and are both able to reduce Au³⁺ at room temperature within comparable timeframes. Their use as dual reductants has resulted in particles with optical properties that differ from when tannic acid or ascorbic acid was used alone (Fig. 3(a)). As in sample 2, the use of tannic acid alone in sample 4 gave a red coloured solution and an LSPR band at 528 nm. When ascorbic acid was used alone (sample 5), this produced a purple colloid with a slightly wider LSPR band at 553 nm. This colloid was analysed via TEM in this study and found to contain spherical and quasi-spherical particles with an average diameter of 40 ± 14 nm, consistent with the nanospheres 30-40 nm in diameter and red/purple colloids reported in the literature. 19 Again, the reducing agent concentrations utilised have ensured complete reduction of the Au³⁺ ions.
The colloid changed significantly in colour to a purple/blue shade upon use of the two reducing agents together (sample B) (Fig. 3(b)). Tannic acid and ascorbic acid are both biodegradable, biocompatible and possess low toxicity. Therefore, this synthesis of blue nanogold offers a significant advantage over the typical CTAB synthesis of nanorods utilised to obtain blue colours, as CTAB is known to be toxic. 10 The peak in the absorption spectrum of sample B has red-shifted from those of samples 4 and 5 to 593 nm, which suggests a larger particle size or that some agglomeration/aggregation has occurred. Additionally, the LSPR peak was much broader for sample B, consistent with the blue shade observed and indicating a greater particle size distribution or anisotropy. The TEM images of the purple/blue colloid (sample B) showed particles that were spherical or quasi-spherical in shape (Fig. 4(a)), similar in shape to those observed in TEM images of sample 5. In the absence of any observable anisotropy, the blue colour of sample B is attributed to the large range of nanoparticle sizes observed (Fig. 4(b)). A mechanism of nanoparticle growth has been developed for sample B. Although tannic acid can fully reduce Au³⁺, it is not quite as effective as ascorbic acid. Hence, when the tannic acid was added to the Au³⁺ solution initially, it was not able to produce any significant quantity of nanoparticles in the short time before ascorbic acid was added (30 seconds), as indicated by the lack of any colour development. Because ascorbic acid is the stronger reducing agent, the subsequent addition of ascorbic acid to the tannic acid and Au³⁺ solution resulted in nanoparticles quickly beginning to form. These partially formed nanoparticles possess layers of Au⁺ on their surface, and Au⁺ is easier to reduce than Au³⁺ (standard reduction potentials of +1.692 V and +1.498 V for Au⁺ and Au³⁺, respectively). 32 Therefore, tannic acid was able to grow the nanoparticles partially formed by ascorbic acid, capturing the early-formed tannic acid nanoparticles on the surface as well as forming further nanoparticles in its own right. The large range of nanoparticle sizes observed by TEM can be attributed to this mechanism, due partly to the slightly different rates of reaction of tannic acid and ascorbic acid, and partly to the fact that individual nanoparticles will have come into contact with different quantities of tannic acid molecules, at different stages in their growth, during this fast synthesis.
Unlike for sample A, the optical properties of the colloid could be further altered by increasing the concentration of the second reducing agent, tannic acid, as shown in Fig. 5. This results from the fact that the tannic acid and ascorbic acid both have reducing roles in this synthesis. The colour change from purple/blue to a blue/grey shade is consistent with the absorption spectra obtained, in which the LSPR peaks for samples C and D were very broad and further red-shifted compared to sample B (λmax = 620 nm and 643 nm for samples C and D, respectively). This suggests that agglomeration or aggregation has occurred and/or that anisotropic particles were present in these colloids. The absorption spectra of samples C and D additionally showed a shoulder at ~530 nm, reminiscent of the transverse LSPR band commonly seen when nanorods are present. This peak was poorly separated from the main, longitudinal band (at 620 nm and 643 nm), suggesting the presence of polydisperse anisotropic particles or aggregates/agglomerates. TEM analysis of sample D confirmed that significant anisotropy was present (Fig. 6). The structures observed by TEM may be split into two categories: (i) smaller branched structures with diameters in the range of 20-40 nm (Fig. 6(a)), and (ii) larger particles that were overall essentially spherical but had an uneven surface of nanoscale protrusions (Fig. 6(b)). The larger spherical particles of category (ii) had an average 'outer' diameter (including protrusions) of 56 ± 4 nm as compared to an average 'inner' diameter (excluding protrusions) of 40 ± 2 nm (calculated in ImageJ). The nanoprotrusions on these particles have induced multiple LSPR modes similar to the longitudinal mode of gold nanorods, resulting in a broad red-shifted LSPR band and consequently a blue/grey coloured colloid. The poor resolution between the longitudinal and transverse peaks in the absorption spectra can be explained by a number of factors: (i) a large range of different nanoprotrusion aspect ratios, with a higher aspect ratio giving a red-shifted LSPR mode, (ii) a large range of outer-to-inner ratios for the spherical nanoparticles with nanoprotrusions, and (iii) a large range of structure sizes, from the small branched structures to the larger spherical particles.
The different densities observed in the TEM images of the smaller branched structures suggest that they consisted of smaller nanoparticles that have grown together to form anisotropic clusters. This is consistent with the spherical particles observed for sample B; it is likely that such particles have grown together by the action of the excess tannic acid reducing agent to form structures such as those observed in Fig. 6. Because the overall size of many of these formations is smaller than that of a number of the spherical particles observed for sample B, this suggests that the particles grew together while they were still in the process of formation, i.e. before they had grown to their full size.
In this synthesis, if the addition of ascorbic acid was delayed until after the colour change resulting from tannic acid reduction was observed (starting at ~60 seconds), then the optical properties of the colloid changed. The delay resulted in a narrower, blue-shifted LSPR band and purple/pink colloids, arising from a mixture of spherical particles made by tannic acid and spherical particles made by ascorbic acid (the greater the time delay, the greater the similarity to the optical properties of the sample 4 tannic acid nanoparticles). This highlights that the tannic acid and the ascorbic acid need to act as reducing agents essentially simultaneously to achieve particle shape control and anisotropy. This is a novel demonstration of the use of two small molecule reducing agents of similar strength to control particle shape that, to our knowledge, has not previously been examined in the literature.
Conclusions
When two reducing agents with similar reductive capabilities are used simultaneously to produce gold nanoparticles, a simple mixture of two nanoparticle types is not produced. Instead, the mechanism of nanoparticle formation is altered and nanoparticles of different shapes and sizes result. The two syntheses developed proceed at room temperature, are carried out in aqueous media, and are simple and reproducible. The use of tannic acid and sodium borohydride is useful when small spherical nanoparticles and a strong brown colour are desirable. The use of tannic acid and ascorbic acid is environmentally safe and provides a simple alternative means of producing anisotropic nanoparticles and purple/blue and blue/grey shades of colloid.
Conflicts of interest
There are no conflicts to declare. | 2019-04-09T13:06:03.527Z | 2017-09-22T00:00:00.000 | {
"year": 2017,
"sha1": "685d9b1d2d81db0068637bd0e1f43b11b13b4b4e",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra07724f",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "113b0df9359a28494b478a24289094fab14be73a",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
131843387 | pes2o/s2orc | v3-fos-license | INTEGRATING MULTIPLE CRITERIA EVALUATION AND GIS IN ECOTOURISM: A REVIEW
The concept of 'ecotourism' has been heard increasingly in recent decades. Ecotourism is an environmentally responsible form of travel intended to appreciate nature experiences and cultures. Ecotourism should have a low impact on the environment and must contribute to the prosperity of local residents. This article reviews the use of Multiple Criteria Evaluation (MCE) and Geographic Information Systems (GIS) in ecotourism. Multiple criteria evaluation is mostly used for land suitability analysis or to fulfill specific objectives based on the various attributes that exist in a selected area. To support the process of environmental decision making, GIS is applied to display and analyse the data through the Analytic Hierarchy Process (AHP). Integration between MCE and GIS tools is important for objectively determining the relative weights of the criteria used. The MCE method can resolve the conflict between recreation and conservation by minimising environmental and human impacts. Most studies provide evidence that GIS-based AHP as a multi-criteria evaluation is a strong and effective approach in tourism planning that can effectively aid the development of the ecotourism industry.
INTRODUCTION
Ecotourism has become an important source of revenue for many countries. However, ecotourism should be balanced in terms of a clean environment, without polluting or spoiling the natural beauty, and should not harm the surrounding population. Ecotourism is a type of tourism that continues to preserve nature while providing various facilities in areas with tourist attractions, whether relating to attractions in the natural environment or man-made ecology, through the modernization of physical and social infrastructure in the region. It thereby becomes one of the economic activities that contributes to national income while improving the living standards of local communities. Strategic planning to develop an ecotourism area is not measured in terms of the landscape alone; many factors should be considered, such as topography, climate, soil type, and more. This planning can be supported by the AHP technique with the help of GIS. Tourism and information technology are two of the most active motivators of global economic growth.
MULTIPLE CRITERIA DECISION ANALYSIS WITH AHP
Multi Criteria Decision Analysis (MCDA) addresses spatial decision problems that typically involve a large set of feasible alternatives and multiple evaluation criteria. Most researchers support the decision process by using GIS, which is recognized as a decision support system capable of analyzing, designing, evaluating, and prioritizing alternative decisions for ecotourism development.
GIS can also obtain the information needed for decision making through a transformation process that combines geographical data and value judgments. MCDA provides a variety of efficient techniques and structuring procedures to reach a decision. Thus, decision making involves multi criteria evaluation (MCE), which is used to rank the alternatives of a decision and establish their priorities.
Multiple criteria evaluation is commonly related to, and makes use of, the Analytic Hierarchy Process (AHP). The AHP is an important technique for analysing land suitability that was developed by Thomas L. Saaty; it is a measurement theory based on pairwise comparisons for making a decision between alternatives and criteria in order to derive a scale of priorities. Before AHP is implemented, all relevant criteria, such as slope, temperature, climate, and soil type, should be available.
The importance and suitability are established by ranking the weights of specific criteria. With strong knowledge and data, the weight of each factor in AHP can readily be identified. AHP involves several steps to untangle the decision in a systematic way: define the problem and draft the decision hierarchy from the top, with the goal, through the intermediate levels, down to the lowest level (the criteria). Then, construct a set of pairwise comparison matrices, and use the priorities obtained from the comparisons to weight the priorities in the level immediately below. This makes it easier for decision makers to identify the relative weights and to compute quantitative values that validate the result of the land suitability analysis.
THE IMPORTANCE OF MULTI CRITERIA EVALUATION IN ECOTOURISM DEVELOPMENT
A criterion (factor) needs to be evaluated and measured for decision making. When selecting or formulating the criteria, the decision maker can choose among several methods to determine the weight of each factor, such as ranking, rating, and the Analytic Hierarchy Process. AHP in ecotourism has been discussed by many researchers. Bali et al. (2015) in Iran commented that criteria such as slope and aspect are significant for ecotourism development and cannot be neglected; the same was reported by Buruamkaew and Muruyam (2011) in Thailand. One review represents steep areas with a slope of more than 45% as a limiting factor for ecotourism activities, concluding that a 15% slope could be an appropriate criterion. According to Geremew and Hailemeriam (2015) in Ethiopia, ecotourism mostly takes place around historical resources, natural areas, and traditional culture; thus, the criterion of road distance must be considered with regard to the natural area. Another research result, by Kumari et al. (2010) in Malaysia, suggests that ecotourism activities are influenced by climatic conditions, so the climate factor must be investigated to obtain suitable ecotourism land. Soil features also greatly affect tourist activities, so the type and texture of the soil need to be determined first to avoid spoiling tourist activities.
Based on the AHP method, the weights of the influencing factors are determined through a pairwise comparison matrix. According to Saaty (1980) in the USA, a matrix is constructed in which each factor is compared with every other factor on a scale from 1 to 9, where 1 indicates equal preference and 9 indicates extreme importance. The table below shows the fundamental scale for factors used by Saaty (1980). Then, the weights are normalized and the consistency ratio (CR) is calculated. The formulas are shown in equations (1) and (2).
Si = Σ (Wi × Ri), i = 1, …, n   (1)

CR = CI / RI, where CI = (λmax − n) / (n − 1)   (2)

where (Si) is the composite suitability score, (n) is the number of factors, (Wi) refers to the weight of factor i, (Ri) is the rating that determines the class of factor i, CI is the consistency index, λmax is the principal eigenvalue of the pairwise comparison matrix, and RI is Saaty's random consistency index.
If CR is larger than 0.10, then the pairwise evaluation needs to be repeated until an acceptable CR value smaller than 0.10 is obtained for the suitability analysis. The weighting process for each factor involves a multi-criteria map; the factors are standardized and weighted by the decision maker, who must decide based on knowledge and fair judgement to produce the resulting ecotourism regions.
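As a minimal illustration of the AHP computation just described, the Python sketch below derives eigenvector weights, the consistency ratio of equation (2), and the composite score of equation (1) for a hypothetical 3 × 3 comparison matrix; the factor names, judgments, and ratings are invented for illustration and are not taken from any of the reviewed studies.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for three
# illustrative factors: slope, climate, soil type.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Criterion weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check, equation (2): CI = (lambda_max - n)/(n - 1), CR = CI/RI.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
CR = CI / RI                              # acceptable when CR < 0.10

# Composite suitability score for one land unit, equation (1): Si = sum(Wi*Ri).
ratings = np.array([4, 3, 2])             # hypothetical class ratings per factor
Si = float(w @ ratings)
print("weights:", w.round(3), "CR:", round(CR, 3), "Si:", round(Si, 2))
```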
METHOD AND FRAMEWORK OF LAND SUITABILITY ANALYSIS
Determining the criteria for selecting an appropriate ecotourism area requires measurable factors and alternatives. Based on past papers, a lot of input data or factors need to be collected, such as topographic maps, climate data, soil-type knowledge, and land use, including distance from roads, water resources, and population. All data can be obtained from sources such as private and government organizations holding information related to the characteristics of ecotourism areas. Then, the criterion values are standardized to a common scale to allow comparison, and the AHP method is applied to determine the factor weights. Finally, the suitability map of the ecotourism area is created, and the result is verified by conducting a field survey or ground truthing. The map below (Figure 4) shows a hilly forest region that is a highly suitable area but has some parts that are not highly suitable due to low forest density. In this case, decision makers selected the suitable ecotourism area based on the priority of nature (forest). The potential of an ecotourism area must also be considered in terms of factors such as existing development, including infrastructure, accommodation, tourists' civic services, and high population.
This is proved by the fact that the purple-coloured areas fall near the red bullets (existing tourist spots).
SUMMARY
In order to evaluate an ecotourism place, the effective criteria should be identified first. Multiple criteria evaluation is very significant for making decisions and can be used with various methods in GIS tools other than the AHP method. MCE is also best suited to complex scenarios that involve many considerations.
The evaluation criteria need to be standardized and weighted to achieve the best location for an ecotourism area; this approach has been successfully applied by many researchers.
Figure 2: The framework of land suitability assessment
Figure 3: Land suitability for ecotourism development in Anzali Watershed, copyright from Bali et al. (2015)
Figure 4: Land suitability map for ecotourism in Cox's Bazar, source: Ullah and Hafiz (2014)
Table 1: The fundamental scale for factors used, source: Aburas et al. (2015), Asian Journal of Applied Sciences | 2017-11-30T14:48:35.932Z | 2016-09-30T00:00:00.000 | {
"year": 2016,
"sha1": "293c459514302ca6aa66bd2bc7b0a4affab5e06f",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-4-W1/351/2016/isprs-archives-XLII-4-W1-351-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "293c459514302ca6aa66bd2bc7b0a4affab5e06f",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Geography"
]
} |
14197935 | pes2o/s2orc | v3-fos-license | Transformation of β-Ni(OH)₂ to NiO nano-sheets via surface nanocrystalline zirconia coating: Shape and size retention
The shape and size of the synthesized NiO nano-sheets were retained during the transformation of sheet-like β-Ni(OH)₂ to NiO at elevated temperatures via a nano-sized zirconia coating on the surface of the β-Ni(OH)₂. The average grain size was 6.42 nm after 600 °C treatment and slightly increased to 10 nm after 1000 °C treatment, showing effective sintering retardation between the NiO nano-sheets. The excellent thermal stability revealed potential application at elevated temperatures, especially for high temperature catalysts and solid-state electrochemical devices.
Introduction
Handling the shapes and sizes of nanostructured materials is of great scientific interest, as much of their superior properties are directly linked to highly chemically or electrochemically active sites or to specific nanostructures. However, it is technologically difficult to apply nanostructured materials at high temperatures, since serious sintering of the materials would cause a loss of active sites. In order to preserve the nanostructured nature of materials at elevated temperatures, retardation of sintering behaviour has been considered. However, only a few reports have emphasized stabilization of nanostructured materials at high temperatures [1][2][3][4][5]. Pang et al. showed that the size of SnO₂ nanoparticles could be kept as small as 3.5 nm even after heat treatment at 600 °C [3]. Wu et al. found that surface-modified methylsiloxyl groups on a metal oxide gel could prevent grain growth of metal oxides during high temperature treatments, and excellent gas sensing properties were shown [4]. Lyu et al. also showed that the thermal stabilities of mesoporous metal oxides were improved by introducing silicon-containing hybrid Gemini surfactants as a nano-propping agent [5].
Nevertheless, in electrochemical systems, the three-phase-boundary (TPB) length (the region where reactant, electrode, and electrolyte materials coexist) plays an important role in the corresponding performances. In solid-state electrochemical devices, such as gas sensors and solid oxide fuel cells (SOFCs), high temperature treatment cannot be avoided during fabrication and operation. Consequently, the surface area and TPB length are significantly decreased. Studies aiming at how to retain the surface areas and microstructures of electrodes during heat treatment are highly needed [6][7][8]. Ozin et al. synthesized a series of mesoporous materials for SOFC electrodes [9][10][11][12][13] and showed better electrochemical performance in oxygen reduction [13]. However, the thermal stability of the mesoporous materials needed to be further improved [9,13]. Liu et al. have also synthesized nanostructured electrodes by a combustion CVD technique for SOFC application, and an electrode material about 50 nm in grain size with higher performance was disclosed at low operating temperatures [14].
Herein, we attempted to synthesize electrode materials with high thermal stability. Nickel oxide nanosheets were coated with small zirconia clusters on the surface. Details of the nanostructure were also explored.
Experimental
The standard process for synthesizing ZrO₂-coated NiO nano-sheets was as follows. 0.03 mol Ni(NO₃)₂ (Acros) was dissolved in 50 ml de-ionized water. Meanwhile, 0.12 mol NaOH (Acros) was dissolved in 50 ml de-ionized water and then added dropwise to the Ni(NO₃)₂ solution to form precipitates. The precipitates were filtered, washed with de-ionized water several times, and dried at 80 °C. The crystalline structure of the dried precipitates was characterized by XRD and was confirmed as hexagonal β-Ni(OH)₂. Then, 0.03 mol of the as-prepared β-Ni(OH)₂ was transferred into 100 ml 1-propanol (Acros) and mixed for 5 h to form a well-dispersed solution. Later, 0.001 mol zirconium 1-propoxide (70 wt% in 1-propanol, Aldrich) was added, and the reactor was sealed immediately, followed by stirring for 3 days. Finally, the solution was heated at 90 °C in an oil bath to remove the solvent. The obtained dried gels were transferred to a furnace immediately for heating at the target temperatures (600, 700, 800, 900 and 1000 °C) with a heating rate of 5 °C/min and a holding time of 1 h. For comparison, β-Ni(OH)₂ mixed with the same amount of zirconia was calcined at different temperatures.
For characterization of the synthesized samples, XRD (Rigaku D/Max-RC, Japan) with Cu Kα radiation (λ = 1.5406 Å) was performed at 40 kV and 100 mA. TEM images were obtained with a JSM 1010 at an accelerating voltage of 80 kV. High resolution images were obtained with a TECNAI F20 FEGTEM operated at an accelerating voltage of 200 kV. For TEM sample preparation, 0.01 g powder was added into 20 ml ethanol followed by ultrasonic treatment for 30 min. Later, 0.05 ml of the solution was dropped onto a carbon-coated Cu grid and then dried at 80 °C for TEM analysis. For nitrogen adsorption analysis, the samples were heated at 250 °C in vacuum to remove adsorbed water before analysis. The surface areas were estimated according to the BET equation.
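As a minimal illustration of the BET analysis mentioned above, the sketch below fits the linearized BET equation to a hypothetical nitrogen isotherm; the relative pressures and adsorbed volumes are invented placeholders, and the N₂ molecular cross-section of 0.162 nm² is a standard assumption, not a value from this paper.

```python
import numpy as np

# Linearized BET: 1/(v*((p0/p) - 1)) = (c - 1)/(vm*c) * (p/p0) + 1/(vm*c)
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # placeholder p/p0
v_ads = np.array([18.0, 22.0, 25.0, 27.5, 30.0, 32.5])   # placeholder cm^3(STP)/g

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)
vm = 1.0 / (slope + intercept)            # monolayer capacity, cm^3(STP)/g

# Surface area: molecules in the monolayer times the N2 cross-section.
N_A, MOLAR_VOL = 6.022e23, 22414.0        # Avogadro (1/mol), cm^3(STP)/mol
area_m2_per_g = vm / MOLAR_VOL * N_A * 0.162e-18
print(round(area_m2_per_g, 1), "m2/g")
```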
Results and discussion
The basic concept for the synthesis of the ZrO₂-coated NiO nano-sheets is illustrated in Fig. 1. First, β-nickel hydroxide was well dispersed in the organic solvent. An appropriate amount of zirconium n-propoxide was then added to the solution. The added zirconium n-propoxide reacted with the surface hydroxyl groups of the Ni(OH)₂:

Ni_surface-OH + Zr(OC₃H₇)₄ → Ni_surface-O-Zr(OC₃H₇)₃ + C₃H₇OH   (1)

In Eq. 1, Ni_surface represents the Ni²⁺ ions on the surface of the Ni(OH)₂. The referred reaction is due to the strong tendency of the inorganic alkoxide to react with hydroxyl groups. Then, the obtained Ni(OH)₂ underwent drying and heat treatment to form the NiO materials with small ZrO₂ clusters coating the surface. The as-synthesized Ni(OH)₂ precursors were first examined by XRD (Fig. 2a). The obtained XRD pattern indicated that the brucite-like crystalline structure of β-Ni(OH)₂ was obtained [15][16][17][18]. From the sharp (1 0 0) and (1 1 0) peaks, the average diameters of the corresponding planes were 9.23 nm and 9.84 nm, respectively, by the Debye-Scherrer equation, indicating that the shape of the synthesized hexagonal β-Ni(OH)₂ grains was close to a regular hexagon with almost equal dimensions along the two diagonals. Further, the average thickness of the hexagonal grains was estimated to be 2.65 nm according to the (0 0 1) peak. These results are consistent with the literature [19,20].
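A minimal sketch of the (Debye-)Scherrer estimate behind these crystallite sizes is given below; only the Cu Kα wavelength comes from the Experimental section, while the shape factor K = 0.9 and the example FWHM/2θ values are assumptions for illustration, not the authors' raw data.

```python
import math

WAVELENGTH_NM = 0.15406   # Cu K-alpha, from the Experimental section

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float, k: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta*cos(theta)), beta = FWHM in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * WAVELENGTH_NM / (beta * math.cos(theta))

# Example: a cubic NiO (2 0 0) reflection near 2-theta = 43.3 deg with a
# 1.4 deg FWHM gives a crystallite size of roughly 6 nm.
print(round(scherrer_size_nm(1.4, 43.3), 1))
```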
The prepared Ni(OH)₂ precursors were first heated at 300 °C for 1 h to form NiO (denoted as 300-NiO). The XRD pattern, shown in Fig. 2(b1), indicated the formation of the NiO cubic structure. From the full width at half maximum (FWHM) of the (2 0 0) peak, the crystalline thickness of 300-NiO was 3.95 nm. Further, the crystalline thicknesses estimated from the (1 1 1) and (2 2 0) peaks were 4.06 and 4.74 nm, respectively. From the crystalline thicknesses estimated along different directions, the shape of the 300-NiO grain was close to spherical. However, in the TEM image of 300-NiO (Fig. 3a), tremendous numbers of sticks and platelets were observed, implying that the morphology of the synthesized β-Ni(OH)₂ was maintained after the 300 °C treatment, with the thickness and length of the sticks around 2-3 nm and 10 nm, respectively, close to those of the synthesized β-Ni(OH)₂. The inconsistency between the XRD and TEM results suggested that the observed nano-sheets were possibly formed by stacking of NiO nano-grains. After heating the synthesized β-Ni(OH)₂ at 600 °C, the nano-sized nature of the synthesized β-Ni(OH)₂ was no longer retained in the formed NiO (denoted as 600-NiO), as first revealed by the sharp peaks in the XRD pattern shown in Fig. 2(b2). The calculated grain size according to the (2 0 0) peak was around 25.41 nm, which was also evidenced by the TEM image (Fig. 3b). Besides, the grains of 600-NiO were cubic-like instead of nano-sheets, due to sintering of the nano-sheets after the 600 °C heat treatment.
On the other hand, the XRD pattern of the 600 °C-treated ZrO₂-coated NiO is shown in Fig. 2(b3). First, a pure cubic NiO phase was revealed, and monoclinic ZrO₂ was not observed, possibly because the amount of ZrO₂ was much smaller than that of NiO. The broadening of the peaks indicated that small NiO grains were maintained by the surface ZrO₂. Again, according to the FWHM of the (2 0 0) peak of the ZrO₂-coated NiO, the grain size was 6.42 nm. TEM images of the ZrO₂-coated NiO also showed sticks and hexagonal sheets the same as those of 300-NiO, except that some small spherical grains were observed (Fig. 3c). Even after higher temperature treatment, the sticks and hexagonal sheets were still observed, except that the thickness of the nano-sheets increased (Fig. 3d).
To clarify the nanostructure of the materials, a high resolution TEM image of the 600 °C-treated ZrO₂-coated NiO was taken and analyzed. It clearly showed that the hexagonal sheet was coated by small spherical particles (Fig. 4a). The size of the particles was around 4-6 nm. Furthermore, the lattice image of a coated spherical particle showed a d-spacing of 0.36 nm, corresponding to the (0 1 1) direction of monoclinic ZrO₂ (Fig. 4b). The d-spacing of the hexagonal sheet was 0.24 nm, corresponding to the (1 1 1) direction of cubic NiO (Fig. 4c). Further analysis of the composition of the spherical-particle-coated nano-sheet by EDX spectra revealed that the portion without spherical particles was nickel-rich (Fig. 4d) and that with spherical particles was zirconium-rich (Fig. 4e). Since the amount of ZrO₂ is only 1/30th of that of NiO (in mol), the thickness of the ZrO₂ layer would be much smaller than the size of the NiO nanoparticles. It is suggested that the ZrO₂ particles shown in Fig. 4a and b are exceptional, rather than typical, in size. Zirconium 1-propoxide would mainly react with the hydroxyl groups on the surface of β-Ni(OH)₂ via hydrolysis and condensation reactions to form the ZrO₂ layer; the discrete ZrO₂ particles are instead formed, exceptionally, via hydrolysis and condensation of zirconium 1-propoxide in solution.
The estimated NiO grain sizes of the synthesized ZrO₂-coated NiO at different temperatures are shown in Fig. 5. The retention of the NiO grain size was clearly effective when compared with that of NiO obtained by heating physically mixed Ni(OH)₂-ZrO₂. After 1000 °C treatment, the grain size of the ZrO₂-coated NiO was retained at around 10 nm, whereas that of the physically mixed NiO-ZrO₂ was larger than 50 nm. Although sintering among the ZrO₂-coated NiO nano-sheets still occurred, it was effectively retarded by the surface ZrO₂.
The nano-sized nature of the ZrO₂-coated NiO was also evidenced by nitrogen adsorption measurements. For the 600 °C-treated ZrO₂-coated NiO, the BET surface area was as high as 120.53 m² g⁻¹, almost seven times higher than that of the 600-NiO (17.52 m² g⁻¹). Even after 1000 °C treatment, the BET surface area of the ZrO₂-coated NiO was still as high as 42.64 m² g⁻¹.
From the above analysis, the formation and sintering behaviour of NiO can be summarized. First, the hexagonal β-Ni(OH)₂ was treated at a low temperature of 300 °C to form hexagonal NiO nano-sheets, which were stacks of NiO grains. However, the size of the NiO grains increased seriously and the sheet-like shape collapsed soon after heat treatment at temperatures slightly higher than 300 °C. On the other hand, the ZrO₂-coated β-Ni(OH)₂ was also treated at 300 °C to form the ZrO₂-coated NiO. Retention of the nano-sheets was clearly shown even on heating at higher temperatures compared to the uncoated material. This particular behaviour of the synthesized ZrO₂-coated NiO nano-sheets is attributed to the surface ZrO₂ nanoparticles, which limited the sintering among the NiO nano-sheets. Even after 1000 °C treatment, the shape and size of the nano-sheets were still preserved, except for a slight increase in the thickness of the nano-sheets. However, the estimated size of the NiO grains was not consistent with the size and shape observed by TEM; this was attributed to the observed nano-sheets being stacks of NiO grains.
Conclusions
In this work, we have demonstrated that the retention of the shape and size of β-Ni(OH)₂ nano-sheets during their transformation to NiO nano-sheets was achieved by the developed method. The nanostructure was retained even after 1000 °C treatment owing to the existence of ZrO₂ clusters on the surface of the NiO nano-sheets. This development also opens up a new way to control the shape and size of metal oxides at high temperatures, which is a critical issue in the development of anode materials for SOFC applications. | 2014-10-01T00:00:00.000Z | 2006-11-11T00:00:00.000 | {
"year": 2006,
"sha1": "26214b22cb868432113c514a92c6cc94362c489f",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1007/s11671-006-9025-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26214b22cb868432113c514a92c6cc94362c489f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
251470108 | pes2o/s2orc | v3-fos-license | Modulation of osteogenic differentiation by Escherichia coli-derived recombinant bone morphogenetic protein-2
Recombinant human bone morphogenetic protein-2 (rhBMP-2), a key regulator of osteogenesis, induces the differentiation of mesenchymal cells into cartilage or bone tissues. Early orthopedic and dental studies often used mammalian cell-derived rhBMP-2, especially from Chinese hamster ovary (CHO) cells. However, CHO cell-derived rhBMP-2 (C-rhBMP-2) presents disadvantages such as high cost and low production yield. To overcome these problems, Escherichia coli-derived BMP-2 (E-rhBMP-2) was developed; however, the E-rhBMP-2-induced signaling pathways and gene expression profiles during osteogenesis remain unclear. Here, we investigated the E-rhBMP-2-induced osteogenic differentiation pattern in C2C12 cells and elucidated the differences in biological characteristics between E-rhBMP-2 and C-rhBMP-2 via surface plasmon resonance, western blotting, qRT-PCR, RNA-seq, and alkaline phosphatase assays. The binding affinities of E-rhBMP-2 and C-rhBMP-2 towards the BMP receptors were similar, both being confirmed to be in the nanomolar range. However, the phosphorylation of Smad1/5/9 at 3 h after treatment with E-rhBMP-2 was significantly lower than that on treatment with C-rhBMP-2. The expression profiles of osteogenic marker genes were similar in both the E-rhBMP-2 and C-rhBMP-2 groups, but the gene expression level in the E-rhBMP-2 group was lower than that in the C-rhBMP-2 group at each time point. Taken together, our results suggest that the osteogenic signaling pathways induced by E-rhBMP-2 and C-rhBMP-2 both follow the general Smad signaling pathway, but the difference in intracellular phosphorylation intensity results in distinguishable transcription profiles of osteogenic marker genes and in the biological activities of each rhBMP-2. These findings provide an extensive understanding of the biological properties of E-rhBMP-2 and the signaling pathways during osteogenic differentiation.
Introduction
Urist and colleagues were the first to identify the phenomenon of bone formation by auto-induction in a mineralized bone matrix in 1965; they named the matrix glycoprotein that induced the differentiation of mesenchymal cells into bone marrow cells and bone as bone morphogenetic protein (BMP) (Urist 1965; Urist and Strates 1971). Various studies on the isolation and purification of BMPs followed, and BMP-2 was the first to be derived from mammalian cells via cloning in 1988 (Wozney et al. 1988). Analyses of cDNA and amino acid sequences identified BMP-2 as a protein belonging to the transforming growth factor-β (TGF-β) superfamily, whose proteins show biological activity in homo-dimeric form when binding to type I and type II serine-threonine kinase receptors (BMP receptor types 1 and 2, respectively) (Quaas et al. 2018).
Recombinant human BMP-2 (rhBMP-2) induces the differentiation of mesenchymal cells into cartilage or bone tissues and is a key regulator of osteogenesis (Kirker-Head 2000). Various clinical applications utilizing the functions of rhBMP-2 have been attempted in orthopedics and dentistry (Namikawa et al. 2005). The most widely recognized commercial product based on rhBMP-2 is Infuse™, which has already received FDA approval (Ong and Bouazza-Marouf 2000; Even et al. 2012).
Early studies often used mammalian cell-derived rhBMP-2, especially from Chinese hamster ovary cells. However, C-rhBMP-2 presents disadvantages such as the high cost of culture and purification as well as a low production yield. Some studies have attempted to address these disadvantages by producing rhBMP-2 in Escherichia coli to achieve high production, and were able to refold rhBMP-2 into its active dimeric form through in vitro renaturation (Ruppert et al. 1996; Kübler et al. 1998). The most notable difference between C-rhBMP-2 and E. coli-derived rhBMP-2 (E-rhBMP-2) is the presence or absence of post-translational glycosylation. Proteins produced from Chinese hamster ovary (CHO) cells contain glycans in their final expressed form, whereas proteins produced from E. coli do not. Various studies have reported that E-rhBMP-2 exhibits biological activities similar to those of C-rhBMP-2, despite the structural differences. According to these studies, C-rhBMP-2 has a higher in vitro potency but similar in vivo efficacy compared to E-rhBMP-2 (Fung et al. 2019; Kim et al. 2011; Suzuki et al. 2020).
The BMP-2 signaling pathway starts with the BMP-2 ligand forming a complex with BMP type I and II receptors on the cell surface. Subsequently, the serine/threonine kinase of the BMP type I receptor is activated and intracellular receptor-regulated Smads (R-Smads) are phosphorylated. Phosphorylated R-Smad subsequently forms a complex with the common mediator Smad, Smad4. The complex is then transferred into the nucleus and initiates the expression of transcription factors that induce differentiation into osteocytes and chondrocytes (Wang et al. 2014;Pasero et al. 2012;Miyazono et al. 2010;Prashar et al. 2014). Moreover, it has been reported that E-rhBMP-2 may have the same signaling pathway as C-rhBMP-2, based on a phosphorylation assay of R-Smads and osteogenic gene expression studies (Yano et al. 2009). Recently, whole genome RNA sequencing (RNA-seq) based on high-throughput next-generation sequencing (NGS) technology was applied to identify the transcriptomes associated with osteogenesis (Han et al. 2018;Shigemizu et al. 2020;Shaik et al. 2019;Zhou et al. 2021). Additionally, the rhBMP-2-dependent gene regulatory network was analyzed by RNA-seq to characterize the role of differentially expressed transcription factors on osteoblast differentiation, but the difference induced by E-rhBMP-2 and C-rhBMP-2 in biological activity and osteogenic gene expression profile during whole in vitro osteogenesis remains unclear (Yu et al. 2021;Wei et al. 2020).
This study aimed to provide information on the characteristics of E-rhBMP-2 during osteogenic differentiation in an in vitro system compared with the characteristics of C-rhBMP-2. Herein, the signaling pathway induced by E-rhBMP-2 was investigated, and the equilibrium dissociation rate constants (KD) with three types of BMP-2 receptors were determined. The phosphorylation profile of R-Smad1/5/9 in the cytoplasm and the expression profile of osteogenic marker genes were also evaluated. Moreover, we conducted an in vitro comparison of the expression levels of alkaline phosphatase (ALP), a representative osteogenic marker protein, between E-rhBMP-2 and C-rhBMP-2. Our results provide information on the osteogenic signaling pathway and biological characteristics induced by E-rhBMP-2.
Preparation of rhBMP-2
C-rhBMP-2 was purchased from National Institute for Biological Standards and Control (Hertfordshire, UK). The C-rhBMP-2 was prepared by reconstitution with 1 mL of distilled water in accordance with the manufacturer's protocol. E-rhBMP-2, a homo-dimeric protein with a mature rhBMP-2 sequence, was provided by Daewoong Pharmaceutical Co., Ltd. (Seoul, Republic of Korea). The E-rhBMP-2 was prepared in a buffer containing 25 mg/mL glycine, 3.7 mg/mL glutamic acid, 5 mg/ mL sucrose, 0.1 mg/mL NaCl, and 0.1 mg/mL polysorbate 80.
Surface plasmon resonance analysis
Surface plasmon resonance (SPR) measurements were performed using a Biacore T200 instrument (GE Healthcare, Pittsburgh, PA, USA). BMP receptor type 1A (BMPR1A), BMP receptor type 1B (BMPR1B), and BMP receptor type 2 (BMPR2) were purchased from Thermo Fisher Scientific Co. (Rockford, IL, USA). A mixture of 0.4 M 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide and 0.1 M N-hydroxysuccinimide was injected over both the sample and reference channels for 7 min at a flow rate of 10 µL/min. The immobilization target level was set to 500 RU for each ligand (E-rhBMP-2 and C-rhBMP-2). E-rhBMP-2 and C-rhBMP-2 stock samples were diluted to 5.9 µg/mL and 0.5 µg/mL, respectively, in 10 mM sodium acetate (pH 6.0) buffer. The prepared ligand samples were injected and immobilized on the sample channel only. To deactivate the surface, 1 M ethanolamine (pH 8.5) was injected into both the sample and reference channels for 7 min at 10 µL/min. All analyte samples (BMPR1A, BMPR1B, and BMPR2) were diluted in a concentration range of 10-2000 nM. Samples were injected into the sample and reference channels at a flow rate of 30 µL/min for an association phase of 120 s, followed by 120 s of dissociation, to determine the association time, dissociation time, and concentration range of analysis. To test the regeneration conditions, NaOH in the concentration range of 10-50 mM was injected for 10 s at a flow rate of 30 µL/min. A 1:1 binding model, describing one molecule of analyte binding to a single ligand molecule, was used to calculate the KD.
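For illustration, the sketch below numerically integrates the 1:1 Langmuir interaction model that the Biacore evaluation software fits to such sensorgrams; the rate constants and analyte concentration are placeholders, not the measured values.

```python
import numpy as np

def simulate_1to1(ka, kd, conc, r_max, t_assoc=120.0, t_dissoc=120.0, dt=0.1):
    """Euler integration of dR/dt = ka*C*(Rmax - R) - kd*R over both phases."""
    n1, n2 = int(t_assoc / dt), int(t_dissoc / dt)
    r, trace = 0.0, []
    for i in range(n1 + n2):
        c = conc if i < n1 else 0.0          # analyte washed out at dissociation
        r += dt * (ka * c * (r_max - r) - kd * r)
        trace.append(r)
    return np.arange(n1 + n2) * dt, np.array(trace)

ka, kd = 1.0e5, 1.0e-2                       # placeholder rates: 1/(M*s), 1/s
t, response = simulate_1to1(ka, kd, conc=500e-9, r_max=100.0)
print("KD =", kd / ka, "M")                  # equilibrium dissociation constant
```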
C2C12 cell culture
Myoblastic C2C12 cells were purchased from the American Type Culture Collection (Rockville, MD, USA). C2C12 cells were grown in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin (PS) at 37 °C and 5% CO 2 humidified air using T75 flasks. When the cells reached confluence, they were used for other experiments. All reagents for cell culture were purchased from Gibco ™ (Langley, VA, USA).
Western blot analysis
Cultured C2C12 cells were collected and seeded in 6-well plates (2 × 10⁵ cells/well). The plates were then incubated at 37 °C and 5% CO₂ for 24 h. E-rhBMP-2 and C-rhBMP-2 solutions were prepared at a concentration of 100 ng/mL by diluting with DMEM containing 2% bovine serum (BS). The prepared solutions were added to each well of a 6-well plate and incubated at 37 °C and 5% CO₂. The cells were collected at 0, 3, and 6 h after rhBMP-2 treatment. The collected cells were lysed using a lysis buffer (50 mM Tris-HCl (pH 6.8), 2% SDS, 0.001% bromophenol blue, and 10% glycerol). The cell lysate was centrifuged (12,000×g) at 4 °C, after which the supernatant was recovered and the protein concentration in the supernatant was measured using a BCA protein assay kit (Thermo Fisher Scientific Co.) according to the manufacturer's instructions. Electrophoresis was carried out at 200 V for 30 min after loading 200 μg of protein into each well of 4-12% Bis-Tris gels (Invitrogen, Carlsbad, CA, USA). The separated proteins were then transferred from the gel onto PVDF membranes (Thermo Fisher Scientific Co.). The membranes were blocked for 1 h with 5% skim milk, followed by reaction with an anti-phospho-Smad1/Smad5/Smad9 rabbit antibody (Danvers, MA, USA) diluted 1:1000 at 4 °C for 24 h. The membranes were then incubated with an anti-rabbit HRP secondary antibody (Cambridge, MA, USA) diluted 1:2000 at 25 °C for 2 h.
qRT-PCR analysis
Cultured C2C12 cells were collected, and 2 mL of the cell solution (2 × 10⁵ cells/mL) was seeded in 6-well plates. The cells were incubated at 37 °C and 5% CO₂ for 24 h. E-rhBMP-2 and C-rhBMP-2 solutions were prepared at a concentration of 1000 ng/mL by diluting with DMEM containing 2% BS. After treating the cells with the prepared rhBMP-2 solutions, they were incubated at 37 °C and 5% CO₂ and recovered after 18 and 24 h of incubation. After using the PureLink™ RNA mini kit (Invitrogen) to extract mRNA from the collected cells, a High-Capacity RNA-to-cDNA™ kit (Thermo Fisher Scientific Co.) was used to obtain cDNA.
The primers for osteogenic genes (Runt-related transcription factor 2 (Runx2) and osteocalcin (OCN)) and β-actin gene (internal control) were purchased from Thermo Fisher Scientific Co. (Runx2: Mm00501580_m1, OCN: Mm03413826_mH, β-actin: Mm00607939_s1). PCR was performed using TaqMan ™ Fast Advanced Master Mix (Applied Biosystems, Foster City, CA, USA), and the expression levels of Runx2 and OCN genes relative to β-actin were measured according to the manufacturer's instructions. For qRT-PCR analysis, experiments were performed in triplicate and values were expressed as the mean ± standard deviation (SD). Minitab 14 (Minitab Ltd, USA) was used to perform statistical analyses. Student's t-test was used to analyze differences in the expression level of the Runx2 and OCN genes between E-rhBMP-2 and C-rhBMP-2 at each time point. A value of P < 0.05 was considered significant.
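The paper defers the relative quantification to the manufacturer's instructions; assuming the comparative-Ct (2^-ΔΔCt) method commonly used for TaqMan assays, a minimal sketch with placeholder Ct values would be:

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change of a target gene vs. untreated control, normalized to beta-actin."""
    d_ct_sample = ct_target - ct_actin
    d_ct_control = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Placeholder Ct values: ddCt = (24.0 - 18.0) - (27.5 - 18.2) = -3.3,
# i.e. ~9.8-fold up-regulation relative to the control.
print(round(relative_expression(24.0, 18.0, 27.5, 18.2), 1))
```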
ALP assay
Cultured C2C12 cells were collected, and the cell suspension was diluted with culture media to a concentration of 1 × 10⁵ cells/mL. After adding 100 µL of the C2C12 cell solution (1 × 10⁵ cells/mL) to each well of a 96-well plate, the cells were fixed by incubation at 37 °C and 5% CO₂ for 24 h. E-rhBMP-2 and C-rhBMP-2 were prepared at concentrations of 5-1000 ng/mL (5.0, 23.3, 37.3, 59.6, 95.4, 152.6, 244.1, 390.6, 625.0, and 1000.0 ng/mL) by diluting with DMEM containing 2% BS. The prepared rhBMP-2 solutions at each concentration were used to treat the C2C12 cell solutions in triplicate in a 96-well plate. After 72 h of incubation, all solutions in the 96-well plate were removed and 100 µL of lysis buffer (100 mM glycine, 1 mM magnesium chloride, 1 mM zinc chloride, and 1% tergitol, pH 9.6) was added to each well to lyse the cells. Upon completion of lysis, 50 µL of p-nitrophenyl phosphate was added to each well, followed by incubation at 25 °C for 30 min. Upon completion of the reaction, the absorbance at 405 nm was measured using a SpectraMax® M3 microplate reader (Molecular Devices LLC, Sunnyvale, CA, USA). Parallel-line assays were performed based on the measured dose-response curves using PLA 2.0 (Stegmann Systems GmbH, Germany). The experiments were performed on three separate plates with three replicates each.
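A minimal sketch of the 4-parameter logistic fit used to extract EC50 from such dose-response data is shown below; the concentration grid matches the dilutions listed above, but the absorbance values are invented placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """4-parameter logistic: rises from `bottom` to `top` around `ec50`."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([5.0, 23.3, 37.3, 59.6, 95.4, 152.6, 244.1, 390.6, 625.0, 1000.0])
od405 = np.array([0.08, 0.10, 0.14, 0.22, 0.35, 0.52, 0.70, 0.82, 0.88, 0.90])

popt, _ = curve_fit(four_pl, conc, od405, p0=[0.05, 0.95, 100.0, 1.0], maxfev=10000)
print("EC50 =", round(popt[2], 1), "ng/mL")
```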
Library construction and sequencing
Cultured C2C12 cells were collected and seeded in 6-well plates (2 × 10 5 cells/well). The plates were incubated at 37 °C and 5% CO 2 for 24 h. E-rhBMP-2 and C-rhBMP-2 solutions were prepared at a concentration of 100 ng/mL by diluting with DMEM containing 2% BS. After treating each well of the 6-well plate with the prepared solutions, the plate was incubated at 37 °C and 5% CO 2 . The cells were collected after 3, 6, 12, and 24 h of incubation and mRNA was extracted as described in the qRT-PCR analysis section.
The acquired mRNA was purified using the Dynabeads ® mRNA Purification Kit (Invitrogen) to deplete rRNA and enrich poly(A) + RNA using oligo d(T). Enriched mRNA was used for library construction using the MGIEasy RNA Directional Library Prep Kit (MGI, Shenzhen, China), according to the manufacturer's instructions. The directional RNA libraries were then sequenced using the DNBSEQ-T7 sequencing instrument (BGI Genomics, China), according to the manufacturer's instructions, yielding 150 bp paired-end reads. Sequencing results from this study have been deposited in the NCBI Short Read Archive under the accession number PRJNA812069.
Sequence alignment and gene expression analysis
Adapter sequence and low-quality bases were removed using the Cutadapt tool (version 2.9) (Martin 2011) and sequences under 36 bp were discarded by using the Trimmomatic tool (version 0.39) (Bolger et al. 2014). After trimming, reads were aligned to the mouse reference genome (mm10) and annotated by Ensembl v.102 using STAR (version 2.7.3a) with default parameters (Dobin et al. 2012). The read count and transcript per million values for individual transcripts were produced using the RSEM (version 1.3.1) software tool with default parameters (Li and Dewey 2011). The Bioconductor package edgeR was applied to identify differentially expressed genes (DEGs) with a P-value of 0.05 and fold change ≥ 2 criteria (Robinson et al. 2010).
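The differential-expression statistics themselves come from edgeR in R; purely to illustrate the stated cut-offs (P < 0.05, fold change >= 2), a downstream filter on an exported results table might look like the following, where the file name and column names are assumptions:

```python
import numpy as np
import pandas as pd

# Hypothetical edgeR export with one row per gene: gene, logFC, PValue.
results = pd.read_csv("edger_results.csv")

# Keep genes passing both criteria: P < 0.05 and |fold change| >= 2
# (|logFC| >= log2(2) = 1).
is_deg = (results["PValue"] < 0.05) & (results["logFC"].abs() >= np.log2(2))
print(results[is_deg].shape[0], "differentially expressed genes")
```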
Effects of rhBMP-2 on Runx2 and OCN expression
The mRNA expression levels of the commonly used marker genes Runx2 and OCN were identified in C2C12 cells using qRT-PCR to investigate the effects of E-rhBMP-2.

Table 1: Binding kinetic parameters of E-rhBMP-2 and C-rhBMP-2 (ka, association rate; kd, dissociation rate; KD, equilibrium dissociation rate constant; Rmax, analyte binding capacity of the surface)
Effects of rhBMP-2 on ALP activity
ALP is a typical osteogenic marker known to be expressed during osteoblastic differentiation. The ALP activity of C2C12 cells was measured after 72 h according to the concentrations of E-rhBMP-2 and C-rhBMP-2. The ALP activity of C2C12 cells increased in a dose-dependent manner from 10 to 500 ng/mL with increasing concentrations of E-rhBMP-2 and C-rhBMP-2 (Fig. 3). The EC50 values of E-rhBMP-2 and C-rhBMP-2, determined from 4-parameter curves based on a total of three assays, were 78.6 ± 6.69 ng/mL and 22.1 ± 2.55 ng/mL, respectively. Moreover, the potency of E-rhBMP-2 relative to that of C-rhBMP-2, determined by parallel-line assay, was 0.306 ± 0.02 (Table 2).

Fig. 1: Effects of E-rhBMP-2 and C-rhBMP-2 on intracellular receptor-regulated Smads in C2C12 cells. A total of 0.2 mg protein was loaded onto each lane, and an anti-phospho-Smad1/5/9 antibody was applied to detect phospho-Smad1/5/9. Bands for Smad1 and β-actin are shown as loading controls. Bands were detected using an Amersham Imager 600 (GE Healthcare). (a) E-rhBMP-2, Escherichia coli-derived recombinant human bone morphogenetic protein-2; (b) C-rhBMP-2, rhBMP-2 derived from CHO cells; (c) relative phosphorylation, calculated by normalizing the quantification values of the phospho-Smad1/5/9 and Smad1 band intensities to the β-actin band intensity.
Overall gene expression profile induced by rhBMP-2
RNA-seq analysis was performed for a more in-depth understanding of the overall gene expression following E-rhBMP-2 and C-rhBMP-2 treatment. The RNA-seq data were analyzed in two ways. First, DEG analysis was performed to compare the gene profile of C2C12 cells between E-rhBMP-2 and C-rhBMP-2 treatment for each time. Out of the 21,868 genes in total, only two DEGs between E-rhBMP-2 and C-rhBMP-2 were identified at 3, 6, and 12 h. The number of DEGs rapidly increased to 85 at 24 h, including 11 up-regulated genes in E-rhBMP-2 and 74 up-regulated genes in C-rhBMP-2 (Table 3).
Next, the expression of osteogenic marker genes in C2C12 cells not treated with rhBMP-2 (control) was compared with that of C2C12 cells treated with C-rhBMP-2 or E-rhBMP-2. Significant changes were observed in the expression levels of the nine osteogenic marker genes at 24 h after E-rhBMP-2 and C-rhBMP-2 treatment (Table 4). The osteogenic marker genes showed a similar profile in both the E-rhBMP-2 and C-rhBMP-2 groups. However, the expression level of each gene was lower under E-rhBMP-2 treatment than under C-rhBMP-2 treatment. Moreover, the expression levels of the nine osteogenic marker genes at 24 h after C-rhBMP-2 treatment were significantly different compared to those in the control, whereas the expression levels of three genes (TGF-β-inducible early gene (TIEG1), Runx2, and OPG) at 24 h after E-rhBMP-2 treatment were not significantly different. The expression profiles of the nine osteogenic marker genes at 3-24 h after E-rhBMP-2 treatment were similar to those after C-rhBMP-2 treatment (Fig. 4). Changes in the expression levels of immediate early response genes were similar between the E-rhBMP-2 and C-rhBMP-2 groups, whereas the expression level of the old astrocyte specifically induced substance (OASIS) gene after 24 h was approximately 4 times higher under treatment with C-rhBMP-2 than under E-rhBMP-2 treatment.

Fig. 3: Effect of E-rhBMP-2 and C-rhBMP-2 concentration on ALP activity in C2C12 cells. Optical density values were recorded at 405 nm after 72 h of culture.

Table 2: Comparison of in vitro biological activity between E-rhBMP-2 and C-rhBMP-2
            Slope         EC50 (ng/mL)
E-rhBMP-2   0.93 ± 0.14   78.6 ± 6.69
C-rhBMP-2   1.46 ± 0.13   22.1 ± 2.55
Relative potency of E-rhBMP-2 based on C-rhBMP-2: 0.306 ± 0.02
(EC50, half-maximal effective concentration; values were calculated by analyzing the results using a 4-parameter curve.)

Table 3: Differentially expressed genes between E-rhBMP-2 and C-rhBMP-2 in C2C12 cells (DEG, differentially expressed genes; the DEG ratio is presented as the percentage of DEGs out of the total gene count (21,868) at each time point)
Discussion
This study aimed to characterize the biological properties and signaling pathways of E. coli-derived rhBMP-2 during osteogenesis. The study used SPR analysis, western blot assays, qRT-PCR, RNA-seq analysis, and ALP assays to analyze and identify the binding affinity between each rhBMP-2 and BMP receptors on the cell surface (BMPR1A, BMPR1B, and BMPR2), subsequent intracellular phosphorylation, expression profiles of osteogenic genes in the nucleus, and ALP (differentiated osteoblast marker) activity.
The major structural difference between E-rhBMP-2 and C-rhBMP-2 is the absence or presence of an N-glycan on the protein surface. However, the absence of the N-glycan on E-rhBMP-2 appears to have little or no impact on binding to the BMP receptors, since the KD values of E-rhBMP-2 were determined to be in the nanomolar range (10⁻⁷ to 10⁻⁹ M) and no significant difference in binding affinity was observed. Similar results were also reported in a molecular-perspective study, which claimed that the N-glycans are not directly related to the rhBMP-2 epitope for binding the BMP receptors (Kirsch et al. 2000).
Western blot and qRT-PCR analysis revealed that the binding of E-rhBMP-2 to BMP receptors could stimulate the osteogenic response via the classical Smad signaling pathway similar to that of C-rhBMP-2. Phosphorylation of R-Smads (Smad1/5/9) was detected and the intensity was increased up to 6 h in the C2C12 cells after treatment with E-rhBMP-2 and C-rhBMP-2. Since the phosphorylated R-Smads subsequently form a complex with Smad4 and are transferred into the nucleus, up-or downregulation of the specific genes should be observed (Wang et al. 2014;Pasero et al. 2012;Miyazono et al. 2010;Prashar et al. 2014). In this study, a significant increase in the mRNA expression level of the osteogenic marker genes, Runx2 and OCN, was achieved indicating that E-rhBMP-2 was able to serve as an activator for osteogenic differentiation.
However, the intensity of R-Smad phosphorylation and the marker gene expression profile induced by E-rhBMP-2 were clearly different from those of C-rhBMP-2. Smad1/5/9 phosphorylation at 3 h after C-rhBMP-2 treatment was higher than that on treatment with E-rhBMP-2. Based on these findings, it was determined that E-rhBMP-2 follows the signaling pathway regulated by Smads, but the initial intracytoplasmic phosphorylation rate is lower than that of C-rhBMP-2. In the case of Runx2 and OCN, the gene expression levels according to treatment time showed no statistically significant differences based on the type of rhBMP-2, but the expression level of Runx2 at 18 h after C-rhBMP-2 treatment was higher than that on E-rhBMP-2 treatment. Similar to the western blot results, these results demonstrate that the rate of osteogenic gene expression on E-rhBMP-2 treatment was slower than that on C-rhBMP-2 treatment.
ALP activity was measured to compare the potency of E-rhBMP-2 and C-rhBMP-2 in vitro. The relative potency of C-rhBMP-2 was approximately three times higher than that of E-rhBMP-2, which is consistent with the findings of previous studies (Fung et al. 2019;Suzuki et al. 2020). This difference in potency could be attributed to fast phosphorylation and the high expression levels of osteogenic genes found in the C-rhBMP-2 group from the phosphorylation results of R-Smads and the profiles of gene expression reported in a previous study (Yano et al. 2009). However, although the C2C12 cells were treated with high concentrations of E-rhBMP-2 and C-rhBMP-2, both groups showed similar levels of ALP activity, indicating that there was no difference between E-rhBMP-2 and C-rhBMP-2 with respect to efficacy. Many studies have reported that under in vivo conditions, when a sufficient amount of rhBMP-2 was exposed to a wound site with an effective carrier system like collagen or biodegradable polymer, E-rhBMP-2 and C-rhBMP-2 showed similar bone-inducing activities (Huh et al. 2011;Bessho et al. 2000). The influence of E-rhBMP-2 and C-rhBMP-2 on overall gene expression was revealed by RNA-seq in a time-dependent manner. The number of DEGs between E-rhBMP-2 and C-rhBMP-2 groups at 3, 6, and 12 h after rhBMP-2 treatment was only two out of a total of 21,868 genes. On the contrary, 85 DEGs were found at 24 h after rhBMP-2 treatment; this indicates that the difference in gene expression profiles according to rhBMP-2 types occurred dramatically at 24 h post-treatment. Among the DEGs, several osteogenic markers were identified and classified into two groups depending on the response time. Immediate early response genes include inhibitors of differentiation or inhibitor of DNA binding 3(Id3), OASIS, and TIEG, and late response genes include type 1 collagen (Col1), osterix (OSX), osteoprotegerin (OPG), and osteopontin (OPN), along with Runx2 and OCN (Miyazono et al. 2010;Sun et al. 2015).
The expression profiles of nine typical osteogenic marker genes were similar between the E-rhBMP-2 and C-rhBMP-2 groups. However, the level of gene expression at each time point was higher in the C-rhBMP-2 group compared to that in the E-rhBMP-2 group. In particular, the expression level of OASIS at 24 h after C-rhBMP-2 treatment was approximately four times higher than that in the E-rhBMP-2 group. OASIS is a transcription factor that activates the expression of type 1 collagen during osteogenesis (Kondo et al. 2012); therefore, the expression level of Col1 at 24 h was higher in the C-rhBMP-2 group compared to that in the E-rhBMP-2 group. It is believed that this difference in the expression level of immediate-early response genes has been affected by intracytoplasmic phosphorylation rate, as confirmed by western blot analysis. Similarly, rapid intracytoplasmic phosphorylation rate determined in C-rhBMP-2 was related to the relatively high expression of osteogenic markers genes like late response osteogenic marker genes OCN and OPG in C-rhBMP-2 as compared to E-rhBMP-2, as confirmed by qPCR and RNA-seq analysis.
In conclusion, the biochemical analysis confirmed that the osteogenic signaling pathway induced by E-rhBMP-2 or C-rhBMP-2 follows the Smad signaling pathway. Moreover, the difference in potency found between E-rhBMP-2 and C-rhBMP-2 could be attributed to differences in intracellular phosphorylation and in the transcription rate of osteogenic genes, rather than to the interaction between the rhBMP-2 proteins and the BMP receptors. As this study focused on in vitro results, we were unable to describe the signaling pathway of E-rhBMP-2 during osteogenesis in vivo. Despite this limitation, our study contributes to enhancing the understanding of the mechanism of E-rhBMP-2-induced osteogenesis and enables the application of E-rhBMP-2 in the field of bone regeneration medicine. | 2022-08-11T06:16:10.610Z | 2022-08-10T00:00:00.000 | {
"year": 2022,
"sha1": "9ad37b1ba1ccc193d31180958b9fe177cf4bd24b",
"oa_license": "CCBY",
"oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/s13568-022-01443-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60150ad77c3746ecc446759e97b6b9fcdb19920c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anaesthetic Management for Cataract Surgery in VACTERL Syndrome Case Report
Summary: VACTERL comprises V: vertebral anomalies, A: anal malformation, C: cardiovascular defect, TE: tracheal and esophageal malformation, R: renal agenesis, and L: limb anomalies. An eight-year-old girl with VACTERL association underwent cataract surgery under general anaesthesia. She had multiple congenital anomalies, including esophageal atresia, imperforate anus (corrected), a single kidney, and radial aplasia. Anticipating the problems of gastro-esophageal reflux and chronic renal failure, she was managed successfully.
Introduction
VATER association is a set of congenital malformations occurring in different combinations (at least two). These present as major functional impairments and appear during the first year of life. 1 VATER is an acronym representing the first letter of each feature in the association. Because some of the malformations are non-causal, it remains unknown whether this pattern is a sequence or a syndrome; for such situations the word association has been coined. VATER has now been expanded to VACTERL.
[V: vertebral anomalies, A: anal malformation, C: cardiovascular defect 2 , TE: tracheal and esophageal malformation, R: renal agenesis 3 , L: limb anomalies.] These malformations predominantly occur in association, presenting sporadically in families without a previous history. 4 Here we report the anaesthetic management of a rare case of VACTERL syndrome for cataract surgery.
Case report
An eight-year-old girl weighing 14 kg presented with diminished vision for six months. Cataract was diagnosed in both eyes. She was a full-term, normally delivered child, detected to have esophageal atresia and imperforate anus a few hours after birth. She also had an absent radius. A single kidney was incidentally detected on sonography.
At 30 hours of life she had undergone feeding gastrostomy and colostomy, and an esophageal pouch was created. At the age of 1½ years she underwent anal pull-through surgery and the colostomy was closed. At the age of 2 years, the esophageal atresia was corrected by forming a tube of colon anastomosed with the stomach; subsequently the esophageal pouch was closed. At the age of 6 years she presented with anuria and was diagnosed with renal hypertension and chronic renal failure due to the single kidney. She was on hemodialysis, initially once a week and now twice a week, and had undergone hemodialysis the day prior to surgery. She had been on oral amlodipine 5 mg BD and losartan 50 mg ½ HS for 2 years. She had been deaf for 8 months, using hearing aids, and now presented with diminution of vision in both eyes, diagnosed as bilateral cataract.
On clinical examination (Fig. 1-4), she was malnourished, weighing 14 kg with a height of 100 cm. She was deaf, non-cooperative, and unable to stand due to weakness. Her left arm was short due to radial aplasia. The right arm and forearm had an AV fistula. Her pulse rate was 120/min and regular; blood pressure was 160/100 mmHg measured on the left upper arm, and body temperature was normal. She was pale; no icterus or cyanosis was observed. Her spine was normal. In the cardiovascular system, S1/S2 were normal with a soft systolic murmur. On respiratory examination, air entry was equal on both sides with no added sounds. Mallampati classification could not be elicited, as the child was uncooperative.
Anaesthesia management
The patient was classified as ASA grade IV in view of her multiple anomalies and chronic renal failure. She was dialysed the day prior to surgery and blood investigations were done. The patient was an operated case of esophageal atresia, in which a tube of colon was connected from the esophagus to the stomach. Since the esophageal sphincter does not exist in such patients, there is always a risk of regurgitation. Anticipating this risk, the patient was kept nil per orally for 12 hours. An antacid (ranitidine 2-4 mg/kg; one-quarter tablet) and anti-hypertensive drugs were given with sips of water two hours before surgery. Peripheral IV cannulation with a 22 G cannula was done and a DNS infusion was started. High-risk consent was obtained and the patient was shifted to the operation theatre. A nasogastric tube could not be passed, as the child was uncooperative.

A cardioscope, pulse oximeter, and NIBP were attached to the patient. Glycopyrrolate 60 mcg and ondansetron 1 mg were given IV. She was preoxygenated with 100% oxygen for 5 min, after which midazolam 0.4 mg and tramadol 30 mg IV were given. The patient was induced with thiopental 100 mg IV, and atracurium 8 mg IV was given. At the time of induction, reflux of regurgitated material was seen in the oral cavity through the transparent mask. The patient was still breathing spontaneously, as the relaxant effect was yet to come, and no IPPV was given. The head was immediately lowered and suction was done with a large-bore catheter. Under direct laryngoscopy, orotracheal intubation was done with a 5 mm I.D. ETT and the patient was ventilated with a Jackson-Rees circuit with 100% oxygen. Air entry was checked, the tube was fixed, and a throat pack was kept. Saturation dropped to 92%, so endotracheal suction was done with saline through the endotracheal tube; no foreign material was retrieved. Hydrocortisone 20 mg and dexamethasone 2.8 mg IV were given. Saturation improved to 98%, and there were no added sounds on chest auscultation. The patient was then ventilated with oxygen (50%), nitrous oxide (50%), and halothane intermittently. Surgery was allowed to start, as the patient was stable, maintaining saturation of 98-99%. Intra-operatively vitals were stable, and all care was taken to avoid hypothermia and fluid overload in view of CRF. Total surgery time was 45 min. A chest X-ray on the table showed clear lung fields. After the return of spontaneous respiration, the throat pack was removed, suction was done, and the block was reversed with neostigmine 0.8 mg and atropine 0.3 mg. The patient was extubated after she was fully awake. Vitals were stable and she was shifted to the PICU for observation. Post-operatively she was kept on antibiotics and humidified oxygen via venti-mask. She did not require bronchodilators or nebulization.
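As a quick cross-check of the weight-based dosing described above, the administered doses can be converted to per-kg values; the sketch below is illustrative arithmetic only, not clinical guidance, and the drug values are taken from the case description.

```python
# Convert the absolute doses given to this 14 kg child into mg/kg for
# comparison with paediatric dosing ranges (illustration only).
weight_kg = 14.0
doses_mg = {
    "thiopental": 100.0,
    "atracurium": 8.0,
    "midazolam": 0.4,
    "tramadol": 30.0,
    "neostigmine": 0.8,
    "atropine": 0.3,
}
for drug, mg in doses_mg.items():
    print("%s: %.1f mg total = %.2f mg/kg" % (drug, mg, mg / weight_kg))
```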
Discussion

In 1975 VATER was expanded to VACTERL 5 , which includes V: vertebral anomalies 70%, A: anal anomalies 80%, C: cardiac problems 53%, TE: tracheo-esophageal atresia 70%, R: renal anomalies 53%, and L: limb anomalies 65%. A radial defect is commonly detected at birth, so it is mandatory 6 to rule out other anomalies. Sporadic association of specific birth defects 7 characterizes the VACTERL syndrome. Quan and Smith first coined the term VATER association in 1973. VATER association is genetically considered a polytopic change, thus related to blastogenesis and different from a monotopic change related to organogenesis 8 . Although the origin of the malformations, whether genetic or predisposed 9 , remains uncertain, all involved structures are genetically normal; even parts secondarily affected by them (spina bifida) are genetically normal. 10 Our patient presented with bilateral cataract. She also had a single kidney and radial agenesis. In early childhood the patient had undergone correction of esophageal atresia, in which colon was used for reconstruction of the esophagus. The disadvantage of this is that the colon has no sphincteric action and its motility is slow, so the risk of gastro-esophageal reflux 11,12 is 50%. A threat to life can occur during induction due to reflux of regurgitated particles and soiling of the lung parenchyma. Anticipating this problem, the patient was kept nil per orally for 12 h and positioned head up. Prokinetics were not given in view of their extra-pyramidal side effects. Adequate pre-oxygenation was done. A large-bore suction and a transparent mask were kept ready, and a vigilant check was kept for food particles regurgitating into the mouth. Thus all steps to avoid aspiration of food particles into the lungs were taken, and soiling of the lungs was avoided. But the situation may not always be the same. To avoid the risk of aspiration, rapid sequence induction with suxamethonium is the ideal technique. Our patient was a case of CRF on dialysis with multiple congenital defects and muscle weakness, so we could not use suxamethonium due to the potential risk of hyperkalemia leading to cardiac arrest. In these patients care must be taken to avoid hyperkalemia, fluid overload, and hypothermia. Due care of the AV fistula is also needed to avoid injury. Abnormalities in the bleeding and coagulation profile can lead to intra-ocular bleeding, as the patient was on dialysis. The patient was anaemic; however, she had lost her vision, so the risk was taken to improve it. After explaining the anaesthesia risk to the relatives and obtaining informed consent, the patient was accepted for surgery.
Patients with tracheo-esophageal fistula are susceptible to respiratory infections due to weakness of the tracheal muscles and hyper-reactive airways. These patients need active respiratory care with physiotherapy and antibiotics, both pre- and post-operatively. When the colon is used for anastomosis, since the esophageal sphincter does not exist, there is always a risk of regurgitation. Silent regurgitation is a real threat to life at induction of anaesthesia even if the patient is adequately starved. Hence, a history of regurgitation even at rest after food should be ruled out pre-operatively; on repeated enquiry, the relatives revealed this history in our patient. This demands vigilance and extra care to avoid aspiration.
In conclusion, the pre-existing anomalies and potential risk of regurgitation in patients with VACTERL syndrome demand careful pre-operative assessment and a skillful anaesthetic technique to avoid fatal complications.
"year": 2009,
"sha1": "f81743445741e8d3a0c1a8725ff7dc73fb947e7c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5d6a9083e8c0e9ab65796ba1ec3aa47c7c59606a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
How to print a crystal structure model in 3D †
We present a simple procedure for the conversion of Crystallographic Information Files (CIFs) into Virtual Reality Modelling Language (VRML2, .wrl) files, which can be used as input files for three-dimensional (3D) printing. This procedure permits facile production of customized full-colour 3D models of X-ray crystal structures of segments of extended structures, including metal-organic frameworks (MOFs), as well as small molecules. The method uses freely available software that runs under Microsoft Windows, MacOSX and Linux operating systems.
Introduction
Three-dimensional (3D) printing or additive manufacturing is a process of making a three-dimensional solid object of virtually any shape from a digital model. This powerful technology is no longer futuristic, as the cost of 3D printing has dropped dramatically over the past decade, making it affordable even to a hobbyist. 3D printing has been touted as ushering in "the third industrial revolution", that of mass customization, wherein virtually any consumer object could be tailored to a previously unprecedented level. 1 This technology will likely revolutionize numerous fields of human activity; as just one example, prosthetic medicine could soon count on implants perfectly matching each individual patient. 3D printing is beginning to affect chemical research as well: Cronin et al. have recently reported the creation of customized 3D-printed "reactionware", the composition and shape of which allow its active participation in the reaction and analysis of products. 2 Chemistry is full of concepts that require three-dimensional understanding, and representing those in two-dimensional PowerPoint slides, journal articles, or on chalkboards inevitably leads to a loss of detail. Crystallography is even more dependent on 3D representations, and most readers of this journal have likely spent numerous hours turning crystal structure models on their computer screens to produce a view that sacrifices the least information. An ability to easily build 3D models of crystal structures is thus clearly needed. In this contribution, we provide step-by-step instructions on how to convert a .cif file, which is a typical end product of crystal structure refinement, into a 3D-printed physical model of a crystal structure. We use MOFF-3 (Fig. 1), one of our previously published extended metal-organic framework (MOF) structures, 3 as an illustration for this method. This procedure is broadly applicable to many other structures; in our two labs, we have printed 3D models of more than thirty different small molecules and MOFs.
Our procedure is neither the only nor the first method for achieving the conversion of crystallographic information files into 3D printed models. 4 Its advantages are: (a) the ability to produce models of both discrete molecules and segments of extended "infinite" structures such as MOFs; (b) the use of freely available and highly intuitive software packages (vide infra) with ample helpful documentation available online; and (c) its reliance on a commercial 3D printing service provided by Shapeways, which obviates the need for an in-house 3D printer.
The main disconnect between the crystal structure manipulation programs and the 3D printing software lies in the mutual incompatibility of file formats. Out of the commonly used crystal structure processing programs, only PyMOL 5 is able to directly export crystal structure data contained in .cif files into the VRML (.wrl) format most commonly used for colour 3D printing. 6 This feature enables the direct printing of small molecule models (see the ESI† for photos and movies of 3D models of selected small molecules: triazolophanes, 7 cyanostars 8 and a cyanostar-[3]rotaxane). However, many crystal structure processing operations, e.g. expansion, addition of multiple unit cells, etc., are rather difficult to do in PyMOL. Our protocol thus resorts to using two programs which together offer greater flexibility in manipulating both the crystal structure and the 3D model. Mercury 9 is used to produce a .pdb file of the crystal structure, which is then imported into Blender, 10 a semi-professional 3D printing program, for additional processing and conversion into a .wrl file. This combination is necessary since Blender appears to be unable to import .pdb files produced by programs other than Mercury.
Procedure
Four separate pieces of software are required for this conversion: Mercury 3.3, 9 Blender 2.62 and 2.69, 10 and the open-source embedded Python Molecular Viewer (ePMV) plugin, which runs molecular-modelling software directly in Blender. 11 In our work, we used the Windows versions of these programs as well as MacOSX (Mercury 3.1.1 and Blender 2.62 and 2.70); since all of the requisite programs are also available for the Linux operating system, it is reasonable to assume that a very analogous procedure should function on this platform as well.
Instructions are as follows:

1. Set user preferences in Blender 2.62. Open Blender 2.62, then click on File > User Preferences > Addons. Make sure that under the Import-export tab, the option Web3D X3D/VRML format is checked. Under the System tab, the options autoPack, ePMV and ePMV synchro should all be checked. Click on Save as Default. As a result, the ePMV and autoPACK buttons will become available in the main window.

2. Set user preferences in Blender 2.69. Open Blender 2.69, then click on File > User Preferences > Addons. Make sure that under the Import-export tab, the options Web3D X3D/VRML format and VRML2 (Virtual Reality Modelling Language) are both checked. Under the Mesh tab, the option 3D Print Toolbox should be checked. Click on Save User Settings.
3. Open the crystal structure's CIF file in Mercury and produce the desired packing (one or more molecules/unit cells). At this stage, it may also be useful to delete disordered atoms or side chains so that only one orientation remains, unless the objective is to highlight the disorder. To delete undesired features of the structure, click on Edit > Edit Structure… > Remove, and then click on the atoms or molecules that need to be removed.
4. The resulting data should be saved as a PDB file (File > Save As…). In our case, only PDB files produced by Mercury could be successfully used in the subsequent steps.
5. Open Blender 2.62 and click on the ePMV button on the top right. As a result, the ePMV interface will appear on the left-side panel. Delete the cube, camera, and light objects in the main Blender window (this is done by simply right-clicking on those objects followed by pressing the Delete button).
6. In the ePMV panel, choose Browse and navigate to the PDB file produced in step 4. Upon loading, a series of dots will appear in the main Blender window; these represent individual atoms.
7. In the Atom/Bond Representation subpanel on the left, choose the desired structural representation; for most organic and inorganic structures, the Atoms or Sticks representations are the most appropriate. The ensuing calculation will take between several seconds and several hours depending on the complexity of the structure. Values for cpk_scale, bs_scale, and bs_ratio, as well as element colours, should be adjusted at this point (if desired), since further changes are not permitted after the file is saved. Changing these parameters will affect all atoms (or bonds) of a given kind; individual atoms can also be modified using the description given in step 10 below.

8. Save the file and export it as an .x3d file: click on File > Export > X3D Extensible 3D (.x3d).

9. Close Blender 2.62 and open Blender 2.69. Import the .x3d file created in the previous step: click on File > Import > X3D Extensible 3D (.x3d).

10. In Object Mode, you can select and delete individual atoms and bonds and adjust the size of any item (Fig. 2, top). For example, to adjust the size of an atom, right-click on the desired atom and choose Scale in the Object Tools on the interface on the right. To change the colour of atoms and bonds, click on the Material icon. Click (−) to remove the original material and (+) to add a new one. The colour can be adjusted using the Diffuse option. You can rename the material and click to save it, then apply it to any item that you want to have the same colour. This feature is particularly useful if certain parts of the structure need to be emphasized.
11. The produced model will most likely need to be resized to dimensions that are practical for printing. To do so, click on the Scene button on the right-side panel (third button from the left; highlighted in red in Fig. 2, bottom). Change Units to Metric. Then select all atoms by pressing A (with the cursor located in the view window), which should result in the entire structure being highlighted in the main Blender window. Click on Join on the left-side panel to connect all of the separate parts, and then use the Scale button in the left-side panel to adjust the size of the structure. We have typically found the originally imported structures to be too large, and most of them needed to be scaled down. It should be noted that the dimensions provided by Blender do not correlate well with the size of the printed model (vide infra), so adjustment of dimensions for 3D printing requires some trial and error.
12. Once satisfied with the model size, click on the Object Mode button on the bottom left and switch the selection to Edit Mode. Then press W (with the cursor located in the view window) and click on Remove Doubles, which should remove artefact vertices in the structure.

13. Export the finished model in the VRML2 format: click on File > Export > VRML2 (.wrl). A scripted version of steps 11-13 is sketched below.
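Steps 11-13 can also be driven from Blender's built-in Python console; the sketch below assumes the Blender 2.6x/2.7x Python API (operator and property names differ in later Blender versions), and the scale factor and output path are placeholders chosen for illustration.

```python
# Join the imported atoms, scale the model, remove duplicate vertices,
# report the triangle count against the Shapeways limit, and export a
# VRML2 file (Blender 2.6x/2.7x API assumed).
import bpy

# select every mesh object and make one of them active
meshes = [o for o in bpy.context.scene.objects if o.type == 'MESH']
for o in bpy.context.scene.objects:
    o.select = (o.type == 'MESH')
bpy.context.scene.objects.active = meshes[0]

bpy.ops.object.join()                                # step 11: Join
bpy.ops.transform.resize(value=(0.05, 0.05, 0.05))   # step 11: Scale (trial and error)

bpy.ops.object.mode_set(mode='EDIT')                 # step 12: Remove Doubles
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles()
bpy.ops.object.mode_set(mode='OBJECT')

# polygon count check (Shapeways limit: 1 000 000; a quad counts as 2 tris)
obj = bpy.context.active_object
tris = sum(len(p.vertices) - 2 for p in obj.data.polygons)
print("triangle count:", tris)

bpy.ops.export_scene.vrml2(filepath="model.wrl")     # step 13: export .wrl
```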
14. The created .wrl file can now be handled by many commercial and academic 3D-printing facilities. Models presented in this work have been printed by the popular website Shapeways. 12 After brief user registration, the source file can be uploaded onto the Shapeways website by clicking on Make + Sell > Upload > Select 3D File and then choosing the produced .wrl file. Units should be set to inches, and the Upload Now button should be clicked. The Shapeways website will then perform the upload, estimate the model size, and confirm whether the model is indeed printable. The two most commonly encountered problems during the upload are large file sizes (Shapeways imposes a limit of 64 megabytes, which can be somewhat expanded by uploading compressed ZIP files) and a large number of polygons. The latter are created during conversion of the structure in Blender, and their number can be checked by consulting the Tris number on the top-right bar in the Blender window (Fig. 2, bottom). Shapeways limits the complexity of the uploaded models to 1 000 000 polygons.
15. If the model size is not satisfactory, the model should be scaled again (steps 11 and onward) and the process repeated until the desired size is obtained as the estimate. Once the model size is finalized, the Shapeways team will check the printability of the proposed model and inform the user if there are potential issues. If no error is reported, the model can be printed. All that remains is to choose a material for the 3D model. At the time of this writing, Shapeways offered a wide variety of plastic, ceramic and metallic substrates (steel, silver, brass, bronze), but the only material offering full-colour functionality was Full Color Sandstone, a proprietary mixture of plasters, vinyl polymers, and carbohydrates. 13 Incidentally, this is one of the least expensive 3D printing materials offered. If a monochromatic model is desired, we anticipate that most other materials would function well (although we have not tested them).
16. If printability errors are reported, they are most commonly related to the physical limitations of the sandstone material used in printing. Thus, structures with many bonds may make those bonds too thin (<2 mm) to support themselves; in such cases, either a smaller fragment of the structure should be chosen for printing or the representation should be switched to Space-filling, with larger cpk_scale values used in step 7.
An example of a finished model is shown in Fig. 3. At current (February 2014) prices, a model similar to the one shown in Fig. 3 will cost between $20 and $100 depending on the size. 14 Because sandstone is essentially plaster, models produced from it are rather fragile (they will easily shatter if dropped), thermostable only up to 60 °C, not resistant to water, and have grainy surfaces. There are two solutions to the last two problems. First, the printed model can be dipped in glue (e.g. ZPrinter Z-Bond 90 Infiltrant) to form a coating that gives a strengthened material with a glossy finish when dried. The usage and safety instructions for this product should be followed. For example, when the models are dipped into a plastic container full of the glue, there is a large temperature increase (the process is exothermic) and outgassing is significant enough to warrant the use of personal protective equipment and a ventilated area (fume hood). The model is then patted dry to remove excess glue. The glue penetrates ~2 mm into the model; excess glue will pool and deteriorate the finish. A simpler method that gives similar results makes use of a glossy acrylic spray that can be applied with repeated spray-dry cycles (4-5 times).
Conclusions
In conclusion, we have presented a set of guidelines for converting any small-molecule or extended-material crystal structure into a 3D-printed model. The value of these models lies in facilitating communication of crystal structure details both in the classroom and between experienced practitioners in the field. Our set of instructions uses freely available software, requires no programming knowledge and no knowledge of 3D printing techniques, and produces models using a commercial, easy-to-use website.
As with many rapidly developing technologies, we expect these instructions to become outdated within several years, as 3D printers enter the mainstream and crystal structure processing software becomes better integrated with this obviously very relevant technology. Until then, we hope that our colleagues will find this protocol, and its 3D-printed products, useful and educational.
Note added in proof
During the production of this paper, two related papers also in production came to our attention: (a) P. Kitson, A. Macdonell, S. Tsuda, H. Zang, D.-L. Long and L. Cronin, Cryst. Growth Des., 2014, DOI: 10.1021/cg5003012; (b) V. F. Scalfani and T. P. Vaid, J. Chem. Educ., 2014, DOI: 10.1021/ed400887t.
Fig. 1 The starting point: 2D representation of the MOFF-3 structure produced directly from its .cif file using Mercury 3.3.
"year": 2014,
"sha1": "2fd33675c09e80c692c61300c5d153b3636e9da3",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2014/ce/c4ce00371c",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "756dd20b05420e12d1cac86ed6d94a0e02d5c49d",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Development and characterisation of microsatellite markers for the fungus Lasiodiplodia theobromae
Lasiodiplodia theobromae is an important fungal pathogen of higher plants in tropical and sub-tropical regions. The fungus infects divergent hosts in a wide range of environmental conditions, suggesting that it is highly variable. The aim of this study was to develop new polymorphic microsatellite markers from a Brazilian isolate of L. theobromae that can be used in population studies of this fungus and related fungi. The nine microsatellite markers developed included six that revealed allelic polymorphisms among nine isolates collected from infected plants in Brazil. Preliminary evaluation of the markers suggested substantial genetic variability among Brazilian L. theobromae populations. These markers have potential utility for evolutionary and epidemiologic studies of this fungus.

Keywords: simple sequence repeat (SSR), molecular marker.
Lasiodiplodia theobromae (syn. Botryodiplodia theobromae; teleomorph Botryosphaeria rhodina (Berk. & M.A. Curtis) Arx) is one of the most important fungal plant pathogens in northern Brazil. This fungus is both ubiquitous and plurivorous and is able to infect over 500 plant species, causing symptoms ranging from seed rot to the discolouration of timber (7, 8). Infection is generally limited to wounded or stress-weakened plants (1, 4, 5, 6). The species reproduces mainly by asexual mitospores, with the sexual stage rarely being observed under field conditions (7).
The current taxonomic treatment of L. theobromae is based entirely on culture characteristics and asexual reproductive structures (e.g. pycnidium, conidiophore, and conidiospores) (7). The extensive list of synonyms that apply to the species illustrates the confusion that exists over its taxonomic and phylogenetic status. Therefore, it is important to determine whether different genetic variants have evolved to specialise towards the infection of particular hosts or if the entire aggregate is characterised by genotypes that all possess the capacity for a broad host range. In order to distinguish between these alternative scenarios, it is first important to develop the capability to discriminate between closely related pathotypes.
Simple Sequence Repeat (SSR) or microsatellite analysis is widely acknowledged as the method of choice for molecular studies of population genetic structure, kinship, genotype diagnosis and genetic evolution.Whilst some SSR markers have recently been developed for species closely related to L. theobromae from South Africa (2, 3, 9), the aim of this study was to develop polymorphic microsatellite markers from Brazilian isolates that can be used in population studies of L. theobromae and related fungi.
A mycelial mat (200 mg) was cultured from an isolate (Ao1) obtained from a gummosis-infected cashew plant from northeastern Brazil (Piauí state) and grown in liquid still culture of potato dextrose broth (PDB) for 15 days. The culture was then macerated and its DNA extracted using the DNeasy kit (Qiagen) according to the manufacturer's protocol. A genomic library was constructed by first double-digesting fungal genomic DNA using the restriction enzymes Sau3AI and HindIII to generate fragments of typically 600-800 bp and then ligating the products directly into the pGEM vector following the manufacturer's instructions (Promega, Madison, Wis, USA), before transforming the plasmids into Escherichia coli (Stratagene) by heat shock. Clones with inserts containing SSRs were selected by colony blotting with a variety of 32P-labelled oligonucleotide probes (e.g. (CA)15, (AGG)15, (GA)15) and exposing to X-ray film overnight at -80 °C.
Inserts of 'positive' colonies were amplified by PCR using the plasmid-specific M13 universal forward and reverse primers. The reaction mix for the PCR comprised 1.0 µl 10× reaction buffer, 0.3 µl MgCl2, 0.2 µl deoxynucleotide triphosphate (dNTP), 0.1 µl Taq DNA polymerase, 0.3 µl of each primer pair as described, 1 µl bacterial suspension, and 6.80 µl deionised water. PCRs were performed on a GeneAmp PCR System 2700 (Applied Biosystems) thermal cycler using an initial 94 °C denaturing step for 1 min, followed by 35 cycles of 1 minute at 94 °C, 1 minute at 55 °C (annealing) and 4 minutes at 72 °C (extension), and finally 10 minutes at 72 °C. PCR amplicons from specific clones were size-estimated after electrophoresis through a 1.5% (w/v) agarose (midi) gel at 120 V for around 2 h 30 min.
Strong amplicons in the expected size range were purified using the QIAprep (Qiagen) protocol and subjected to cycle sequencing after PCR using the M13 primers and the BigDye terminator (Perkin-Elmer Applied Biosystems) cycle sequencing kit (4.0 µl BigDye 3.1, 1.6 µl 10 µM M13/SSR primers, and 4.4 µl of template DNA). The reaction was programmed for 30 s denaturation at 95 °C, followed by 30 cycles of 30 s (annealing) at 50 °C and 4 min at 60 °C (extension). Sequencing was performed on an ABI Prism (Perkin-Elmer Applied Biosystems), and the base sequence was analysed using ChromasPro 2.3 software (http://www.technelysium.com.au/chromas.html). A total of 41 clones were sequenced, 11 sequences containing microsatellites were identified, and flanking primers were designed to amplify these regions using the Primer3 software (Whitehead Institute Biomedical Research, available at: http://frodo.wi.mit.edu/cgi-bin/primer3/primer3_www.cgi).
Ten SSRs were found to contain 6 to 15 uninterrupted repeats, and specific primer pairs were designed to target the flanking regions of these loci by reference to the insert sequence. A gradient thermocycler (Whatman-Biometra) was then used to determine the optimal annealing temperature for each SSR primer pair; genomic DNA of isolate Ao1 acted as template for all PCRs. Nine of the ten primer pairs generated amplicons, with annealing temperatures ranging from 52 to 62 °C (Table 1). Primers targeting locus 7 failed to generate any PCR products. These SSR markers were then evaluated for their ability to reveal allelic polymorphisms between genotypes of the fungus using the following nine Brazilian isolates of L. theobromae collected from divergent plant hosts: Ao (Anacardium occidentale, cashew), Am (Annona muricata, soursop), Cp (Copernicia prunifera, wax palm), Mi (Mangifera indica, mango), Cl (Citrus limon, lemon), As (Annona squamosa, sweetsop), Nl (Nephelium lappaceum, rambutan), and two isolates of Tc (Theobroma cacao, cocoa). For this, one primer of each pair was fluorescently labelled with HEX or FAM (CMWG), submitted to PCR, and fractionated by capillary electrophoresis on an ABI Prism automated sequencer. The size of all amplicons detected was determined using the Genotyper package software.
In spite of the limited number of samples included in this test, the PCR products yielded by seven primer pairs included polymorphisms in allele length among genotypes (Table 1). Data from Genotyper were then compiled into Excel files (Microsoft), with a data matrix of characters being compiled for each isolate by scoring the presence or absence of each allele at each locus. These data were then used to generate an unweighted pair-group method with arithmetic mean (UPGMA) genetic distance tree based on Nei and Li's unbiased genetic distance, using the statistical package MVSP (Multi-Variate Statistical Package v.3.1) (Kovach Computing Services, Anglesey, Wales).
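The distance and clustering steps described above can be reproduced with standard libraries; the sketch below is an assumed equivalent of the MVSP analysis, exploiting the fact that Nei and Li's similarity, 2n_xy/(n_x + n_y), is the complement of the Dice distance implemented in SciPy, and that UPGMA corresponds to average linkage. The allele matrix here is a random toy example, not the published data.

```python
# Build a UPGMA tree from a binary allele presence/absence matrix
# (rows = isolates, columns = alleles scored 1/0).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.random.default_rng(0).integers(0, 2, size=(9, 20))  # toy data
labels = ["Ao", "Am", "Cp", "Mi", "Cl", "As", "Nl", "Tc1", "Tc2"]

d = pdist(X, metric="dice")          # 1 - Nei & Li similarity
tree = linkage(d, method="average")  # UPGMA = average linkage
dendrogram(tree, labels=labels)      # plotting requires matplotlib
```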
The resultant dendrogram revealed variability among all nine isolates. Interestingly, the two isolates collected from Theobroma cacao showed markedly lower levels of genetic divergence than that between isolates from distantly related hosts (Fig. 1).
Thus, the primers developed in this study can be used to determine the genetic diversity of L. theobromae isolates and will also have value in providing diagnostic tests to distinguish ecotypes of this fungus.
Table 1. Primers designed for the amplification of microsatellites of Lasiodiplodia theobromae. (*) Annealing temperature intervals. (**) Number of alleles among nine isolates of the fungus.
"year": 2008,
"sha1": "e315cee233f23d1c1b2b8b6a953b117543f1f8ad",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/sp/a/4bGfD5PKvK7ZtxR8dhDK6Nm/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e315cee233f23d1c1b2b8b6a953b117543f1f8ad",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Random Walk Models Classifications: An Empirical Study for Malaysian Stock Indices
This article studies the random walk models introduced by Campbell et al. for the Malaysian stock market. The analysis is implemented under a possible drastic structural change in the economy, using an iterative structural change test. After the break-date identification, the random walk hypothesis is tested by the multiple variance ratio test in two separate periods. We further examine the serial correlations of the returns' squared innovations for random walk classification. Our empirical results show that random walk type 3 dominates most of the Malaysian stock indices.
INTRODUCTION
Informational efficiency is often used to describe how all relevant information is impounded into the security prices of financial markets. Fama's [1] weak-form efficient market hypothesis (EMH), which is our focus, states that the current asset price is determined only by the historical prices (information set) of that particular asset. If a stock price fails to reject the random walk hypothesis (RWH), this implies that future returns are unpredictable using information on past returns. On the other hand, if the stock price is characterised by a mean-reverting (trend-stationary) process, then there is a tendency for the price level to return to its trend path, which suggests the presence of a predictable component based on historical information. Campbell et al. [2] extended this line of work by distinguishing random walk processes into three sub-hypotheses. Random walk 1 (RW1) is the most restrictive model, requiring independent and identically distributed (i.i.d.) price changes. In random walk 2 (RW2), the identically distributed condition is not imposed; therefore, unconditional heteroscedasticity is allowed in the increments. Finally, by relaxing the independence restriction of RW2, the RW3 model is obtained. RW3 further examines the possibility of serial correlation in the squared increments, which reflects the presence of conditional heteroscedasticity. This conditional volatility is well modelled by the ARCH models introduced by Engle [3] and Bollerslev [4], among others. It is worth noting that RW3 contains RW1 and RW2 under special conditions. The implication of the RWH can sometimes provide insight into the efficient market hypothesis; however, the RWH can only be treated as equivalent to the EMH under the risk-neutrality condition. Investors, economists and econometricians have shown considerable interest in identifying these processes, which are important for investment decisions, the implications for market efficiency, and the development of correct model specifications.
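For reference, the three hypotheses can be summarised compactly in the standard Campbell-Lo-MacKinlay notation (ours, not introduced in this paper). With log price $p_t$ and drift $\mu$,

$$p_t = \mu + p_{t-1} + \varepsilon_t,$$

RW1 requires $\varepsilon_t \sim \mathrm{IID}(0, \sigma^2)$; RW2 keeps independence but allows the distribution of $\varepsilon_t$ to vary over time (unconditional heteroscedasticity); RW3 only requires uncorrelated increments, $\mathrm{Cov}(\varepsilon_t, \varepsilon_{t-k}) = 0$ for all $k \neq 0$, while permitting dependence such as $\mathrm{Cov}(\varepsilon_t^2, \varepsilon_{t-k}^2) \neq 0$ for some $k \neq 0$.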
As an emerging stock market, the Kuala Lumpur Stock Exchange (KLSE) has received great attention from researchers and investors [5-8] as a case study and a potential investment alternative. For structural break analysis, Chaudhuri and Wu [9] reject the null hypothesis of a random walk in the KLSE, among 17 other emerging markets, using 12 years of monthly stock prices. Goh et al. [10] relate the structural break to the Asian financial crisis for five ASEAN countries, including the Malaysian stock index. They discovered a structural break on 1st September, after the implementation of the RM-USD currency peg, using the Vogelsang [11] approach. Similarly, Lean and Russell [12] applied the LM unit root test to examine the random walk hypothesis for eight Asian countries, including Malaysia, using the natural log of weekly stock indices from 1991 to 2005. Chin and Zaidi [8] investigated the recovery period (after the Asian crisis) of the KLSE and found that the composite index follows a mean-reverting process. To the authors' best knowledge, relatively few studies have been carried out at the sectoral level under the presence of a structural break. Thus, it is interesting to investigate the underlying processes of the nine sectoral markets in the Malaysian stock market.
This study evaluates the possible types of random walk in the Kuala Lumpur Stock Exchange (KLSE) and nine sectoral indices before and after the Asian financial crisis. The ten selected stock indices consist of the KLCI and nine major sectors: Industrial (IND), Industrial Products (INP), Consumer Products (COP), Construction (CON), Mining (MIN), Finance (FIN), Properties (PRO), Plantation (PLA) and Trading/Services (TRA). The Technology and Syariah indices are not included in our empirical analysis due to, respectively, data unavailability (the series begins in 2000) and overlapping listed companies that already appear in other sectors. We first examine the random walk hypothesis using multiple variance ratios. Each index is then classified according to the Campbell et al. [2] random walk processes, namely the homoscedastic RW (RW1) and the heteroscedastic RW (RW2). For RW3, even though a returns series may follow the random walk hypothesis with uncorrelated innovations, its squared innovations often exhibit serial correlation; this phenomenon is also known as the conditional heteroscedasticity effect introduced by Engle [3]. After the classification of the RW type, the predictable component based on historical information may assist investors in producing above-average profits.
Daily closing transaction price indices were selected from 1996 to 2006, covering the periods before and after the Asian crisis. All data were collected from DATASTREAM, with 2578 observations for each series. During this period, the KLSE composite index (CI) came under severe pressure and encountered a drastic fall in price level. One of the objectives of this study is to examine whether the nine sectoral indices follow the behaviour of the KLCI. The specific day of the structural break is determined by the iterative Chow test introduced by Andrews [13]. Alternative tests for an unknown break location include the CUSUM test of Brown et al. [14] and the UDMax and WDMax statistics of Bai and Perron [15], among others.

Multiple Variance Ratios test: Lo and MacKinlay

Lo and MacKinlay (LOMAC) [16] focus on the individual variance ratio test for a specific interval under the random walk hypothesis. LOMAC conclude that, for a random walk, the variance of the increments is linear in the observation interval in a finite sample. Let there be a sample of $nq+1$ observations $(p_0, p_1, \ldots, p_{nq})$ at equally spaced intervals, where $q$ is any integer greater than 1 and $nq$ is the number of increments of $p_t$. The variance of a $q$th-differenced variable is then $q$ times as large as that of the first-differenced variable. The idea can be illustrated in the form of

$$\mathrm{Var}(p_t - p_{t-q}) = q\,\mathrm{Var}(p_t - p_{t-1}),$$

where $q$ is any positive integer. The variance ratio can further be written as

$$VR(q) = \frac{\sigma^2(q)}{\sigma^2(1)}.$$

The unbiased estimators $\sigma^2(1)$ and $\sigma^2(q)$ are denoted as

$$\hat{\sigma}^2(1) = \frac{1}{nq-1}\sum_{t=1}^{nq}\left(p_t - p_{t-1} - \hat{\mu}\right)^2, \qquad \hat{\mu} = \frac{1}{nq}\left(p_{nq} - p_0\right),$$

$$\hat{\sigma}^2(q) = \frac{1}{m}\sum_{t=q}^{nq}\left(p_t - p_{t-q} - q\hat{\mu}\right)^2, \qquad m = q(nq-q+1)\left(1-\frac{q}{nq}\right).$$

Two alternative test statistics, $Z(q)$ and $Z^*(q)$, correspond to the assumptions of homoskedastic and heteroskedastic random walk increments, respectively. Under the null hypothesis of homoskedastic random walk increments, the associated test statistic has an asymptotic standard normal distribution:

$$Z(q) = \frac{VR(q) - 1}{\sqrt{\phi(q)}} \sim N(0,1), \qquad \phi(q) = \frac{2(2q-1)(q-1)}{3q(nq)}.$$

On the other hand, the associated test statistic for heteroskedastic random walk increments, $Z^*(q)$, is given as

$$Z^*(q) = \frac{VR(q) - 1}{\sqrt{\phi^*(q)}}, \qquad \phi^*(q) = \sum_{j=1}^{q-1}\left[\frac{2(q-j)}{q}\right]^2 \hat{\delta}(j),$$

$$\hat{\delta}(j) = \frac{\sum_{t=j+1}^{nq}\left(p_t - p_{t-1} - \hat{\mu}\right)^2\left(p_{t-j} - p_{t-j-1} - \hat{\mu}\right)^2}{\left[\sum_{t=1}^{nq}\left(p_t - p_{t-1} - \hat{\mu}\right)^2\right]^2}.$$

Chow and Denning [17] extend the test to cover all the intervals jointly, consistent with the random walk hypothesis.
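To make the estimators above concrete, the following is a minimal sketch (our illustration, not the authors' code) of the Lo-MacKinlay variance ratio statistics; the input p is a series of log prices.

```python
import numpy as np

def variance_ratio_test(p, q):
    """Return VR(q), Z(q), and Z*(q) for log prices p of length nq+1."""
    p = np.asarray(p, dtype=float)
    nq = len(p) - 1
    r = np.diff(p)                                    # one-period increments
    mu = (p[-1] - p[0]) / nq                          # mean increment
    var1 = np.sum((r - mu) ** 2) / (nq - 1)           # sigma^2(1), unbiased
    m = q * (nq - q + 1) * (1 - q / nq)               # bias correction
    rq = p[q:] - p[:-q]                               # q-period increments
    varq = np.sum((rq - q * mu) ** 2) / m             # sigma^2(q)
    vr = varq / var1
    # homoskedasticity-consistent statistic Z(q)
    phi = 2 * (2 * q - 1) * (q - 1) / (3 * q * nq)
    z = (vr - 1) / np.sqrt(phi)
    # heteroskedasticity-consistent statistic Z*(q)
    e2 = (r - mu) ** 2
    denom = np.sum(e2) ** 2
    phi_star = 0.0
    for j in range(1, q):
        delta_j = np.sum(e2[j:] * e2[:-j]) / denom
        phi_star += (2 * (q - j) / q) ** 2 * delta_j
    z_star = (vr - 1) / np.sqrt(phi_star)
    return vr, z, z_star
```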
The proposed multiple variance ratio (MVR) test provides a procedure for the multiple comparison of a set of variance ratio estimates with unity. Consider a set of variance ratio estimates $\{M_r(q_i),\ i = 1, 2, \ldots, m\}$; the random walk null hypothesis corresponds to the set of sub-hypotheses $H_{0i}: M_r(q_i) = 0$ against $H_{1i}: M_r(q_i) \neq 0$. Since any rejection of $H_{0i}$ leads to the rejection of the random walk null hypothesis, only the maximum absolute value in the set of test statistics is considered. The largest absolute value of the test statistics is assessed using the following probability inequality:

$$\Pr\left[\max\left(|Z(q_1)|, \ldots, |Z(q_m)|\right) \le SMM(\alpha; m; N)\right] \ge 1 - \alpha, \qquad (9)$$

where $SMM(\alpha; m; N)$ is the upper $\alpha$ point of the Studentized Maximum Modulus (SMM) distribution with parameter $m$ and $N$ (sample size) degrees of freedom.
Asymptotically, when the sample size is extremely large ($N \to \infty$), the limiting SMM value is

$$SMM(\alpha; m; \infty) = Z_{\alpha^*/2}, \qquad \alpha^* = 1 - (1-\alpha)^{1/m},$$

where $Z_{\alpha^*/2}$ is the upper $\alpha^*/2$ quantile of the standard normal distribution. If the maximum absolute test statistic exceeds this critical value, the homoskedastic random walk null hypothesis is rejected. This rejection indicates the presence of heteroskedasticity and/or serial correlation in the equity price series. In the subsequent test of the heteroskedastic random walk (RW2), rejection of the heteroskedastic random walk points to the existence of autocorrelation in the equity price series.
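Building on the variance_ratio_test sketch above, the Chow-Denning joint decision can likewise be illustrated; the holding periods in qs and the use of the heteroskedasticity-robust statistic are our assumptions, not the authors' stated choices.

```python
from scipy.stats import norm

def chow_denning(p, qs=(2, 4, 8, 16), alpha=0.05):
    """Joint Chow-Denning test over several holding periods qs."""
    # max absolute heteroskedasticity-robust statistic over all intervals
    mv = max(abs(variance_ratio_test(p, q)[2]) for q in qs)
    # asymptotic SMM critical value: SMM(alpha; m; inf) = z_{alpha*/2}
    alpha_star = 1 - (1 - alpha) ** (1 / len(qs))
    crit = norm.ppf(1 - alpha_star / 2)
    return mv, crit, mv > crit   # reject the random walk if mv > crit
```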
The presence of RW3 can be further justified using the LM serial correlation test and Engle's ARCH test, respectively. RW3 is consistent with the conditional volatility that often occurs in financial asset pricing. The associated risk can be identified by including the conditional volatility in the mean equation, as suggested by the ARCH-in-mean model of Engle et al. [18].

RESULTS AND DISCUSSIONS

Fig. 1 illustrates the plots of the natural log price index for all the studied indices and indicates that a similar structural change happened in all the indices within the years 1997 to 1998. Other than that, the movements of the price indices seem to be stable in general. In order to determine the exact location of the break date due to the Asian financial crisis, we ran the Andrews [13] test. Except for the Trading & Services index, significant break points were found for all the indices. The earliest break was experienced by the Property index (12/25/1997) during the crisis, followed by Finance (8/18/1998) and then simultaneously by the other indices at 9/2/1998, a day after the Malaysian government imposed the currency policy. Most of the indices rebounded significantly after the currency controls; for example, the KLCI radically increased by 12%, Construction by 13%, Finance, Plantation and Industrial Products by 9%, and the lowest, Property, by 3% (Table 1). These findings imply that each stock index contains a large permanent or random walk component. The presence of positive serial correlation does not necessarily imply market inefficiency; as suggested by Urrutia [20], it can merely represent the growth of a particular stock market. Table 1 reports that the homoscedastic random walks are rejected at the 1% significance level for all indices before the structural change.
We further examined the heteroscedastic random walk and found that half of the indices, namely the KLCI, Industrial Products, Industrial, Mining and Plantation indices, failed to reject the null hypothesis. The other five indices, Consumer Products, Construction, Property, Finance and Trading & Services, rejected the heteroscedastic random walk, suggesting autocorrelation in the daily returns' increments, as indicated in Table 1.

After the structural break: After the economic crisis and currency control, some significant changes are observed, where the VR(q) for the KLCI, Industrial Products, Industrial and Mining indices are less than unity. The negative serial correlation indicates that these indices converted to mean-reverting processes, as compared with the mean-aversion observed before the structural change. Table 2 shows results similar to those before the structural break, where only the Construction index failed to reject the homoscedastic random walk. After the structural change, all the indices failed to reject the heteroscedastic random walk. We conclude that only Construction follows an independent and identically distributed random walk (RW1).
The remaining indices evidence heteroscedastic random walks (RW2). Next, all the indices strongly reject the null hypothesis of no serial correlation in the squared innovations, suggesting the presence of RW3.
CONCLUSION
This paper studies the classification of random walk processes in the Malaysian stock market. Our results demonstrate that the Asian crisis and currency control had an instantaneous impact on the Malaysian stock market in general. Significant changes happened in the KLCI, Industrial Products, Industrial and Mining indices, where the mean-aversion processes observed before the structural change transformed into mean-reverting processes. These findings imply that there is a tendency for the price levels to return to their trend paths, suggesting that after the structural change the mentioned indices contain a predictable component based on their historical information.
Our results (before and after the break) also indicate that the RW2 unconditional heteroscedastic increments have correlated squared increments, leading us to conclude the presence of RW3. RW3 suggests that even though the squared increments of the return series are predictable, the return increments still follow an unpredictable random walk process. The dependence property of squared increments is widely used in financial time series analysis, such as risk management, portfolio analysis and derivative pricing. The most prominent application is the measurement of value-at-risk (VaR) in risk and portfolio analysis. As a result, we conclude that most of the Malaysian stock indices are in favour of RW3. For future work, we would like to look into the estimation and prediction of VaR for the Malaysian stock indices.
"year": 2008,
"sha1": "41e0a946ddc18187a9253ba14723906fd832e3ec",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajassp.2008.411.417",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9abbd0123f9a8cd731824d5134fd50d715159b3b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Isolated intracranial Rosai Dorfman Disease without nodal involvement.
Background: Rosai Dorfman disease (RDD) is a benign lymphohistiocytosis that often involves lymph nodes and presents as massive painless lymphadenopathy with sinus histiocytosis. Involvement is usually systemic; intracranial involvement is rare, and intracranial involvement without lymphadenopathy is extremely rare. Case Presentation: We present the case of a 60-year-old female with seizures and left-sided weakness and no lymphadenopathy. Magnetic resonance imaging (MRI) revealed a contrast-enhancing, homogeneous right frontal convexity mass. Management & Results: The patient was kept on antiepileptic medications but soon presented with fits and a slight expansion of the frontal mass. Surgery was performed (right frontal craniotomy), the mass was surgically resected, and biopsy indicated RDD. Conclusion: Only seven such cases have been reported to date. The prognosis of the disease is not poor if it is surgically treated; however, other measures, including radiotherapy, chemotherapy and steroids, are not very effective for treating the disease. Given the rarity of the disease, it should always be kept as a key differential in homogeneously enhancing lesions with dural involvement, with or without lymphadenopathy.
Introduction
RDD, also called sinus histiocytosis with massive lymphadenopathy, is a rare benign proliferative pathological condition characterized by massive, painless, bilateral cervical lymphadenopathy associated with fever, leukocytosis, elevated erythrocyte sedimentation rate (ESR), and weight loss 1 . Most cases with intracranial involvement include nodal involvement along with an extranodal component of disease, including the orbit, head and neck region, upper respiratory tract, skin, bone and testis. Although extranodal involvement has a prevalence of 43%, isolated intracranial involvement without nodal involvement is extremely rare 1-3 , and only 7 cases have been reported to date 4,5 . The disease usually affects young adults and can have a protracted course lasting from several months to years, commonly resulting in complete recovery 6 . Central nervous system (CNS) RDD typically presents during the 4th to 5th decade and is more prevalent among males, with a mean age of 39.4 years. In this study we present an extremely rare case of intracranial RDD without nodal involvement in a 60-year-old female.
Case Presentation
A 60-year-old unmarried lady presented to the walk-in clinic in February 2017. She had a history of hypertension, and at presentation she had focal fits with secondary generalization and left-sided weakness for one day. According to the patient and her attendant, she had developed fits that were focal initially and then underwent secondary generalization, lasting five minutes, without aura. She had no previous history of fits, nor any significant family history.
Management & Results
The patient was rushed to the hospital and admitted to accident and emergency (A&E), where she was initially treated symptomatically; she was seen in the walk-in clinic the next day with weakness of the left side of the body, with power 0/5 on the left side. Apart from hypertension and a past history of allergic rhinitis, the systemic enquiry and clinical and systemic examinations were unremarkable. Her initial brain MRI scan showed a homogeneously enhancing right frontal convexity lesion, provisionally considered a convexity meningioma; the patient was kept on antiepileptics and follow-up was recommended. Two months later, in April 2017, the patient was readmitted due to fits, and there was a slight expansion of the frontal mass. A surgical procedure was planned, taking all ethical measures and obtaining consent from the family of the patient. A right frontal craniotomy was carried out with complete excision of the lesion, and the specimen, an irregular intact nodular tissue covered with a piece of frontal tissue, was sent for histopathology. Histologically, the lesion revealed dense fibrocollagenous tissue with inflammatory infiltrates, scattered lymphocytes, plasma cells along with histiocytes, and extensive areas of emperipolesis. Cluster of differentiation-68 (CD-68) staining highlighted histiocytes, and the immunohistochemical stain S-100 was positive while CD-1a was negative, hence indicating intracranial RDD.
Discussion
RDD is a histioproliferative disorder that mainly involves cervical lymphadenopathy 7 . The disease was first described in 1965 1 , predominantly affects children and young adults, and is characterized by massive painless bilateral cervical lymphadenopathy with fever, a raised white blood cell (WBC) count, a high ESR, and polyclonal hypergammaglobulinemia 8 .
Microscopically, lymph nodes show dilated sinuses containing foamy histiocytes and plasma cells. Many of these histiocytes contain intact hematopoietic cells, mostly lymphocytes, within their cytoplasm, a phenomenon known as emperipolesis 2,3,9 . On immunohistochemical examination these histiocytes are positive for S-100 protein and CD-68 and negative for CD-1a 2,7,10 . The literature suggests that Epstein-Barr virus or human herpesvirus 6 (HHV-6) may be associated with this condition 2 .
Our case was initially diagnosed radiologically as a meningioma, but histology revealed RDD. Histologically, Langerhans cell histiocytosis resembles RDD, except that Langerhans cells have folded nuclei with longitudinal grooves, which are absent in RDD histiocytes, and Langerhans histiocytes are also positive for CD-1a 11 , whereas RDD histiocytes are negative for CD-1a 11 . In our case the patient had a convexity lesion over the right frontal region causing seizures and left-sided weakness, without any systemic or nodal involvement, which made the case extremely rare, as most reported cases involve some kind of systemic or nodal disease. Most patients with the disease have undergone total or subtotal surgical excision, whereas the role of radiochemotherapy and steroids in treatment remains doubtful.
Conclusion
Isolated intracranial RDD without nodal involvement is an exceptionally rare disease, and its diagnosis remains challenging. Histological features and immunohistochemical analysis are still the only reliable basis for diagnosing the disease. Surgical resection has proven an effective treatment, while other treatment strategies remain controversial for managing the disease.
Conflicts of Interest
None.
Current induced magnetization switching in PtCoCr structures with enhanced perpendicular magnetic anisotropy and spin-orbit torques
Magnetic trilayers having large perpendicular magnetic anisotropy (PMA) and high spin-orbit torque (SOT) efficiency are the key to fabricating nonvolatile magnetic memory and logic devices. In this work, PMA and SOTs are systematically studied in Pt/Co/Cr stacks as a function of Cr thickness. An enhanced perpendicular anisotropy field of around 10189 Oe is obtained and is related to the interface between the Co and Cr layers. In addition, an effective spin Hall angle of up to 0.19 is observed due to the improved antidamping-like torque obtained by employing the dissimilar metals Pt and Cr, with opposite signs of spin Hall angle, on opposite sides of the Co layer. Finally, we observe a nearly linear dependence of the spin Hall angle on the longitudinal resistivity in their temperature-dependent properties, suggesting that the spin Hall effect may arise from the extrinsic skew-scattering mechanism. Our results indicate that the 3d transition metal Cr, with its large negative spin Hall angle, could be used to engineer the interfaces of trilayers to enhance PMA and SOTs.
I. INTRODUCTION
In recent years, current-induced spin-orbit torques (SOTs) in trilayer structures, where an ultrathin ferromagnet (FM) is sandwiched between a heavy metal (HM) and an oxide, have attracted considerable research interest for highly efficient magnetization switching [1][2][3] and fast domain wall motion [4][5][6][7][8][9][10]. In such a SOT-based device, when an in-plane charge current (J_e) flows through the HM with strong spin-orbit coupling (SOC), including the 5d metals Pt [2], Ta [11], W [12], and Hf [13], etc., it is converted into a pure spin current (J_s). J_s then injects into the FM and exerts a torque on the magnetic moments. As a result, if the torque is sufficiently strong, the magnetization can be switched. It is well established that the SOT switching efficiency is directly related to the magnitude of the spin Hall angle (θ_SH), so considerable efforts have been devoted to obtaining a large θ_SH. An enhancement of the SOT switching efficiency was reported in a Pt/Co/Ta structure [14], in which the spin Hall angles of Pt and Ta have opposite signs.
Moreover, as the perpendicular magnetic anisotropy field (H_an^0) is also one of the key parameters in SOT systems with perpendicular magnetic anisotropy (PMA), considerable research effort has likewise been devoted to obtaining an enhanced H_an^0 to improve the thermal stability of spintronic devices [15,16], for example by engineering the interface quality [17,18,19], changing the SOC materials [16], and utilizing different post-processing methods [20]. In short, all of these efforts aim not only to increase the magnitude of θ_SH, but also to enhance the PMA.
In this work, we explore the PMA and SOTs in Pt/Co/Cr structures as a function of Cr thickness (t), where the 3d transition metal Cr has recently been experimentally confirmed to have a large and negative θ_SH [21,22]. Moreover, Pt and Cr were grown under optimized sputtering conditions in order to obtain a large θ_SH, as reported for W [23] and Pt [24]. Under these conditions, the pure spin currents generated from Pt and Cr are enhanced and expected to work in concert to improve the SOT switching efficiency. Here, we use an anomalous Hall effect (AHE) measurement setup [19] to characterize the PMA and SOT switching efficiency in Pt/Co/Cr stacks. Firstly, a remarkable enhancement of H_an^0, up to 10189 ± 116 Oe, is obtained with t = 5 nm. Secondly, SOT-induced magnetization switching is achieved at a relatively small critical current density (J_crit) due to the improved SOT switching efficiency. Simultaneously, the relation between the effective spin Hall angle (θ_SH^eff) and the longitudinal resistivity (ρ_xx) clarifies that the antidamping-like torque may mainly arise from the extrinsic skew-scattering mechanism in our systems. In addition, by measuring the extraordinary Hall resistance (R_Hall) under various applied direct currents (I), an obvious effect of Joule heating on magnetization switching is observed. A thermally activated domain wall (DW) nucleation and propagation mechanism is identified from temperature-dependent switching field (H_sw) measurements over a wide temperature (T) range from 150 to 475 K.
Moreover, the PMA still remains, with H_sw around 170 Oe, when T reaches 475 K. Our findings suggest that Pt/Co/Cr systems could potentially be applied in spintronic devices due to their significantly enhanced PMA and SOTs.
II. EXPERIMENTAL DETAILS
The samples, with structure Ta(3)/Pt(5)/Co(0.8)/Cr(t)/Al(1) (thicknesses in nanometers, t = 1−5), were deposited on Corning glass substrates by direct-current magnetron sputtering. The growth was carried out at a base pressure of less than 5 × 10^-5 Pa. A relatively low sputtering power (P) with a high Ar gas pressure (p_Ar) can increase the resistivity of Pt and Cr, which has been identified as enhancing θ_SH [23]. The optimized powers for the 2 and 3 inch diameter Pt and Cr targets were 8 and 10 W, respectively, and the sputtering pressures for Pt and Cr were 0.53 and 0.99 Pa, respectively. We also note that, although the morphology and surface roughness of sputtered films are known to depend on P and p_Ar, atomic force microscopy (AFM) observation reveals that the sputtering conditions employed here do not significantly influence these parameters, as shown in the supplementary materials. A Ta target was used to produce the 3 nm seed layer and an Al target was used to produce the 1 nm protective overcoat for our devices. All structures were prepared at room temperature. The film thicknesses were determined from the deposition time and sputtering rate, which was calibrated by X-ray reflectivity. The thin-film stacks were patterned into Hall bars by photolithography and Ar ion milling, as shown in Fig. 1. An optical photograph of the patterned Hall bar structure and the DC Hall resistance experimental configuration is shown in Fig. 1(a). Fig. 1(b) shows a schematic illustration of the Pt/Co/Cr trilayer for the magnetotransport measurements. When a charge current (J_e) passes through Pt and Cr along the length of the Hall bar (x-axis), a spin current (J_s) is generated perpendicular to it.

III. RESULTS AND DISCUSSION

To reduce the influence of Joule heating and SOTs, a quite small current of I = 0.1 mA was employed during the R_Hall-H_z measurements. One can clearly see square-like loops even for samples at high temperature, indicating that a robust PMA is sustained over the whole temperature range. It should also be mentioned that H_sw and R_0 (the Hall resistance at H_z = 0) both decrease significantly as temperature increases. Additionally, as can be seen from Fig. 2(d), H_sw displays a linear dependence on the square root of T. The strong temperature dependence of H_sw is an indication of a mechanism of thermally activated domain wall (DW) motion; it can be expressed as [25,26]

H_sw(T) = H_sw(0) - a√T,    (1)
where H_sw(0) is the switching field at 0 K and the constant a depends on the activation energy of DW motion. The solid lines in Fig. 2(d) show the corresponding fits. To further confirm the magnetization reversal mechanism, we also summarize H_sw against the angle β (the angle between H_ext and the z-axis in the x-z plane) in Fig. 3(a), obtained from the angle-dependent R_Hall-H_ext loops, which were measured at I = 0.1 mA to reduce the spin Hall torque and Joule heating effects. Kondorsky first pointed out a magnetization switching mechanism for uniaxial, highly anisotropic systems involving the depinning of 180° domain walls or the nucleation and growth of reverse domains, based on the angular dependence of H_sw [27]. In this model, the switching field follows H_sw(β) = H_sw(0°)/cos β, so that switching only occurs when the reversed magnetic field component along the z direction reaches H_sw. Keeping this in mind, we now analyze the current-induced magnetization switching by SOTs with the assistance of an in-plane magnetic field along the x direction (H_x). We first consider the equilibrium equation of the magnetization using a simple macrospin model that includes the external field torque, the anisotropy torque, and the spin-orbit torque [1,30,31]. When J_e flows along the +x direction, the SHE-induced J_s is generated perpendicular to the Co layer (injected upward from the Pt layer and downward from the Cr layer), as shown in Fig. 1(b). Fig. 3(b) shows the H_x dependence of R_Hall under a low direct current (I = +0.5 mA). We obtain H_an^0 by fitting the data using Eq. (6), as shown in Fig. 3(c). As expected, a large H_an^0 ≈ 10189 ± 116 Oe is obtained for t = 5 nm; the H_an^0 values of several SOT devices with PMA are given in the supplementary materials. As far as we know, the H_an^0 obtained in this work is the largest value compared with other PMA structures, such as Pt/Co/Pt [32], Pt/Co/Ta [14], and Ta/CoFeB/MgO [30].
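A minimal sketch of the √T fit described above, assuming hypothetical (T, H_sw) pairs; the function name and all numerical values are placeholders rather than the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Thermally activated DW model: H_sw(T) = H_sw(0) - a*sqrt(T)
def hsw_model(T, hsw0, a):
    return hsw0 - a * np.sqrt(T)

# Hypothetical switching fields (Oe) over the 150-475 K range, for illustration
T = np.array([150.0, 225.0, 300.0, 375.0, 475.0])
Hsw = np.array([620.0, 510.0, 420.0, 330.0, 170.0])

popt, _ = curve_fit(hsw_model, T, Hsw, p0=(1000.0, 30.0))
print(f"H_sw(0) = {popt[0]:.0f} Oe, a = {popt[1]:.1f} Oe K^-0.5")
```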
Although quite a strong PMA is obtained in our Pt/Co/Cr stacks, the current-induced SOTs can overcome the anisotropy barrier to achieve magnetization switching according to Eq. (3). However, as discussed above, the magnetization reversal mechanism in our devices is governed by nucleation and/or depinning. Thus, the effective magnetic field along the z direction generated by the SOT only needs to reach H_sw, which is much smaller than the anisotropy field, to achieve magnetization switching. Encouragingly, current-induced magnetization switching (CIMS) can be achieved not only with H_x = 180 Oe, but also at a relatively small critical current density of ~4.5 × 10^6 A/cm^2, as depicted in Fig. 3(d). This can be ascribed to the improved SOTs in the Pt/Co/Cr system. Moreover, the directions of the current and H_x determine the polarity of the switching, which is consistent with the model of SHE switching [11].
The current-induced SOT effective fields were measured by the harmonic Hall voltage measurement technique. The measurement diagram is depicted in Fig. 1(b). A sinusoidal alternating current (AC) with a frequency of 133 Hz was passed through the Hall bars along the x-axis, and the first (V_ω) and second (V_2ω) harmonic Hall voltages were collected using an analog-digital/digital-analog card with frequency-spectrum analysis. The current-induced longitudinal antidamping-like field (H_DL) and the transverse field-like field (H_FL) are obtained with H_ext applied along the x (longitudinal field, H_L) and y (transverse field, H_T) directions, respectively. The AC-induced effective fields along a given in-plane field direction can then be determined as [33]

ΔH_DL(FL) = (H_DL(FL) ± 2ξ H_FL(DL)) / (1 - 4ξ^2),    (7)

where ξ is the ratio of the planar Hall resistance (ΔR_P) to the anomalous Hall resistance (ΔR_A), and ΔH_DL(FL) is the corrected antidamping-like (field-like) field taking ξ into account. Representative results are shown in Fig. 4(a). Since the measured Hall voltages simultaneously include contributions from the planar Hall effect (PHE) and the AHE, R_Hall can be expressed as [33]

R_Hall = (1/2) ΔR_A cos θ + (1/2) ΔR_P sin^2 θ sin 2φ,    (8)

where the first and second terms represent the AHE and PHE, respectively. Here θ is the polar angle and φ the azimuthal angle in the spherical coordinate system shown in Fig. 1(b). ΔR_P was obtained by measuring R_Hall as a function of φ with θ = 90°, and ΔR_A was obtained from R_Hall-H_z loops. A series of ξ values were calculated, as shown in Fig. 4(b). The large values of ξ indicate that the influence of the PHE on the SOT fields is significant and should not be neglected in our systems.
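A short sketch of the ξ correction in Eq. (7); the helper name, the sign convention chosen, and the numerical inputs are assumptions for illustration only:

```python
def correct_sot_fields(h_dl, h_fl, xi):
    """Planar-Hall correction of Eq. (7):
    dH_DL(FL) = (H_DL(FL) +/- 2*xi*H_FL(DL)) / (1 - 4*xi**2).
    The +/- sign depends on the magnetization state; '+' is used here."""
    denom = 1.0 - 4.0 * xi ** 2
    dh_dl = (h_dl + 2.0 * xi * h_fl) / denom
    dh_fl = (h_fl + 2.0 * xi * h_dl) / denom
    return dh_dl, dh_fl

# Hypothetical raw harmonic fields (Oe) and PHE/AHE ratio
dh_dl, dh_fl = correct_sot_fields(h_dl=12.0, h_fl=0.5, xi=0.4)
print(f"dH_DL = {dh_dl:.2f} Oe, dH_FL = {dh_fl:.2f} Oe")
```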
Based on Eq. (7), the ΔH_DL values were obtained; we then quantitatively calculated θ_SH^eff to compare the strength of the antidamping-like torque in the present systems using the standard relation

θ_SH^eff = 2e M_s t_Co ΔH_DL / (ħ J_e),

where M_s is the saturation magnetization of the Co layer and t_Co its thickness. Finally, we investigate the temperature dependence of the SOTs. Similarly, the harmonic Hall voltage measurement technique was used to quantify the current-induced SOT effective fields over a large temperature range from 200 to 400 K. Fig. 5(a) shows the temperature dependence of β_DL and ξ for the sample with t = 1 nm. One can see that β_DL increases with increasing temperature, whereas ξ shows only a weak temperature dependence. Notably, although H_FL is negligible, ΔH_FL cannot be ignored because of the large H_DL, according to Eq. (7). Fig. 5(b) depicts the corrected antidamping-like and field-like effective fields per unit current density (Δβ_DL/FL = ΔH_DL/FL / J_e) as a function of T; both types of torque exhibit a nearly linear dependence on T. The temperature dependence of Δβ_DL is similar to that in CuAu/FeNi/Ti stacks [34], in which the extrinsic SHE is the dominant mechanism rather than the intrinsic SHE. In other words, the extrinsic SHE, i.e. the scattering events, increases with increasing temperature, whereas the intrinsic SHE is not significantly affected. We attribute the temperature dependence of Δβ_DL to a similar mechanism, which is further discussed below.
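A one-off numerical estimate of θ_SH^eff from the relation above, in SI units; M_s, the corrected field, and J_e are placeholder values, not the paper's data:

```python
# theta_SH_eff = 2*e*Ms*t_Co*B_DL / (hbar*J_e), with B_DL in tesla
e = 1.602e-19      # elementary charge (C)
hbar = 1.055e-34   # reduced Planck constant (J s)

Ms = 1.0e6         # saturation magnetization of Co (A/m), placeholder
t_co = 0.8e-9      # Co thickness (m)
B_dl = 2.0e-3      # corrected antidamping-like field (T), placeholder (~20 Oe)
J_e = 1.0e11       # charge current density (A/m^2), placeholder

theta_eff = 2 * e * Ms * t_co * B_dl / (hbar * J_e)
print(f"theta_SH_eff ~ {theta_eff:.3f}")
```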
In metals with low resistivity, the scaling relation θ_SH ∝ ρ_xx for the spin Hall angle is most likely. The typically weak T dependence of ρ_xx is presented in Fig. 6(a): a small variation of ~12.2% in ρ_xx is obtained from 200 to 400 K. It is evident that temperature-dependent phonon-electron scattering is not the main source of the change in ρ_xx; instead, extrinsic scattering mechanisms may play the dominant role. Fig. 6(b) shows the relation between θ_SH^eff and ρ_xx. The linear correlation may be the best evidence that skew scattering is the dominant source of the SHE in our samples [30].
The temperature dependence of the corrected field-like torque (Δβ_FL) also deserves some discussion. As shown in Fig. 5(b), Δβ_FL, in the scenario of an interfacial Rashba torque, increases with temperature. The increase could be attributed to an increase in the bulk resistance with increasing temperature, which in turn increases the current flowing through the interface, thereby enhancing the corrected field-like torque. We emphasize that this explanation remains speculative and requires further experiments to confirm.
IV. CONCLUSIONS
In summary, we have performed a comprehensive study of the perpendicular magnetic anisotropy, magnetization switching mechanism, and spin-orbit torques of perpendicularly magnetized Pt/Co/Cr trilayers. The obtained perpendicular anisotropy field reaches up to 10189 ± 116 Oe, which is the largest value observed to date in metallic sandwich systems. We ascribe the enhanced perpendicular anisotropy field to the interface coupling between the Co and Cr layers.
In addition, spin-orbit torque induced magnetization switching is achieved at a relatively small critical current density on the order of 10^6 A/cm^2. Our results indicate that the 3d transition metal Cr, with its large negative spin Hall angle, could be used to engineer the interfaces of trilayers to enhance PMA and SOTs, which may benefit future spintronic devices.
"year": 2017,
"sha1": "ae6fe62c0d062cb893d973e1f1149c099e68596e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1708.09174",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ae6fe62c0d062cb893d973e1f1149c099e68596e",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Chemistry"
]
} |
Biosynthesis and assessment of antibacterial and antioxidant activities of silver nanoparticles utilizing Cassia occidentalis L. seed
This research explores the eco-friendly synthesis of silver nanoparticles (AgNPs) using Cassia occidentalis L. seed extract. Various analytical techniques, including UV-visible spectroscopy, transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and energy dispersive X-ray spectroscopy (EDX), were employed for comprehensive characterization. The UV-visible spectra revealed a distinct peak at 425 nm, while the seed extract exhibited peaks at 220 and 248 nm, indicating the presence of polyphenols and phytochemicals. High-resolution TEM unveiled spherical and oval-shaped AgNPs with diameters ranging from 6.44 to 28.50 nm. SEM showed a spherical shape and a polydisperse nature, providing insights into the morphology of the AgNPs. EDX analysis confirmed the presence of silver atoms at 10.01% in the sample. The XRD results unequivocally confirm the crystalline nature of the AgNPs suspension, providing valuable insights into their structural characteristics and purity. The antioxidant properties of the AgNPs, C. occidentalis seed extract, and butylated hydroxytoluene (BHT) were assessed, revealing IC50 values of 345, 500, and 434 μg/mL, respectively. Antibacterial evaluation against Bacillus subtilis, Staphylococcus aureus, and Escherichia coli demonstrated heightened sensitivity of the bacteria to AgNPs compared with AgNO3. The standard antibiotics tetracycline and ciprofloxacin, acting as positive controls, exhibited substantial antibacterial efficacy. The green-synthesized AgNPs displayed potent antibacterial activity, suggesting their potential as a viable alternative to conventional antibiotics for combating pathogenic bacterial infections. Potential biomedical applications of AgNPs are also discussed.
The fabrication of nanoparticles derived from noble metals has garnered significant attention in recent decades, with gold and silver emerging as primary candidates for synthesis. Among these, silver nanoparticles (AgNPs) have attracted particular interest due to their exceptional attributes, including conductivity, catalytic activity, stability, and antimicrobial properties 1,2. Notably, AgNPs serve as effective antibacterial, antiviral, and antifungal agents, mitigating surgical infections. Moreover, in contemporary research, AgNPs have emerged as promising candidates for anticancer therapeutics 3,4, facilitating both diagnosis and treatment, including anticancer potential and apoptosis studies against the Pa-1 (human ovarian teratocarcinoma) cell line 5.
The use of AgNPs in various fields, particularly in medicine and antimicrobial applications, has gained significant attention due to their unique properties. Pathogenic bacterial infections continue to pose a significant threat to public health, and the growing issue of antibiotic resistance underscores the need for alternative antimicrobial agents. AgNPs have demonstrated noteworthy antibacterial properties, and their green synthesis offers an eco-friendly route to produce them.
UV-visible spectroscopy
The biologically synthesized AgNPs from Cassia occidentalis L. seed extract were analyzed using a UV/Visible Spectrophotometer (Shimadzu 1800) to determine their absorption maxima within the range of 300-600 nm. The resulting data were plotted as wavelength (X-axis) against absorbance (Y-axis) on a graph.
Scanning electron microscopic analysis
The shape, morphology, and distribution of the synthesized AgNPs were assessed using a SEM. For SEM analysis, a minute amount of AgNPs was placed on conductive carbon tape affixed to an aluminum stub, followed by gold sputtering for 3-4 min.
Transmission electron microscopic analysis
The size and surface morphology of the synthesized AgNPs derived from Cassia occidentalis L. seed extract were characterized using a transmission electron microscope. A droplet of the AgNPs solution was deposited onto a carbon-coated copper grid, and images were captured at magnifications ranging from 6000 to 8000× using a Hitachi instrument (Model: S-3400N) operated at 80 kV.
Energy dispersive spectroscopy analysis
This method is employed for assessing the elemental composition of substances such as silver nanoparticles. The sample is inserted into a scanning electron microscope fitted with an EDX detector. Through EDX analysis, researchers gain crucial insights into the elemental makeup of AgNPs, facilitating the characterization and comprehension of their properties across diverse applications.
X-ray diffractometric analysis
The crystalline structure, lattice parameters, and grain size of the synthesized AgNPs were assessed using XRD. The powdered AgNPs sample was carefully placed in a cavity slide and gently compressed to create a uniform surface. The XRD instrument, operated with data-scan software, employed a scan rate of 1.2° per minute. Spectra were recorded within the 5° to 80° range using a CuKα filter (λ = 0.15418 nm) in 2θ/θ scanning mode. The crystallite size of the nanoparticles was determined using Scherrer's formula, D = Kλ/(β cos θ), where D is the crystallite size, K the shape factor (~0.9), λ the X-ray wavelength, β the full width at half maximum of the diffraction peak in radians, and θ the Bragg angle.
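A small helper applying Scherrer's formula; the peak position and width below are placeholders rather than values from the measured pattern:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical reflection at 2-theta = 39.02 deg with 0.5 deg FWHM
print(f"D ~ {scherrer_size_nm(39.02, 0.5):.1f} nm")
```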
Evaluation of antioxidant effect of AgNPs from C. occidentalis
To assess the antioxidant effects, 2 mL of 100 μM DPPH dissolved in methanol was mixed with 2 mL of different concentrations of AgNO3, C. occidentalis extract, and AgNPs. The mixtures were allowed to stand at room temperature for 30 min. Butylated hydroxytoluene (BHT) served as the positive control. Afterward, the absorbance of the samples was measured at 520 nm using a spectrophotometer. The DPPH free radical scavenging percentage was calculated using the following formula:

DPPH free radical scavenging (%) = [(A_control - A_test) / A_control] × 100,

where the test samples comprised 2 mL of DPPH and 2 mL of AgNO3, C. occidentalis extract, AgNPs, or BHT at various concentrations, while the control consisted of 2 mL of DPPH and 2 mL of methanol.
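A brief sketch of the scavenging-percentage calculation with a linear-interpolation IC50 estimate; the absorbance and concentration values are invented for illustration:

```python
import numpy as np

def scavenging_pct(a_control, a_test):
    # DPPH scavenging (%) = (A_control - A_test) / A_control * 100
    return (a_control - a_test) / a_control * 100.0

conc = np.array([100.0, 200.0, 400.0, 800.0])   # ug/mL, hypothetical
a_ctrl = 0.80                                   # control absorbance at 520 nm
a_test = np.array([0.70, 0.58, 0.42, 0.20])     # hypothetical sample absorbances

pct = scavenging_pct(a_ctrl, a_test)
ic50 = np.interp(50.0, pct, conc)  # requires pct to be increasing
print(f"IC50 ~ {ic50:.0f} ug/mL")
```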
Analysis of antibacterial properties
For the extract and nanoparticle sensitivity tests, E. coli, B. subtilis, and S. aureus were employed. The antibacterial properties were studied using agar disc/well diffusion techniques. In the well diffusion method, a Pasteur pipette was used to form 6 mm wells in the culture medium with consistent spacing. In the disc diffusion technique, 6 mm blank discs were placed on the agar culture medium. The wells and discs were then loaded with 60 μL of different dilutions of AgNO3, C. occidentalis extract, and AgNPs. Tetracycline (10 mg/mL) and ciprofloxacin (10 mg/mL) were employed as positive controls (PC-1 and PC-2) in this investigation, with distilled water serving as a negative control. After 24 h of incubation at 37 °C, the growth inhibition zones were measured.
Statistical analysis
The experiments were replicated three times, and the data obtained were entered into STATISTICA 7.0 (StatSoft) for analysis.
Synthesis of AgNPs from C. occidentalis
During the synthesis of silver nanoparticles, the addition of AgNO3 to the prepared extract induced a noticeable shift in the solution's color to yellowish brown, indicating the formation of AgNPs. The pH of the reaction mixture was recorded as 8.0. This color change serves as a primary indicator of nanoparticle formation in the suspension. The resulting mixture then underwent centrifugation at 12,000 rpm for 20 min to facilitate phase separation. The sediment obtained was washed three times with deionized water and once with ethanol to effectively remove any residual biological impurities.
UV-visible spectrophotometer
The AgNPs from C. occidentalis were synthesized in solution and confirmed in the 200-700 nm range using a UV-visible spectrophotometer (Shimadzu UV-1800). The spectra of the C. occidentalis seed extract are shown in
Fig. 1A, whereas the spectra of an aqueous solution containing AgNPs are shown in Fig. 1B. The color of the AgNPs in aqueous solution was yellowish brown, which also depends on the particle size. There was a single strong peak at 425 nm for the AgNPs, but two peaks at 220 and 248 nm in the extract spectra, suggesting the presence of polyphenols and phytochemicals in the solution.
SEM analysis of AgNPs
Surface morphological and nanostructural analyses were conducted using SEM, as depicted in Fig. 2. The SEM micrographs revealed the presence of numerous small aggregates of AgNPs, exhibiting a spherical shape and a polydisperse nature, thus providing insights into the morphology of the AgNPs.
TEM analysis of AgNPs
The size and morphology of the nanoparticles were determined using TEM. TEM images depicted the AgNPs as round, spherical, and occasionally oval-shaped, with slight agglomeration observed at specific locations, as illustrated in Fig. 3A,B. The synthesized AgNPs have particle diameters ranging from 6.44 to 28.50 nm. The histogram of AgNP sizes for C. occidentalis is shown in Fig. 3C. The particle sizes differ significantly.
EDX analysis
The elemental composition of the AgNPs is presented in Fig. 4, with EDX measurements conducted at 1-10 keV revealing the presence of Ag (10.01%), P (0.65%), S (0.45%), Cl (0.46%), and C (88.43%). The elemental peaks of Ag were identified at both 1 and 3 keV, providing comprehensive insights into the composition of the studied nanoparticles.
XRD analysis
XRD analysis was employed to investigate the crystalline nature and composition of the AgNPs, as well as the phase purity of the synthesized AgNPs. As illustrated in Fig. 5, the XRD pattern exhibited well-defined reflection peaks, confirming the crystalline nature of the synthesized AgNPs.
Antibacterial potential of AgNPs
The antibacterial efficacy of plant-derived AgNPs has been thoroughly explored against various microorganisms, as documented in previous studies 37,38. In this investigation, three pathogenic bacteria were employed to evaluate the antibacterial properties of the AgNPs, the plant extract, and the standard antibiotics tetracycline and ciprofloxacin (PC-1 and PC-2), as summarized in Table 1.
Possible mechanism of the antibacterial activity of silver nanoparticles
Antibacterial properties of AgNPs arise from various mechanisms, as illustrated in Fig. 8.
A. Disruption of cell membrane:
AgNPs can interact with and disrupt the cell membrane of bacteria. This interaction destabilizes the membrane integrity, leading to leakage of cellular contents and eventual cell death.
B. Generation of reactive oxygen species (ROS):
AgNPs can induce the generation of reactive oxygen species (ROS) within bacterial cells. ROS, such as superoxide radicals and hydrogen peroxide, cause oxidative damage to proteins, lipids, and DNA, ultimately leading to bacterial cell death.
C. DNA damage: AgNPs can penetrate bacterial cells and interact with DNA, leading to DNA damage. This interference with DNA replication and transcription processes can inhibit bacterial growth and viability.
D. Protein denaturation:
AgNPs can interact with proteins in bacterial cells, leading to their denaturation and loss of function. This disruption of essential cellular processes can impair bacterial growth and survival.
E. Inhibition of enzymatic activity:
AgNPs can inhibit the activity of essential bacterial enzymes, such as those involved in energy metabolism and cell wall synthesis. This disruption of enzymatic activity can compromise bacterial viability and survival.

For the synthesis, the extract was mixed with AgNO3 at a ratio of 1:9 (v/v). The shift from light yellow to dark brown served as an indicator of the surface plasmon resonance (SPR) of metallic silver, indicating the formation of AgNPs. This synthesis process suggests that the plant extract, rich in diverse phytoconstituents, acts as both reducing and capping agent 39. The pH of the reaction mixture was recorded as 8.0 at that point, aligning with findings from other studies. The highest SPR absorption, observed at 425 nm, indicated that the reaction had concluded when the color transitioned from light yellow to dark yellowish brown. The excitation of the UV-visible band imparts a yellowish-brown color to AgNPs in aqueous solution, with the specific shade dependent on the particle size 29. In previous studies, the SPR range of silver nanoparticles, typically between 410 and 450 nm, was associated with spherical nanoparticles 40,41. As reported in other publications, another study 42 identified a peak at 461.02 nm during the synthesis of AgNPs using the seed extract of C. occidentalis. Similarly, the utilization of Pyrostegia venusta and Passiflora vitifolia leaf extracts, containing a variety of phytochemicals, has been proposed for AgNPs synthesis 43,44.
The SEM micrographs revealed the presence of numerous small aggregates of AgNPs, exhibiting a spherical shape and a polydisperse nature, thus providing insights into the morphology of the AgNPs. Anandalakshmi et al. reported similar shapes, observing even, spherical AgNPs in SEM images of AgNPs biosynthesized from Pedalium murex leaf extract 45. Hemalata et al. also noted comparable shapes in SEM images of AgNPs biosynthesized from a Cucumis prophetarum leaf extract 46.
The TEM images depicted the AgNPs as round, spherical, and occasionally oval-shaped, with minor agglomeration observed at specific sites. Particle sizes ranged from 6.44 to 28.50 nm. In other reports, AgNPs synthesized from D. indica exhibited a spherical morphology with sizes of 10.0 to 23.24 nm. Although aggregation was evident in those AgNPs, a small fraction displayed dispersion and variations in size 39. Previous studies indicated that AgNPs derived from I. balsamina and L. camara leaf extracts exhibited spherical shapes with sizes of 10-30 nm and a polydisperse nature 47.
The EDX spectra of the AgNPs indicated that the sample comprised 10.01% silver, with a significant peak observed at 3 keV, suggesting the reduction of Ag+ ions to Ag°. Additionally, the EDX spectrum revealed the presence of carbon, sodium, chlorine, and other elements, along with additional metallic elements. Similarly, AgNPs derived from R. serrata flower bud extract exhibited a prominent signal for silver, as well as elemental peaks corresponding to phytomolecules, with additional peaks of carbon and oxygen observed 48. The EDX analysis of the AgNPs demonstrated a notable signal for silver along with other elemental peaks. These additional peaks may be attributed to the presence of phytomolecules on the external surface of the nanoparticles, which play a crucial role in capping and stabilization. Peaks indicating carbon, oxygen, and other elements may be attributed to atmospheric moisture content.
X-ray diffractometry confirmed the face-centered cubic crystal structure of the AgNPs. The XRD patterns exhibited reflection peaks at 33.15°, 39.02°, 45.65°, 65.19°, and 78.90° 2θ, corresponding to the (101), (111), (200), (220), and (311) Bragg planes, respectively. These results indicate the crystalline nature of the AgNPs suspension, consistent with findings from other studies. Another investigation into the production of silver nanoparticles from C. sativus revealed a similar XRD pattern, with crystalline phases associated with inorganic plant-extract components present on the surface of the synthesized AgNPs 5.
The antioxidant potential of the synthesized AgNPs, aqueous C. occidentalis seed extract, butylated hydroxytoluene (BHT), and AgNO3 was investigated using the DPPH free radical assay, a widely recognized method for assessing antioxidant activity. DPPH, being a stable compound, serves as a valuable tool in evaluating antioxidant capacity, as it readily accepts hydrogen or electrons. The IC50 value obtained from this assay serves as an indicator, with lower values indicating stronger DPPH scavenging activity. Our findings revealed that both the synthesized AgNPs and the aqueous extract possess significant free radical scavenging abilities. Interestingly, the AgNPs exhibited remarkable scavenging activity, comparable to that of BHT, and surpassed the C. occidentalis seed extract and AgNO3. These findings are consistent with previous research demonstrating the considerable antioxidant properties of Ag nanoparticles, which effectively neutralize various free radicals, including DPPH 49,50. Antioxidants play a crucial role in combating free radicals 51. The DPPH antioxidant assay is a well-established method known for its ability to assess the capacity of compounds to reduce free radicals 52,53. Stable free radical scavengers, such as DPPH, exhibit an absorbance at 517 nm and undergo a color change from violet to yellow during the reduction process 54. Free radicals induce cellular damage, posing health risks to both humans and animals 55.
According to the findings, AgNPs are a good material for use as antibacterial agents against pathogenic bacterial species, as also evidenced by previous research [56][57][58]. Recent findings indicate that silver and copper nanoparticles possess biocidal properties, making them suitable for use as antibacterial coatings on consumer goods 59. Research has demonstrated that silver nanoparticles can serve as effective antibacterial agents against both gram-positive and gram-negative bacterial infections 60,61. Silver nanoparticles interact with the bacterial cell wall in a natural manner, disrupting its integrity and causing the breakdown of phosphodiester linkages, ultimately leading to the bacterium's demise. Additionally, silver ions bind to crucial biological components such as sulfur, oxygen, and nitrogen, thereby impeding bacterial growth 62.
Potential biomedical applications
The versatility of AgNPs in diverse applications, including anti-diabetic, antiviral, antifungal, antibacterial, DNA cleavage, anti-aging, dye degradation, environmental assay indicator, plant growth, and antioxidant applications, as well as their protective role, is illustrated in Table 2. For example, AgNPs have exhibited promise in regulating glucose levels, presenting therapeutic advantages in diabetes management. Research suggests that AgNPs can modulate insulin signaling pathways, potentially enhancing insulin sensitivity and cellular glucose uptake, thus offering a novel approach for treating diabetes 63,64. AgNPs also demonstrate significant antiviral properties by disrupting viral attachment and entry into host cells. This mechanism has been investigated against viruses like HIV, influenza, and herpes simplex virus, indicating potential applications in the development of antiviral agents and coatings for medical equipment to mitigate viral transmission 65,66. The antifungal efficacy of AgNPs has been proven against various fungal pathogens, including Candida species, suggesting their potential utility in antifungal formulations for the treatment of fungal infections, especially in topical applications [67][68][69]. Renowned for their potent antibacterial properties, AgNPs demonstrate effectiveness against both Gram-positive and Gram-negative bacteria, which renders them promising candidates for the development of antimicrobial coatings, wound dressings, and antibacterial agents in medical settings 70,71. The capability of AgNPs to cleave DNA strands holds significant implications for genetic and molecular research, offering potential applications in targeted drug delivery, gene therapy, and as a tool for understanding DNA structure and function 72,73. The anti-aging properties of AgNPs, attributed to their capacity to scavenge free radicals and alleviate oxidative stress, present opportunities for potential use in skincare formulations; this application could aid in diminishing signs of aging, including wrinkles and fine lines 74. In environmental contexts, AgNPs have shown the capability to degrade synthetic dyes, proving valuable in environmental remediation endeavors.
Figure 3. TEM micrographs of the AgNPs from C. occidentalis; the scale bar corresponds to (A) 20 nm at 100,000× and (B) 20 nm at 40,000×. (C) Particle size histogram (nm) of the AgNPs.
Figure 7. Antibacterial potential of AgNPs and aqueous extracts of C. occidentalis, with the antibiotics tetracycline and ciprofloxacin as positive controls, against B. subtilis, E. coli and S. aureus.
Figure 8. A hypothetical illustration of the possible mechanisms of antibacterial activity of silver nanoparticles against bacterial cells.
Table 2. Impact of silver nanoparticles from Cassia occidentalis L. seed extract on multiple functions, including anti-diabetic, antiviral, antifungal, antibacterial, DNA cleavage, anti-aging, dye degradation, environmental assay indicator, plant growth, antioxidant, and protective roles.
"year": 2024,
"sha1": "38e28a4d45b6d6ae1ec528b4dd38ffec596af34a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-57823-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65145ef488e8d3dc75ba798670e76dcea0dad1a0",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The influence of internal climate variability on heatwave frequency trends
Understanding what drives changes in heatwaves is imperative for all systems impacted by extreme heat. We examine short- (13 yr) and long-term (56 yr) heatwave frequency trends in a 21‐member ensemble of a global climate model (Community Earth System Model; CESM), where each member is driven by identical anthropogenic forcings. To estimate changes dominantly due to internal climate variability, trends were calculated in the corresponding pre-industrial control run. We find that short-term trends in heatwave frequency are not robust indicators of long-term change. Additionally, we find that a lack of a long-term trend is possible, although improbable, under historical anthropogenic forcing over many regions. All long-term trends become unprecedented against internal variability when commencing in 2015 or later, and corresponding short-term trends by 2030, while the length of trend required to represent regional long-term changes is dependent on a given realization. Lastly, within ten years of a short-term decline, 95% of regional heatwave frequency trends have reverted to increases. This suggests that observed short-term changes of decreasing heatwave frequency could recover to increasing trends within the next decade. The results of this study are specific to CESM and the ‘business as usual’ scenario, and may differ under other representations of internal variability, or be less striking when a scenario with lower anthropogenic forcing is employed.
Introduction
Heatwaves (prolonged periods of anomalously warm temperatures; Perkins and Alexander 2013) inflict disastrous impacts on human health, infrastructure, and ecosystems (McMichael and Lindgren 2011, Welbergen et al 2008, Coumou and Rahmstorf 2012, Perkins 2015). Since at least 1950, increases in heatwaves have been observed over numerous regions (Della-Marta et al 2007, Perkins et al 2012, Russo et al 2014, Ding et al 2010). These observed trends in heatwaves are predominantly statistically significant (Perkins et al 2012), and anthropogenic climate change is a main contributor (e.g. Stott et al 2004, Christidis et al 2015), with projected future changes consistently indicating increasing trends (e.g. Meehl).

When investigating climate projections, traditional analysis is generally supported by multi-model ensembles such as the 5th Climate Model Intercomparison Project (CMIP5) global climate model archive (Taylor et al 2012). While CMIP5 and similar ensembles provide an estimate of the structural and parametric uncertainties surrounding climate projections (Taylor et al 2012), the internal variability of each participating model is almost certainly underrepresented. Numerous studies have demonstrated that even slight perturbations in a model's initial conditions, when all external forcings are constant, can result in very different trend estimates (e.g. Deser et al 2012, Perkins and Fischer 2013, Deser et al 2014) and overall changes in heatwaves (Kay et al 2015, Teng et al 2016). This is very important, since due to the inherent variability in the climate system, the same principle undoubtedly applies to observations. Therefore, just because we have measured one set of observed heatwave changes does not mean it is the only possibility. The frequency, duration and intensity of heatwaves vary markedly on interannual and interdecadal scales due to climate variability phenomena (Kenyon and Hegerl 2008, Parker et al 2014, Hoerling et al 2013, Perkins 2015). Thus, different representations of internal variability will likely influence the resulting trends. The present study explores the distribution of historical trends of global and regional heatwave frequency when accounting for the influence of internal climate variability. We consider short- and long-term trends to quantify the effect of variability on rates of change over different temporal periods (Marotzke and Forster 2015), as well as whether short-term trends can be indicative of the longer-term signal. Additional to previous studies (e.g. Deser et al 2012, Deser et al 2014, Marotzke and Forster 2015, Kay et al 2015, Teng et al 2016), we examine whether such trends are unprecedented due to the presence of human influence. We utilize observations and a 21-member ensemble of the global climate Community Earth System Model (CESM; see Fischer et al 2013), and consider regional and grid-box trends.
Data
To measure observed changes in heatwaves, we use the HadGHCND observational record, a 3.75° × 2.5° quasi-global daily dataset of maximum (T_max) and minimum (T_min) land temperatures (Caesar et al 2006, Perkins et al 2012). Since HadGHCND is incomplete in space and time, we only use grid boxes that have at least 55% of the total period between 1955-2009, and 5% of the total during 2000-2011 (Perkins et al 2012). The overall time period used, 1955-2011, is common between the observations and CESM (see below). We extract daily T_min and T_max from version 1.0.4 of the CESM climate model, which includes the Community Atmosphere Model version 4 at 1.875° × 2.5° global resolution (see Gent et al 2011). In addition to a 982 yr control simulation with no external forcing and greenhouse gas concentrations set to pre-industrial levels, this ensemble has 21 members, each driven by identical external forcings. From 1950-2005 all members are forced with historical anthropogenic greenhouse gas and aerosol concentrations, and natural forcings. From 2006-2100 prescribed RCP8.5 forcings are employed. Each member differs only in its initial conditions, where on the 1st of January 1950 random perturbations on the order of 10^-13 are imposed on the atmospheric temperature. Despite this minute alteration, a substantial amount of variability is induced across the ensemble, providing an ideal platform for this study. We exclude the first 5 yr of each historical simulation for spin-up, and concatenate the respective RCP8.5 simulation to provide data from 2006-2011, matching the length of the observations (herein referred to as 'forced' simulations). Employing a scenario with less anthropogenic greenhouse forcing (e.g. RCP4.5) would likely yield more subtle findings. However, we are limited to RCP8.5 as it is the only future scenario applied to CESM.
Calculating heatwaves
We use the Excess Heat Factor (EHF) heatwave definition (Nairn et al 2009, Perkins et al 2012, Nairn and Fawcett 2013), an operational heatwave index employed by the Australian Bureau of Meteorology. Comparisons of heatwave trends calculated from the EHF and indices based on T_min and T_max are detailed in Perkins and Alexander (2013). The EHF is based on two excess heat indices, EHI(accl.) and EHI(sig.), that are combined to derive the EHF:

EHI(sig.)_i = (T_i + T_i+1 + T_i+2)/3 - T_90i
EHI(accl.)_i = (T_i + T_i+1 + T_i+2)/3 - (T_i-1 + ... + T_i-30)/30
EHF_i = EHI(sig.)_i × max(1, EHI(accl.)_i)

where T_i is the average temperature for day i, and T_90i is the calendar-day 90th percentile, calculated from a 15 d window centered on T_i. The average temperature is the average of T_min and T_max within a 24 h cycle (9 A.M.-9 A.M.). EHI(accl.) describes the anomaly of a 3 d window against the preceding 30 d, and EHI(sig.) describes the anomaly of the same 3 d window against a climatological extreme threshold and flags particularly warm conditions. For a heatwave to occur, the EHF must be positive for at least three consecutive days (i.e. i, i+1 and i+2). For the observed data and the CESM realizations, a base period of 1961-1990 was used to define T_90i. For the control simulation, a 30 yr base period was selected at random, as there were no detectable differences between T_90i values from 500 randomly selected 30 yr periods. We consider heatwaves over a 5 month summer: May-September in the northern hemisphere and November-March in the southern hemisphere (Perkins et al 2012). The resulting record spans events commencing between 1955-2010 in the observations and realizations, and 981 yr in the control, since we omit the last year in the northern hemisphere to match the same timespan in the southern hemisphere. We analyse heatwave frequency using the seasonal total of heatwave days, where a heatwave day is part of at least three consecutive days of positive EHF values. Section S1 in the supplementary material provides a regional evaluation of CESM against HadGHCND observations.
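A compact sketch of the EHF and heatwave-day calculation described above, run on a synthetic daily temperature series; the single-valued T90 climatology and all inputs are simplifying assumptions:

```python
import numpy as np

def ehf_series(t, t90):
    """EHF_i = EHIsig_i * max(1, EHIaccl_i), with both indices using the
    3 d mean of T_i..T_i+2 and EHIaccl using the preceding 30 d mean."""
    ehf = np.full(len(t), np.nan)
    for i in range(30, len(t) - 2):
        three_day = t[i:i + 3].mean()
        ehi_sig = three_day - t90[i]
        ehi_accl = three_day - t[i - 30:i].mean()
        ehf[i] = ehi_sig * max(1.0, ehi_accl)
    return ehf

def heatwave_days(ehf):
    """Count days belonging to runs of >= 3 consecutive positive-EHF days."""
    days, run = 0, 0
    for positive in ehf > 0:
        run = run + 1 if positive else 0
        if run == 3:
            days += 3      # the run just qualified: count all three days
        elif run > 3:
            days += 1
    return days

rng = np.random.default_rng(0)
t = 25 + 5 * rng.standard_normal(150)   # synthetic daily mean temperatures
t90 = np.full(150, 30.0)                # simplified calendar-day 90th percentile
print("heatwave days:", heatwave_days(ehf_series(t, t90)))
```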
Trend analysis
Heatwave trends were calculated per decade using Sen's Kendall slope estimator, which is robust against outliers and non-normally distributed data (Sen 1968, Zhang et al 2005, Caesar et al 2011), both common characteristics of extremes. Grid box and regional average trends were calculated at the native resolution for HadGHCND and each model realization. Trends for all 21 'Giorgi' regions (Giorgi and Francisco 2000) were originally calculated. However, we discuss only Western North America, Northern Europe, East Asia and Australia, representing a variety of climates and differing influences of internal variability, as well as balancing the spatial constraints of this study. Trends are deemed significant at the 5% level, where the null hypothesis is no detected trend (i.e. a magnitude of 0).
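A minimal sketch of the per-decade Sen slope with a Kendall tau significance test, applied to a hypothetical annual series of heatwave days:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1955, 2011)
hw_days = 10 + 0.08 * (years - 1955) + rng.normal(0, 3, years.size)  # hypothetical

slope, intercept, lo, hi = stats.theilslopes(hw_days, years, 0.95)
tau, p_value = stats.kendalltau(years, hw_days)

print(f"trend = {10 * slope:.2f} days/decade "
      f"(95% CI {10 * lo:.2f} to {10 * hi:.2f}), p = {p_value:.3f}")
```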
The bulk of our analysis of forced trends is based on 1955-2010 (56 yr) and 1998-2010 (13 yr); the former is the longest possible period common across all datasets, and the latter covers a similar period over which the observed global average temperature trend was smaller than the long-term trend (e.g. Liebmann et al 2010, Trenberth and Fasullo 2013, Kosaka and Xie 2013, Marotzke and Forster 2015). These periods were also selected to analyse the role of internal variability on short- and long-term rates of change under observed external forcings. We first present rank histograms (Hamill 2001, Haughton et al 2014) to determine if CESM can capture the spatial pattern of observed trends. Rank histograms (figure 1) show the position of the observed trend against the 21 ensemble members in descending order.
To determine whether forced trends are unprecedented against background internal variability, we compare them to all 56 yr and 13 yr trends, respectively, calculated from the control. Regionally, we also investigate the minimum length over which a trend must be calculated to be indicative of the long-term (1955-2010) change in CESM. While previous methods have used trend significance (e.g. Liebmann et al 2010, Lewandowsky et al 2015) or other signal-to-noise analysis (e.g. Santer et al 2011), we adopt an alternative method analyzing trend magnitudes across all available temporal lengths. For each realization, we compute trends of 5 to 56 yr, where all trends truncate in 2010. To adequately sample the range of longer-term trends in CESM, trends of 51 to 56 yr length were aggregated across all realizations, resulting in a sample of 105 trends. For each realization we then calculate the trend commencement year from which all trends starting prior to this year consistently lie within the 1st and 99th percentiles of the aggregated sample. For example, if the resulting year was 1975, a trend spanning at least 1975-2010 is necessary to provide an adequate and stable estimation of overall long-term changes in heatwave frequency for the specific realization in question.

[Figure 1 caption, residual] The red dotted line indicates the percentage of grid boxes where CESM members are expected to be greater than the observed trends by chance. The ensemble tends to over/underestimate some long-term trends, and overestimate some short-term trends. However, over 75%-80% of all common areas, the model estimates observed changes in heatwave frequency reasonably well.
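A sketch of the stable-commencement-year procedure just described, with synthetic ensemble data standing in for the CESM trends; the helper names and noise model are assumptions:

```python
import numpy as np
from scipy import stats

def sen_slope(y, x):
    return stats.theilslopes(y, x)[0]

rng = np.random.default_rng(2)
years = np.arange(1955, 2011)
# 21 synthetic realizations: shared forced trend plus member-specific noise
ens = 10 + 0.08 * (years - 1955) + rng.normal(0, 3, (21, years.size))

# Aggregate 51-56 yr trends across members to bound the long-term signal
long_trends = [sen_slope(m[s:], years[s:]) for m in ens for s in range(6)]
p01, p99 = np.percentile(long_trends, [1, 99])

def latest_stable_start(member):
    """Latest start year such that every trend beginning at or before it
    (truncating in 2010) lies within the [p01, p99] envelope."""
    latest = None
    for s in range(len(years) - 4):            # trends of at least 5 yr
        if p01 <= sen_slope(member[s:], years[s:]) <= p99:
            latest = years[s]
        else:
            break
    return latest

print("member 0 latest stable start:", latest_stable_start(ens[0]))
```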
Section S3 in the supplementary material details regional heatwave trends in the CMIP5 ensemble (Taylor et al 2012). While the spread in CMIP5 trends is greater, this is likely due to the larger sampling of model configurations (e.g. physics, resolution, etc). The variability of CESM trends is within that of CMIP5, and centered on a similar median. The overall conclusions of our study are very similar across both ensembles; however, the quantitative results detailed below are specific to CESM, and could differ if another climate model (with an adequate number of realizations) were used.
Results and discussion
Rank histograms of short- and long-term heatwave frequency trends (figure 1) indicate that the ensemble is under-dispersive when compared with the observed spatial trend pattern. For long-term trends, the observed trend is larger than the entire CESM ensemble at almost 14% of grid boxes and smaller at 12% (figure 1(a)), indicating an underestimation of the range of forced changes by CESM. In figure 1(b), observed trends are smaller than the ensemble over almost 18% of grid boxes, indicating that CESM overestimates short-term changes in heatwave frequency. However, for the majority of grid boxes (the 76% or 82% not affected by an over- or under-estimation) the ranking of observations against CESM is within the model's uncertainty envelope (see supplementary material available at stacks.iop.org/ERL/12/044005/mmedia). This corresponds well to where the observed long- and short-term trends are within the CESM ensemble range (figures 2(a) and (b)), with the exception of parts of Eastern (Central) Asia in figures 2(a) and (b), where the observed trend is higher (lower).

[Figure 2 caption, partial] (b) same as (a) but for short-term trends; (c) 1st percentile of long-term trends from the externally forced 21-member CESM ensemble; (d) same as (c) but for short-term trends. Units are days/decade. (e) percentage of forced long-term trends greater than the control; (f) same as (e) but for short-term trends; (g) percentage of significantly increasing forced long-term trends; (h) same as (g) but for short-term trends.

While some improvements could be made in the
simulation of the entire large-scale spatial pattern of heatwave frequency trends, CESM is appropriate for demonstrating the influence of internal variability on heatwave frequency over most global regions.
Spatially, there are clear differences in observed heatwave frequency trends across the two time periods (1955-2010 and 1998-2010): both the direction and magnitude of change can be drastically different, indicating that shorter-term trends (figure 2(b)) are not indicative of long-term changes (figure 2(a)). It is clear that, for most regions, there is an increase in heatwave frequency over 1955-2010, but this is not always reflected in short-term trends. Moreover, even regional long-term trends can be anomalously small. For instance, a 'warming hole' in heatwave frequency trends is detected over the U.S., although over a different area than previously documented for mean temperature (Pan et al 2004). The absence of pronounced increases is also evident elsewhere in both short- and long-term trends (figures 2(a) and (b)). The cause of the warming hole in seasonal mean temperatures is currently debated, with some studies suggesting this phenomenon exists because of a change in variability (Meehl et al 2015). Our results indicate a similar influence of variability on heatwave trends.
Further, figure 2(c) demonstrates that no or very small trends (±0.5 d decade^-1) in heatwave frequency were, although improbable, feasible during 1955-2010 over large regions, as indicated by the ensemble 1st percentile. This suggests that, under recent anthropogenic forcing, internal climate variability could have masked the underlying increasing trend, for which the median trend of CESM is 1-4 d decade^-1 and the 99th percentile trend is 2-6 d decade^-1 (not shown). Similarly, figure 2(d) demonstrates that largely decreasing trends, generally between 5-20 d decade^-1, were possible over 1998-2010. Note that the respective trends in figures 2(c) and (d) are not physically consistent: the trends in one region may not occur under the same internal variability conditions as another. However, our results suggest that internal variability can render short-term declines and longer-term pauses in heatwave frequency physically possible under observed anthropogenic forcing.
For large parts of Africa, the Maritime Continent, Central and North America, the Mediterranean, and Eurasia, all longer-term forced trends are unprecedented compared to pre-industrial conditions (figure 2(e)). Over all other regions, typically 30%-70% of forced trends exceed the range of trends under pre-industrial conditions. Short-term forced trends are less likely to be outside the range of unforced trends (figure 2(f)), though there are instances when a notable percentage of forced trends are (the Tropics and central Russia). For both long- and short-term trends, regions where forced trends are largely outside the range of the unforced trends are also more likely to be significantly positive (figures 2(g) and 2(h), respectively). Over tropical regions, where internal variability is typically low, trends do not have to be significantly increasing to be outside the range of unforced trends.
Over the Middle East, southern Russia, western Africa, the tropics and North America (figure 2(e)), forced long-term heatwave trends are very likely (>90% occurrence) to be increasing faster than would be expected without anthropogenic influence. For all other regions most of the long-term forced trends are unprecedented, though a small number are indistinguishable from the pre-industrial control. The percentage of unprecedented forced trends is of course smaller for short-term trends (figure 2(f)). However, >50% of short-term trends are unprecedented over some regions (e.g. Central America and central Russia). Figure 3 demonstrates when short- and long-term heatwave frequency trends are consistently unprecedented against pre-industrial conditions. Similar to figure 2(e), most long-term trends commencing in 1955 or later are already unprecedented; however, trends in regions higher than 60° N are unprecedented when commencing between 1990-2005 or later (figure 3(a)). Over central Australia, this applies to trends generally commencing between 1985-2015, and 1975-1990 over the eastern United States. For short-term trends (figure 3(b)), consistently unprecedented trends appear between 2010-2030 over tropical regions, and 1990-2010 for most other regions. Therefore, as anthropogenic forcing increases, short- and long-term changes in heatwave frequency will become exceptionally more rapid: not only will we experience completely new climates in the coming decades, we will reach novel heatwave conditions at unmatched speeds. This result is additional to emergence studies (e.g. Diffenbaugh and Scherer 2011, Hawkins and Sutton 2012, King et al 2015), where new seasonal climate regimes are expected by 2020-2040 over the tropics and 2060-2070 over the mid-latitudes (Diffenbaugh and Scherer 2011).
The stark differences between short- and long-term trends are further evident at the regional level (figure 4). For selected regions (Giorgi and Francisco 2000), the ranges are larger for forced short-term trends (figures 4(a), (d), (g) and (j)) than long-term (figures 4(b), (e), (h) and (k)). Moreover, forced short-term trends display a larger spread than corresponding pre-industrial trends, indicating anthropogenic influence increases uncertainty in short-term changes in heatwaves, despite a general skewness towards positive trends. The occurrence of unprecedented trends (modest or little overlap between forced and pre-industrial trends) is more evident than at the grid box level in figure 2, since smaller-scale variability is removed.
Similar to figure 2, it is unlikely that observed short-term trends exceed those expected under pre-industrial conditions, even in cases where the regionally-averaged observed trend is relatively large (e.g. Australia, figure 4(j)). The opposite is true for regional long-term trends (figures 4(b), (e), (h) and (k)), where observed changes are mostly unprecedented. For Western North America (figure 4(b)), East Asia (figure 4(h)) and Australia (figure 4(k)), there is almost no overlap between the forced and pre-industrial distributions, indicating an unprecedented shift towards faster rates of change in regional heatwave frequency.
It is clear that short-term trends are not indicative of the long-term trends, so how long does a heatwave trend need to be in order to be robust? Across all regions there is considerable spread within the ensemble on the latest year a trend can commence to represent the long-term trend (figures 4(c), (f), (i) and (l)). In all cases, there is at least one realization where the latest starting year occurs before 1960, indicating trends should be measured over at least 50 yr to be a stable representation of the regional long-term change. Over North Europe (figure 4(f)) and Australia (figure 4(l)), some realizations estimate long-term representative trends from as late as 1995, where the respective influence of internal variability is likely much smaller. While outside the scope of this study, future work could examine physical reasons and the role of climate modes (e.g. El Niño/Southern Oscillation, Pacific Decadal Oscillation) on the ranges of regional heatwave trends under identical anthropogenic forcing.
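One way to operationalize the "latest commencement year" described above is sketched below: for each candidate start year, fit a trend from that year to the end of the series and ask whether it matches the full-period trend in sign and magnitude. The tolerance, the minimum window length, and the matching criterion itself are assumptions made for illustration; the paper's exact definition may differ.

```python
import numpy as np

def latest_representative_year(years, series, tol=0.5, min_window=10):
    """Latest start year from which the fitted trend still matches the
    full-period trend (same sign, within `tol` d/decade).

    The criterion here is an assumed simplification; a stricter version
    could require every earlier start year to qualify as well.
    """
    full = 10.0 * np.polyfit(years, series, 1)[0]
    latest = years[0]
    for i, y0 in enumerate(years[: len(years) - min_window]):
        t = 10.0 * np.polyfit(years[i:], series[i:], 1)[0]
        if np.sign(t) == np.sign(full) and abs(t - full) <= tol:
            latest = y0  # keep scanning for the latest qualifying year
    return latest
```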
It is logical that over larger regions shorter trends may be sufficient to detect a robust change, as variability is averaged out over larger spatial scales. Moreover, regions at lower latitudes could also produce shorter, yet stable trends, since the climate is less variable. Conversely, regions at higher latitudes could require longer trends to detect a signal (i.e. earlier commencement years) since variability is larger. However, figure 4 indicates that such situations are not the case, and the opposite of these expected patterns occurs. Within a region, figure 4 shows that the required trend length is likely longer than the 17 yr minimum found for global average temperature (Santer et al 2011, Lewandowsky et al 2015). Moreover, the large spread in CESM suggests that the representation of internal variability is a crucial factor in determining the time required to measure a clear regional signal. This should be carefully considered when declaring observed trends as representative of an overall signal. Small or decreasing heatwave trends under historical forcing are consistent with the observational record of average temperature (Liebmann et al 2010, Trenberth and Fasullo 2013, Risbey et al 2014, Marotzke and Forster 2015). However, will short-term regional declines in heatwaves last under anthropogenic influence? Exclusive of Alaska and Northern Europe, all regions show an increasing 13 yr heatwave trend on average 5 yr after either no trend or a declining trend is detected (table 1). Similarly, almost all regions display at least a 50% chance of an increasing heatwave frequency trend 5 yr after a short-term decline commences. These results strengthen 10 yr after a short-term decline, where all regions display increasing trends on average. The chance of an increasing trend 10 yr after a short-term decline is mostly between 85%-95%. This striking result indicates that short-term periods of no or decreasing heatwave frequency are transitory under anthropogenic forcing.
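The table 1 statistics referenced in this paragraph amount to a windowed search over each ensemble member: find 13 yr windows with a zero or negative trend, then evaluate the 13 yr trend commencing 5 (or 10) yr later. A minimal sketch follows; the hiatus threshold (trend ≤ 0) and the strict 13 yr windowing are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def trend(years, x):
    """Linear trend in days per decade."""
    return 10.0 * np.polyfit(years, x, 1)[0]

def post_hiatus_trends(years, ensemble, win=13, lag=5):
    """For every `win`-year hiatus (trend <= 0) in every member, collect
    the `win`-year trend commencing `lag` years later; return the mean
    trend and the percentage of positive trends (cf. table 1)."""
    collected = []
    for member in ensemble:
        for i in range(len(years) - lag - 2 * win + 1):
            if trend(years[i:i + win], member[i:i + win]) <= 0:  # hiatus window
                j = i + lag
                collected.append(trend(years[j:j + win], member[j:j + win]))
    if not collected:
        return np.nan, np.nan
    collected = np.asarray(collected)
    return collected.mean(), 100.0 * (collected > 0).mean()
```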
Conclusion
This study investigated the effects of internal climate variability on heatwave frequency trends under pre-industrial and forced conditions. It built upon previous research investigating short- and long-term average temperature trends, where internal variability dominates on short timescales and anthropogenic forcing on longer timescales (e.g. Liebmann et al 2010, Meehl et al 2014, Risbey et al 2014, Marotzke and Forster 2015). While there is some evidence that the employed version of CESM underestimates the role of variability on the global scale (figure 1), this study demonstrates, for trends in heatwave frequency: (i) the failure of short-term trends to be robust indicators of longer-term changes; (ii) that small or decreasing short- and long-term trends are possible under historical anthropogenic forcing over most global regions due to internal variability; (iii) where historically-forced long-term CESM trends are unprecedented against background climate variability; (iv) the disparity among ensemble members on the required length of a regional trend to be considered indicative of the long-term signal; and (v) the high likelihood of regional trends regaining an increasing signal within 5-10 yr of a short-term decline commencing.
Despite the uniqueness of CESM in assessing trends over different realizations of internal variability, the quantitative results are specific to this model. Based on other physical representations of internal variability and climate sensitivity, the separation of forced trends from internal variability could occur at different dates in other models (see supplementary material; Hawkins and Sutton 2012). So while this study has demonstrated that anthropogenic influence will override heatwave trends that climate variability alone dictates (figure 3), the timing of such a change will ultimately be model-specific.
In conclusion, this study has demonstrated the considerable effect internal climate variability has on trends of heatwave frequency. It is clear that short-term trends vary in magnitude and direction more than long-term trends, and that short periods of decreasing heatwave frequency are possible under anthropogenic influence. However, anthropogenic influence is forcing heatwave trends, especially over the long term, towards unprecedented rates of increase. The study has found that the actual rate of change and its robustness largely depend on the realization of internal variability of the specific sample, and not just the physical in-built variability of CESM. Lastly, over all global regions, short-term declines are followed by increasing trends within 5-10 yr, suggesting regions that experienced a decrease in heatwave frequency over 1998-2010 will see an increase within the next decade.
Table 1. Average 13 yr trend 5 yr (column 1) and 10 yr (column 3) after a regional (rows; for region bounds, see Giorgi and Francisco 2000) 13 yr hiatus in heatwave frequency. Percentages of positive 5 yr and 10 yr trends are in columns 2 and 4, respectively. | 2019-05-19T13:05:00.688Z | 2017-03-28T00:00:00.000 | {
"year": 2016,
"sha1": "e9a29c6662a7059a1b08f4ce9e30d32741d2fb35",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-9326/aa63fe",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dc53586759b84870637b5469583226c11ca9aa01",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
262083029 | pes2o/s2orc | v3-fos-license | The effect of Pilates on pain during pregnancy and labor: a systematic review and meta-analysis
SUMMARY OBJECTIVE: This systematic review and meta-analysis study was conducted to reveal the effect of Pilates on pain during pregnancy and labor. METHODS: The PubMed, ScienceDirect, MEDLINE, Ovid, EBSCO, CINAHL Plus, Cochrane Library, and Google Scholar databases were used to access the articles published in international journals, and the Dergipark, Turkish Clinics, and ULAKBİM databases were scanned to access the articles published in national journals between October 30 and November 30, 2022. The data were analyzed using Review Manager 5.4. RESULTS: This study included four articles. According to the meta-analysis results, it was elucidated that Pilates exercise during pregnancy was not statistically effective in reducing pain during pregnancy (Z=0.61, p=0.54), but it was effective in reducing pain intensity during labor (Z=11.20, p<0.00001). CONCLUSION: This study concluded that Pilates exercise was not effective in reducing pain during pregnancy but was effective in reducing labor pain. There is a need for more research on the subject. PROSPERO Registration no: CRD42023387512
INTRODUCTION
Pilates is mainly a mind-body exercise done for muscle strength, flexibility, breathing, and posture. It concentrates on actively using the trunk muscles to stabilize the pelvic-lumbar region 1,2. Regular Pilates exercise has been shown to strengthen the transverse abdominal and pelvic floor muscles and enhance their structural function. Moreover, Pilates is considered an exercise of low-to-moderate intensity to relieve pain 3. It ensures flexibility, dynamic balance, and muscle endurance in healthy populations. It positively affects back pain, quality of life, balance, and physical and mood states 4.
As a result of postural changes caused by weakness of joints and ligaments and muscle-tendinous stretching during pregnancy, pregnancy-related musculoskeletal problems may arise 1,2,5. Furthermore, pregnancy-related musculoskeletal problems can be affected by the degree of physical activity, cultural influences, and environmental and hormonal changes. Relaxin, a hormone secreted by the placenta, especially in the late stages of pregnancy, loosens the ligaments in the pelvis for the labor process. Meanwhile, it triggers pregnancy-related pain by loosening the ligaments that support the spine 1,2. Pain is seen especially in the back, lumbar, pelvic, and extremity regions 1,6-8. Pain, which significantly affects the daily lives, mobility, and sleep of pregnant women and reduces their quality of life substantially, can reach quite serious dimensions with the progression of pregnancy 9,10. Pilates movements can be performed according to the physiological changes during pregnancy to overcome musculoskeletal problems 3,7. Pilates exercise during pregnancy prepares a woman for labor. Improving the flexibility of the trunk and pelvic floor muscles and ensuring correct breathing can facilitate the labor process 3. Moreover, Pilates has been shown to reduce labor pain 3,11.
It is assumed that the pain suffered during pregnancy and labor will decrease with correct performance of the muscle strengthening and breathing that form the basis of Pilates. When the effectiveness of the method is determined, it may be recommended more often to reduce low back pain in pregnant women and labor pain. This systematic review and meta-analysis study was conducted to reveal the effect of Pilates exercise on pain during pregnancy and labor.
Research questions
• What is the effect of Pilates exercise on pain during pregnancy?
• What is the effect of Pilates exercise on labor pain?
METHODS
A systematic review and meta-analysis study was conducted to reveal the effect of Pilates on pain during pregnancy and labor. The primary outcome of this study was the level of pain during pregnancy and labor, which was measured by a valid and reliable tool. The secondary outcome was an adverse event, which was also measured by a valid and reliable tool. In the preparation of the systematic review and meta-analysis, the criteria from PRISMA 12 and the Cochrane Handbook for Systematic Reviews of Interventions were used. Prior to the study, the subject of the study and whether it was among previously completed or ongoing studies was checked in the PROSPERO system.
The review of the articles included, the selection of the articles, the acquisition of the data, and the quality assessment were conducted independently by the first and second researchers, and all stages were checked by the third and fourth researchers. In case of any disagreement about a study, a meeting was held in which the four researchers participated together, disagreements were discussed, and a consensus was reached. Moreover, a pilot study was conducted, and a common road map was determined regarding all these stages in a session with the participation of the four researchers before initiating the study. Quasi-experimental studies, reviews, case reports, qualitative studies, unpublished theses, congress papers, and descriptive studies constituted the exclusion criteria of the study.
Review strategy
This systematic review and meta-analysis study was conducted between October 30 and November 30, 2022, in the form of a retrospective review of publications on the subject. The PubMed, ScienceDirect, MEDLINE, Ovid, EBSCO, CINAHL Plus, Cochrane Library, and Google Scholar databases were used to access the articles published in international journals, and the Dergipark, Turkish Clinics, and ULAKBİM databases were scanned to access the articles published in national journals.
The search was done in Turkish and English over the Istanbul University-Cerrahpaşa Internet access network using keywords such as (pregnancy OR antenatal period OR labor OR birth) AND (women OR pregnant women OR pregnancy) AND (Pilates) AND (pain OR low back pain OR labor pain). Furthermore, the reference lists of the studies included in the study were reviewed to identify additional studies.
Selection of studies
Studies to be included in this study were determined and selected independently by two researchers, considering the inclusion and exclusion criteria. The titles and abstracts of all studies were reviewed. The articles selected independently by the first and second authors were compared. In case of different views on articles, a joint decision was made by considering the views of the third and fourth authors.
Acquisition of study data
A data extraction form was created by the researchers to obtain the same information from each study included in the systematic review. The data extraction form included information about the author, year of the study, country, type of study, sample size, data collection tools, mean age of the pregnant women included in the study, data on the intervention, and information about pain.
Evaluation of the evidence quality of studies
Each study selected for inclusion was assessed by the first two authors with a critical appraisal checklist and checked by the third and fourth authors. The quality of the randomized controlled trials was assessed via the second version of the Cochrane risk-of-bias tool for randomized trials (RoB 2) 13.
Data analysis
In the meta-analysis, data analysis was performed using Review Manager 5.4 (The Nordic Cochrane Center, Copenhagen, Denmark). The heterogeneity between the studies reviewed was assessed by Cochran's Q and Higgins' I² tests, and I² higher than 50% was accepted as indicating significant heterogeneity. Accordingly, a random-effects model was used if I² was higher than 50%, and a fixed-effect model if it was <50%. To evaluate the study data, the standardized mean difference (SMD) and mean difference (MD) were used for continuous variables. All tests were calculated as two-tailed, and p<0.05 was accepted for statistical significance.
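The pooling logic described above can be sketched in a few lines: inverse-variance weighting, Cochran's Q and Higgins' I² for heterogeneity, and a switch to a DerSimonian-Laird random-effects model when I² exceeds 50%. This is a minimal re-implementation of what Review Manager does internally, assuming each study supplies an effect size and its standard error; the example inputs at the end are placeholders, not the data of the included trials.

```python
import numpy as np
from scipy import stats

def pool(effects, ses):
    """Inverse-variance pooling with Cochran's Q / Higgins' I^2;
    DerSimonian-Laird random effects if I^2 > 50%, else fixed effect."""
    e, se = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / se**2
    fixed = np.sum(w * e) / np.sum(w)
    Q = np.sum(w * (e - fixed) ** 2)
    k = len(e)
    I2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    if I2 > 50:  # switch to random-effects weights
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w = 1.0 / (se**2 + tau2)
    pooled = np.sum(w * e) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    z = pooled / se_pooled
    p = 2.0 * stats.norm.sf(abs(z))
    return pooled, z, p, I2

# Placeholder inputs (two hypothetical studies), not the trial data:
print(pool(effects=[-0.8, 0.1], ses=[0.45, 0.40]))
```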
Review findings
As a result of the literature review, 293 studies were reached at the first stage. After excluding duplicate records and the literature that met the exclusion criteria, and analyzing the titles and abstracts, 29 articles were selected for full-text review. After reviewing the full texts according to the inclusion criteria and adding other studies, four studies were determined for meta-analysis: two for pain during pregnancy and two for pain during labor. The primary outcome of this study was the level of pain during pregnancy and labor. There were no studies reporting adverse effects for the secondary outcome. Figure 1 shows the PRISMA flowchart for the selection process of the studies.
Quality assessment results of studies
The articles included in the study were assessed with the RoB 2 tool.While no high level of bias was observed in the studies, some concerns about bias were seen in one study, and a low risk of bias was observed in the other three studies (Table 1).
Characteristics of the studies
The total sample size of the studies included in the systematic review and meta-analysis is 204. All the studies included in this research were randomized controlled trials and were published in the English language (Table 1). (Abbreviations in Table 1: RCT, randomized controlled trial; VAS, Visual Analog Scale; PMI, Pregnancy Mobility Index; N/A, not applicable; SD, standard deviation.)
Characteristics of the intervention
The time of starting Pilates exercise differed among the studies. Pilates practice during pregnancy was performed in the second and third trimesters, and in three-quarters of the studies it was performed as two sessions per week for 8 weeks 2,3,11. The duration of the pregnant women's Pilates exercise differed between 30 and 70 min 2,3,11,14 (Table 1).
Meta-analysis findings
The Visual Analog Scale (VAS) was used for pain evaluation in three studies included in the meta-analysis 2,3,11, and the Pregnancy Mobility Index (PMI) 14 was used in one study (Table 1).
The effect of Pilates during pregnancy on pain during pregnancy
Two studies reviewed in this study included data on pain levels in pregnant women who did and did not do Pilates during pregnancy 2,14. According to the combined results of these studies, Pilates exercise during pregnancy was not statistically effective in reducing pain during pregnancy (SMD: -0.55, Z=0.61, p=0.54) (Figure 2).
The effect of Pilates during pregnancy on labor pain
Two studies reviewed in this study included data on labor pain in pregnant women who did and did not do Pilates during pregnancy 3,11. According to the combined results of these studies, Pilates exercise during pregnancy was found to statistically significantly reduce pain intensity during labor (MD: -1.21, Z=11.20, p<0.00001) (Figure 2).
DISCUSSION
This systematic review and meta-analysis study analyzed the effect of Pilates exercise on pain during pregnancy and labor. Pregnancy and labor are among the important and special periods experienced by women in their lives. Many changes considered normal may occur during these periods. However, these changes may lead to pain during pregnancy and labor 15,16.
Musculoskeletal system pain may be experienced during pregnancy, affecting the lower back, pelvic region, back, hip, and even wrists. As pregnancy progresses, a decrease is observed in the strength of the pelvic floor and abdominal muscles. Pilates exercise contributes to strengthening the pelvic floor and to preventing and treating dysfunctions caused by pregnancy 7. The current guidelines recommend moderate-intensity exercise during pregnancy 17,18.
According to the results of this meta-analysis, Pilates exercise was not effective in reducing pain during pregnancy. Similar to the results of this study, Mazzarino et al. reported that there was insufficient evidence that Pilates relieved low back pain during pregnancy 19. However, in the meta-analysis study by Mendo and Jorge, it was expressed that Pilates was useful against pain during pregnancy 7. The study by Sonmezer et al. stated that Pilates had a positive effect on reducing pain during pregnancy 2. The study by Canarslan and Albayrak revealed that Pilates reduced pain during pregnancy 20. In another study, Pilates was demonstrated to be an effective, healthy, and applicable method to reduce pain during pregnancy 1. Although there are studies stating that Pilates provides benefits regarding its effect on pain during pregnancy 1,2,7,20, more randomized controlled trials are needed on this subject.
It is reported that Pilates exercise during pregnancy is beneficial in terms of preparing low-risk pregnant women for labor 14. As a result of enhancing the flexibility of the trunk and pelvic floor muscles and improving proper breathing, Pilates exercise can facilitate the labor process and reduce pain during labor 3,21.
In the current meta-analysis study, Pilates exercise during pregnancy was found to be effective in reducing pain intensity during labor. In their study, Rodríguez-Díaz et al. showed that 8-week Pilates exercise during pregnancy reduced pain during labor and the use of analgesics, which resulted in significant improvements in labor 21. A study emphasized that regular Pilates exercise during pregnancy strengthened the pelvic floor muscles, reduced pain, and decreased the need for epidural anesthesia during labor 3. Another study elucidated that the Pilates group felt less pain during labor compared to other groups 11.
Limitations
The strengths of this study are that there is no high level of bias in the studies included in the systematic review and meta-analysis, the results are based on reliable analysis methods, the subject is evaluated from different perspectives, and the results obtained are supported by the results reported in previous studies. A limitation of this study is that the search was done only in the Turkish and English languages.
CONCLUSION
As a result of this systematic review and meta-analysis, it was found that Pilates exercise during pregnancy was not effective in reducing pain during pregnancy, but it was effective in reducing labor pain. These results are valuable since they include the results of randomized controlled trials on the subject.
Pilates, which is a low- to moderate-intensity exercise, is a low-cost, easily applicable, non-pharmacological method with no side effects. Therefore, it is important to increase the use of Pilates exercise during pregnancy and to inform and educate healthcare professionals on this subject. To identify the effect of Pilates exercise on pain during pregnancy and labor, it is recommended to conduct randomized controlled quantitative and qualitative studies that reveal the experiences of pregnant women.
Inclusion and exclusion criteria were set considering the components of the research problem (PICOS). Accordingly: Population (P): pregnant women; Intervention (I): Pilates; Comparison (C): women who did not do Pilates during pregnancy; Outcomes (O): pain; Study design (S): randomized controlled trials published in Turkish and English between 1984 and 2022.
Figure 2. Meta-analysis results regarding the effect of Pilates on pain during pregnancy and labor.
Table 1. Characteristics of the studies. | 2023-09-22T05:04:53.201Z | 2023-09-18T00:00:00.000 | {
"year": 2023,
"sha1": "03b5cf9b428ba38d10607a442cb286eca40068e3",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ramb/a/w3bqMk5xgn9kpcQ7TnWjRhs/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03b5cf9b428ba38d10607a442cb286eca40068e3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210334471 | pes2o/s2orc | v3-fos-license | Pepsin promotes laryngopharyngeal neoplasia by modulating signaling pathways to induce cell proliferation
Pepsin plays an important role in laryngopharyngeal reflux (LPR), a risk factor for the development of hypopharyngeal squamous cell carcinomas (HPSCC). However, the role of pepsin in HPSCC is not clear. We show by immunohistochemistry that pepsin positivity occurs in a significant proportion of human primary HPSCC specimens, and in many cases matched adjacent uninvolved epithelia are negative for pepsin. Pepsin positivity is associated with nodal involvement, suggesting that pepsin may have a role in metastasis. Treatment of FaDu cancer cells with pepsin increased cell proliferation, possibly by inducing G1/S transition. We also observed significant changes in expression of genes involved in NF-kappaB, TRAIL and Notch signaling. Our data suggest that pepsin plays an important role in HPSCC and that targeting pepsin could have potential therapeutic benefits.
Introduction
Hypopharyngeal squamous cell carcinomas (HPSCC) have the worst prognosis among head and neck squamous cell cancers (HNSCC), likely because patients already present with late stage disease at the time of diagnosis [1]. An important risk factor for the development of extra-esophageal cancers is laryngopharyngeal reflux (LPR) [2], in which pepsin is believed to have an important role [3-5]. In a case-control study conducted by Sereg-Bahar and colleagues [6], total pepsin in the saliva of patients with HNSCC was found to be significantly higher than that of control subjects. No significant differences in pH were observed, suggesting that nonacidic pepsin reflux is associated with HNSCC. Pepsin is taken up by laryngeal epithelial cells by receptor-mediated endocytosis at neutral pH and detected in intracellular vesicles of low pH, such as Golgi bodies [7]. In vitro studies have shown that pepsin can induce a dose- and time-dependent increase in proliferation of hypopharyngeal cells in parallel with changes in expression of microRNA and genes known to be involved in tumorigenesis [8]. Treatment of cells with nonacid pepsin also increased anchorage-independent growth and migration, demonstrated by an increase in colony formation [9].
Taken together, these data strongly suggest that chronic exposure to pepsin promotes the development of laryngopharyngeal cancer.
The role of pepsin in laryngopharyngeal carcinogenesis remains unclear. Reflux esophagitis is associated with the progression of Barrett's metaplasia to esophageal adenocarcinomas [10,11], with a potential involvement of NF-κB signaling [12]. It has been speculated that refluxed pepsin and bile stimulate the release of inflammatory cytokines from esophageal squamous cells, resulting in recruitment of lymphocytes to the submucosa and subsequently to the luminal surface of the esophagus [10]. These events lead to activation of NF-κB signaling in Barrett's cells, enabling these cells to resist apoptosis in spite of DNA damage [10]. Treatment of FaDu cells with pepsin in a nonacidic environment induced the expression of several proinflammatory cytokines and receptors, including those involved in inflammation of the esophageal epithelium in response to reflux [13].
In this study, we evaluated primary human HPSCC and adjacent noninvolved tissues for pepsin staining by immunohistochemistry (IHC). We also investigated the in vitro effect of nonacidic pepsin on signal transduction pathways and cellular functions in an effort to understand the role of pepsin in HPSCC pathogenesis.
Patients and tissue samples
Primary HPSCC specimens were obtained from 70 patients at the First Affiliated Hospital of Jilin University (Changchun, China) between August 2013 and August 2016. The inclusion criteria for patient selection were: 1) previously untreated hypopharyngeal cancer, 2) histologically confirmed squamous cell carcinoma, and 3) no distant metastasis at the initial visit. Of the 70 patients, 68 were men and 2 were women. The median age of patients was 54.5 years (range, 45-76 years). Tumor samples were obtained from radical resection of the HPSCCs. Pathological evaluation indicated that these tumors ranged from stage I to stage IV. Uninvolved tumor-adjacent tissues were obtained 1 cm away from the cancers and were confirmed as non-cancerous by a certified pathologist. For negative control samples, we used mucosa from 4 pediatric patients who received tonsillectomy and had no clinical signs or symptoms of LPR as determined by the reflux finding score (RFS) and the Reflux Symptom Index (RSI) questionnaire. For positive controls, normal stomach tissues were obtained from patients with esophageal cancer. All tissue samples were formalin-fixed and paraffin-embedded (FFPE). This study was approved by the independent ethics committee of Jilin University under project number 2013/091 (June 2013). Written consents were obtained from all patients involved in the study.
Immunohistochemical analysis of pepsin expression
IHC was performed using the SP-kit (Bioss, Beijing, China) following the instructions of the manufacturer. Briefly, 3.0 μm sections were placed on glass slides, dewaxed, and rehydrated. Antigen retrieval was performed in citrate-buffered saline using a microwave oven for heating. Endogenous peroxidase activity was blocked by incubating sections in 3% H2O2 for 15 min at RT. Sections were blocked in 5% goat serum for 10 min and incubated with rabbit polyclonal anti-pepsin primary antibody (1:100, EIAab Science, China) overnight at 4 °C. Sections were washed 3 times in PBS, incubated with biotinylated goat anti-mouse secondary antibody for 20 min, and then with streptavidin-HRP conjugate for 20 min. Sections were thoroughly washed in PBS and incubated with DAB substrate for 10 min for signal detection. Negative and positive control samples (see above) were routinely included in the IHC analysis. Table 1 outlines the scoring system used to assess pepsin staining in primary HPSCC specimens. The scoring was based on a combination of staining intensity and percentage of pepsin-positive tumor cells. Specimens with a point score of 3 or higher were considered pepsin positive.
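As a concrete illustration of a composite IHC score of this kind, the sketch below combines an intensity grade with a binned percentage of positive cells and applies the cut-off of 3 mentioned above. Table 1 itself is not reproduced in this text, so the bin boundaries and the additive combination are assumptions made for illustration only.

```python
def pepsin_ihc_score(intensity, pct_positive):
    """Composite IHC score: staining intensity (0=none, 1=weak,
    2=moderate, 3=strong) plus a binned percentage of pepsin-positive
    tumor cells. The bin edges below are illustrative, not the paper's
    Table 1 values."""
    if pct_positive < 5:
        pct_score = 0
    elif pct_positive < 25:
        pct_score = 1
    elif pct_positive < 50:
        pct_score = 2
    elif pct_positive < 75:
        pct_score = 3
    else:
        pct_score = 4
    return intensity + pct_score

def is_pepsin_positive(intensity, pct_positive):
    """Apply the cut-off stated in the text: score >= 3 is positive."""
    return pepsin_ihc_score(intensity, pct_positive) >= 3

# Example: moderate staining in 30% of tumor cells -> score 4, positive.
print(is_pepsin_positive(intensity=2, pct_positive=30))
```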
Cell culture
Human HPSCC FaDu cells (ATCC, Manassas, VA) were grown in Minimum Essential Medium-Eagle with Earle's Balanced Salts, adjusted to 1.5 g/L sodium bicarbonate. The growth medium was supplemented with 0.1 mM nonessential amino acids, 1.0 mM sodium pyruvate, and 10% fetal bovine serum (ATCC). Cultures were incubated at 37 °C under 5% CO2 and sub-cultured when they reached 70% confluence.
Cell viability. FaDu cells were treated with porcine pepsin (0.2 or 0.4 mg/mL; Sigma-Aldrich, St. Louis, MO) for 0.5, 1, 2 or 4 hours in 96-well plates. Treatments were performed at pH 7, and 5 technical replicates were included for each treatment condition. After the treatment, cells were washed with PBS and grown in fresh growth media. Cell viability was determined using a CCK-8 solution (Beyotime Biotechnology, China) 24 hours after treatment. At a pre-determined assay time point, a 10% (v/v) CCK-8 solution was added to each well and incubated for 1 hour. Absorbance at 450 nm was measured in a Bio-Rad 480 microplate reader (Bio-Rad Laboratories, Hercules, USA). To determine the effect of irreversibly inactivated pepsin on cell proliferation, pepsin was inactivated at pH 8.0 for 15 minutes at 37 °C and returned to pH 7.0 for cell treatments.
Flow cytometry. FaDu cells were treated with porcine pepsin (0.2 mg/mL, pH 7.0, 37 °C) for 0.5 or 1 hour, washed three times in fresh media, and incubated for a further 24 hours in complete growth media. Cells were fixed in 70% ethanol, incubated with a propidium iodide/Triton X-100 staining solution containing RNase A (50 μg/mL PI + 200 μg/mL RNase A), and assessed for cell cycle distribution using the Click-iT EdU Alexa Fluor 647 Flow Cytometry Assay Kit (Beyotime Biotechnology, China) according to the manufacturer's instructions.
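Cell-cycle fractions of the kind reported later (G1, S, G2/M) are typically derived from the propidium iodide (DNA content) histogram. The sketch below uses naive gating around the G1 peak purely for illustration; real analyses fit histogram models (e.g. Watson or Dean-Jett-Fox), and the gate widths here are assumptions, not the settings used in the study.

```python
import numpy as np

def cell_cycle_fractions(dna_content, g1_peak=None):
    """Crude G1/S/G2-M fractions from PI fluorescence (DNA content).

    G2/M cells carry roughly twice the G1 DNA content; the 1.25x and
    1.75x gate boundaries below are illustrative assumptions.
    """
    x = np.asarray(dna_content, float)
    if g1_peak is None:  # locate the G1 peak as the histogram mode
        hist, edges = np.histogram(x, bins=200)
        g1_peak = edges[np.argmax(hist)]
    g1 = np.mean(x < 1.25 * g1_peak)
    g2m = np.mean(x > 1.75 * g1_peak)
    s = 1.0 - g1 - g2m
    return g1, s, g2m
```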
Human Signal Transduction PathwayFinder
The Human Signal Transduction PathwayFinder™ RT² Profiler™ PCR Array (PAHS-014Z, Qiagen, Frederick, MD, USA) was used to evaluate the expression of a panel of 84 genes representative of ten different signal transduction pathways in FaDu cells treated with pepsin. Total RNA was isolated from pepsin-treated FaDu cells and control cells using the Qiagen RNeasy Mini Kit, following the manufacturer's protocol. RNA was quantified using the Nanodrop 2000 (Gene Company Limited, Hong Kong, China), and quality was assessed based on the integrity of the 18S and 28S ribosomal RNA bands in 1% agarose gels. First-strand cDNA was mixed with 2× RT² SYBR Green qPCR Master Mix and ddH2O. qPCR was performed in the Applied Biosystems (ABI) 7500 system using the following conditions: 95 °C for 10 min followed by 40 cycles of 95 °C for 15 sec and 60 °C for 1 min. Each array contained five independent housekeeping genes (Actb, B2m, Hprt1, Ldha and Rplp1) that were used for data normalization.
Semi-quantitative analysis of immunofluorescent staining
FaDu cells were grown on slides, fixed in 4% paraformaldehyde in PBS for 15 min at RT, and washed with PBS. Cells were permeabilized with 0.2% Triton X-100 in PBS for 10 min. Cells were blocked with 5% goat serum for 1 hour and incubated with rabbit polyclonal anti-p21 (1:100; Bioss Antibodies), rabbit polyclonal anti-c-Myc (1:100; Bioss Antibodies), or rabbit monoclonal anti-NF-κB p65 (1:100, Abcam) overnight at 4 °C. The next day, cells were washed 3 times with PBS and incubated with goat Alexa 555-conjugated anti-rabbit IgG (1:400, Abcam) for 1 hour at room temperature in the dark. Cells were mounted in 70% glycerol and images were taken by laser confocal microscopy (FluoView FV1000; Olympus, Japan). Detection of the fluorescent intensity (FI) of FaDu cells stained with anti-p21 or anti-c-Myc antibodies was performed under a laser scanning confocal microscope. Positive signals were analyzed as mean fluorescent intensity (MFI) using the FV10-ASW 4.0 software (FluoView FV1000; Olympus, Japan). In brief, 100 cells from each treatment group were analyzed in a blinded manner. All images were captured under the same camera settings.
Western analysis
FaDu cells grown in 6-well plates were treated with pepsin (0.2 mg/mL) at pH 7.0 for 30 min. Levels of phosphorylated IκB and p65 were evaluated by Western analysis. Rabbit polyclonal anti-p65, anti-phospho-p65, anti-IκB, and anti-phospho-IκB antibodies were purchased from Abcam (Cambridge, UK). The secondary antibodies were goat anti-rabbit antibodies conjugated with horseradish peroxidase, purchased from Abcam (Cambridge, UK). Signals were visualized by ChemiDoc XRS+ using the Image Lab™ Software (Bio-Rad Laboratories, Munich, Germany). Protein levels were quantified by scanning densitometry.
Statistical analysis
Proliferation assays. Data from five biological replicates for dose-response experiments were analyzed by one-way analysis of variance and the Tukey multiple comparisons post-test. Data are expressed as mean ± standard deviation. Microarray data were normalized against the housekeeping genes by calculating the ΔCt for genes of interest. Fold changes in expression levels were analyzed using the RT² PCR array data analysis web portal version 3.5 (http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.php). Genes with more than a 1.5-fold change in expression levels between pepsin-treated and control groups were considered significant.
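The ΔCt normalization and 1.5-fold cut-off described here correspond to the standard 2^-ΔΔCt calculation. A minimal sketch follows, assuming each condition supplies mean Ct values per gene; the gene names and Ct values one would pass in are placeholders, not the study's raw data.

```python
import numpy as np

def fold_changes(ct_treated, ct_control, housekeeping, threshold=1.5):
    """2^-ddCt fold changes, normalizing each condition to the mean Ct
    of its housekeeping genes, and flagging genes beyond the 1.5-fold
    cut-off (up or down) used in the text.

    ct_treated / ct_control: dicts mapping gene name -> mean Ct.
    """
    hk_t = np.mean([ct_treated[g] for g in housekeeping])
    hk_c = np.mean([ct_control[g] for g in housekeeping])
    results = {}
    for gene in ct_treated:
        if gene in housekeeping:
            continue
        ddct = (ct_treated[gene] - hk_t) - (ct_control[gene] - hk_c)
        fc = 2.0 ** (-ddct)
        flagged = fc >= threshold or fc <= 1.0 / threshold
        results[gene] = (fc, flagged)
    return results
```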
Results

Pepsin staining in primary HPSCC tumors and adjacent epithelia
Levels of pepsin protein in human primary HPSCC and corresponding uninvolved adjacent tissues were assessed by IHC. As shown in Fig 1, gastric oxyntic mucosa and tonsil tissues were used as positive and negative controls, respectively. Pepsin staining was localized to the cytoplasm of tumor cells as well as adjacent epithelial cells (Fig 1). Table 2 summarizes pepsin staining in primary HPSCC and matched uninvolved adjacent epithelium specimens. Of the 70 paired specimens, 21 had positive pepsin staining in both tumor and adjacent tissues. We observed 18 cases where pepsin was detected in the tumor but not in the adjacent epithelium. Pepsin was present in neither tumor nor adjacent tissues in 31 cases.
Pepsin is associated with nodal metastasis in HPSCC patients
To understand the relevance of pepsin positivity in HPSCC pathogenesis, correlative studies were performed using available clinical and pathological data. As summarized in Table 3, there was no association between pepsin positivity and alcohol and tobacco consumption. There was also no association between pepsin staining and tumor stage and grade. On the other hand, we observed a statistically significant association between pepsin positivity and nodal involvement (P = 0.027, χ² test). Whereas 35% of HPSCC patients without nodal metastasis presented with pepsin-positive tumors, 64% of tumors from patients with nodal metastasis were positive for pepsin.
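A χ² test of this kind operates on the 2×2 table of pepsin status versus nodal status. The sketch below uses illustrative counts chosen only to be consistent with the reported proportions (roughly 35% vs 64% positive among 70 patients); they are not the paper's tabulated data, and the continuity correction is disabled since the quoted P value appears to come from an uncorrected test.

```python
from scipy.stats import chi2_contingency

# Rows: nodal metastasis (no / yes); columns: pepsin (negative / positive).
# Hypothetical counts consistent with ~35% and ~64% pepsin positivity:
table = [[13, 7],    # N0 patients
         [18, 32]]   # N+ patients
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p ~ 0.027
```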
Pepsin induced G1/S transition resulting in increased proliferation of FaDu cells
Treatment of FaDu cells with pepsin at a concentration of 0.2 mg/mL (pH 7.0) for 30 min resulted in a significant increase in cell number (Fig 2A). At this concentration of pepsin, extending the duration of treatment to 1-4 hours did not result in a further increase in cell number. When the concentration of pepsin was increased to 0.4 mg/mL (pH 7), treatment of FaDu cells for 2 hours also resulted in a significant increase in cell number (Fig 2B). However, there was no effect on cell number when cells were treated for 30 min, 1 hour, or 4 hours. Treatment of FaDu cells with irreversibly inactivated pepsin under the same conditions had no effect on cell number (Fig 2C and 2D). Therefore, subsequent experiments were performed at a pepsin concentration of 0.2 mg/mL. To further investigate the effect of pepsin on cell proliferation, we determined the cell cycle distribution of FaDu cells following pepsin treatment. Cells were treated with 0.2 mg/mL (pH 7) of pepsin for 30 min or 1 hour, fixed, and stained with propidium iodide for analysis by flow cytometry. When cells were treated with pepsin for 30 min, we observed a significant increase in the percentage of cells entering the S phase and a corresponding decrease in the percentage of cells in the G1 phase in comparison with control cells (P < 0.05, Fig 3). Treating cells with pepsin for 1 hour resulted in a significant decrease in the percentage of cells in the G1 phase (P < 0.05), but there was no significant difference in the percentage of cells in the S or G2/M phase when compared to controls (Fig 3).
Involvement of the endosome/lysosome in pepsin intracellular reactivation
To investigate a potential involvement of the endosomes in pepsin reactivation, pepsin-treated FaDu cells were stained with LysoTracker Red to track acidic organelles. We observed co-localization of LysoTracker Red and pepsin, consistent with localization of pepsin to the lysosome (Fig 4). Pepsin remained localized to the lysosome up to 36 hours post-treatment (Fig 4).
Gene expression readouts of signaling pathways in pepsin-treated FaDu cells
To investigate the effects of pepsin on signaling pathways, we used the Human Signal Transduction PathwayFinder RT² Profiler PCR Array to profile pepsin-treated and control FaDu cells. Expression of 84 genes was evaluated to provide readouts for a range of signal transduction pathways (S1 Appendix). Our experiments showed that pepsin treatment resulted in a >1.5-fold increase in expression of TNF-α and BCL2A1 compared to control cells (P<0.05, Table 4). In contrast, treatment with pepsin resulted in a 1.93-fold and 2.83-fold decrease in expression of TNFSF10 and HES5, respectively (P<0.05, Table 4).
Pepsin treatment activates NF-κB signaling in FaDu cells
To confirm our PCR array results, we used immunofluorescent staining and Western blotting to assess the protein levels of selected genes in FaDu cells treated with pepsin in comparison to control. Semi-quantitative analysis of immunofluorescent signals confirmed that treatment of FaDu cells with pepsin increased the protein levels of NF-κB p65, p21 and c-Myc (Fig 5, P < 0.05). We further investigated the effect of pepsin on NF-κB signaling by assessing levels of phosphorylated p65 and IκB in FaDu cells treated with 0.2 mg/mL of pepsin for 30 minutes. Western analysis showed that treatment with pepsin induced levels of phospho-p65 and phospho-IκB, consistent with activation of NF-κB signaling (Fig 6, P < 0.05).
Discussion
A high prevalence of LPR has been reported in patients with HPSCC, but whether it contributes to cancer growth and metastasis remains unclear. Pepsin is considered an important clinical marker for LPR when detected in the upper aerodigestive tract [14-16]. Pepsin induces the proliferation of hypopharyngeal cancer cell lines in a dose- and time-dependent manner [8], and has also been shown to inhibit apoptosis [9]. Using a hamster buccal model, Adams et al [17] demonstrated that chronic exposure to pepsin together with DMBA resulted in a higher incidence of dysplasia than DMBA treatment alone. Results from this study confirm a role for pepsin in the promotion of HPSCC development and further suggest that this may be driven in part by activation of NF-κB signaling. We evaluated pepsin expression in 70 primary human HPSCC specimens and matched adjacent uninvolved epithelial tissues by IHC. Pepsin was detected in 39 of the 70 HPSCC specimens (Fig 1, Table 1). Of the 39 pepsin-positive HPSCC specimens, pepsin was also detected in the adjacent epithelium in 21 of the cases. However, in 18 cases pepsin was detected in HPSCC but not in adjacent epithelial samples. Importantly, pepsin expression in primary HPSCC was associated with nodal involvement, suggesting that pepsin may play a role in metastasis. Because clinical follow-up data were unavailable at this time, we were unable to determine whether pepsin positivity is associated with disease-free or overall survival.
Our in vitro studies showed that treatment with pepsin at concentrations of 0.2mg/ml and 0.4mg/ml for 30 min induced proliferation of the FaDu hypopharyngeal cancer cells. Extending pepsin treatment time did not further increase cell proliferation.
Previous studies showed that pepsin is taken up by hypopharyngeal epithelial cells by receptor-mediated endocytosis at neutral pH and localizes to the late endosome and trans-reticular Golgi up to 6 hours post-treatment [18]. We observed that within cells pepsin is localized to the lysosome, where the pH is in the acidic range (~pH 4.0) [19], suggesting that the lysosome may be the organelle within which inactive pepsin is reactivated. Together with the observation that irreversibly inactivated pepsin did not induce cell proliferation, we speculate that the enzymatic activity of pepsin has an important role in inducing cell proliferation.
Data from our PCR arrays revealed that pepsin treatment altered the expression of the NF-κB signaling-related genes TNF-α, TNFSF10 (TRAIL), and BCL2A1 compared to control. TNF family cytokines trigger a variety of NF-κB-dependent responses that can be specific to both cell type and signaling pathway [20,21]. The roles of NF-κB in determining chronic inflammation and carcinogenesis have been well demonstrated [22], and both of these functions may be crucial to head and neck carcinogenesis. During the development of HNSCC, NF-κB is frequently upregulated from premalignant lesions to invasive cancer [23,24], and has been associated with tumor invasion and metastasis [25]. In an in vitro gastroduodenal reflux model, Sasaki et al [26] observed Bcl-2 overexpression and significant transcriptional deregulation of NF-κB-related genes with oncogenic function in hypopharyngeal cancer cells treated with acid/bile. In another in vitro study performed by Sasaki et al [27], weakly acidic pepsin (pH 5.0) and neutral pepsin (pH 7.0) were found to induce mild activation of NF-κB with an increase in TNF-α mRNA in human hypopharyngeal primary cells, which is in accordance with our study, but no oncogenic transcriptional activity was detected in their study. This could be explained by the possibility that the mild increase in NF-κB activity is related to a stress reaction [27] and that an in vitro cellular study cannot mimic the dynamic events in tissue responses to selected ranges of acidified pepsin.
NF-κB and TRAIL signaling pathways are important in the regulation of proliferation and apoptosis. NF-κB has many cellular functions and targeting NF-κB for therapeutic applications may lead to severe side effects. Targeting of TRAIL on the other hand can selectively induce apoptosis in cancer cells without affecting normal cells [28], indicating that TRAIL may be suitable target for anti-cancer therapy [29]. Using a panel of HNSCC cell lines, Ren et al [30] showed that targeting of TRAIL and Smac bypassed NF-κB activation to induce cancer cell death, raising the potential benefit of co-targeting strategies involving TRAIL for treating HNSCC.
We identified three target genes (HES5, HEY1, HEY2) of the NOTCH pathway as significantly altered at the RNA level in FaDu cells treated with pepsin (Table 4). NOTCH signaling is mediated through binding of ligands (JAG1 and -2, and DLL1, -3, and -4) to the NOTCH receptor (NOTCH1, -2, -3, and -4). We found that the expression of NOTCH1 and JAG1 was not affected by the pepsin treatment. After binding of ligand to NOTCH, the γ-secretase complex releases the NOTCH intracellular domain (NICD), which moves to the nucleus, resulting in the transcriptional activation of NOTCH target genes [31]. HES, HEY, CCND1, MYC, BCL-2, and p21 are among a large number of NOTCH target genes [32]. The role of NOTCH signaling in promoting or suppressing the development of HNSCC remains controversial [33]. HES5 has an important role in regulating mammalian neuronal differentiation and maintaining neural stem cells [34]. A recent study showed that HES5 silencing is an early and recurrent event in prostate tumorigenesis [35]. In addition, Upadhyay et al [36] showed that Notch pathway activation is essential for maintenance of stem-like cells in early tongue cancer, and the effect of Notch was enhanced by TNF-α [37]. However, Wirth et al [38] found that high levels of NOTCH1 mRNA are associated with better survival in HNSCC. Mutation studies in HNSCCs have identified loss-of-function mutations in the NOTCH signaling pathway [39], consistent with the observation that inactivation of canonical Notch signaling drives head and neck carcinogenesis in mouse models of keratinizing HNSCC [40]. Additional studies will be needed to elucidate the exact role of pepsin in modulating the NOTCH pathway.
Our research was performed in a cancer-derived cell line that might respond to pepsin differently compared with normal epithelial cells. This is one major limitation of our research and will be addressed in future studies using cultured primary epithelial cells.
Conclusion
Results presented in this study suggest that pepsin reflux induces a dose-dependent increase in proliferation of hypopharyngeal cancer cells, and this effect is mediated by the enzymatic activity of pepsin. Although the exact role of pepsin in hypopharyngeal cancer development is not fully understood, our data suggest that NF-κB, TRAIL and NOTCH signaling, representing major mediators of cell proliferation, differentiation and apoptosis, are likely to be involved. The development of pharmacological inhibitors to specifically target pepsin could potentially modulate these signaling pathways and have therapeutic value for treating HPSCC. | 2020-01-16T09:04:49.714Z | 2020-01-15T00:00:00.000 | {
"year": 2020,
"sha1": "a5d0d1ce5bab703f538c16c0d22d2c584c8901f8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0227408&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c028533e60d75ddd5b67a02ef35f34983f69788d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118953758 | pes2o/s2orc | v3-fos-license | The chemical structure of the Class 0 protostellar envelope NGC 1333 IRAS 4A
It is not well known what drives the chemistry of a protostellar envelope, in particular the role of the stellar mass and the outflows in its chemical enrichment. We study the chemical structure of NGC 1333 IRAS 4A in order to (i) investigate the influence of the outflows on the chemistry, (ii) constrain the age of our object, and (iii) compare it with a typical high-mass protostellar envelope. In our analysis we use JCMT line mapping and HIFI pointed spectra. To study the influence of the outflow on the degree of deuteration, we compare JCMT maps of HCO+ and DCO+ with non-LTE (RADEX) models in a region that spatially covers the outflow activity of IRAS 4A. To study the envelope chemistry, we derive empirical molecular abundance profiles for the observed species using the radiative transfer code RATRAN and adopting a 1D dust density/temperature profile from the literature. We compare our best-fit observed abundance profiles with the predictions from the time-dependent gas-grain chemical code ALCHEMIC. The CO, HCN, HNC and CN abundances require an enhanced UV field, which points towards an outflow cavity. The abundances (with respect to H2) are 1 to 2 orders of magnitude lower than those observed in the high-mass protostellar envelope (AFGL 2591), while they are found to be similar within factors of a few with respect to CO. Differences in UV radiation may be responsible for such chemical differentiation, but temperature differences seem a more plausible explanation. The CH3OH modeled abundance profile points towards an age of >4×10⁴ yr for IRAS 4A. The spatial distribution of H2D+ differs from that of other deuterated species, indicating an origin in a colder foreground layer (<20 K). The observed abundances can be explained by passive heating towards the high-mass protostellar envelope, while the presence of UV cavity channels becomes more important toward the low-mass protostellar envelope.
⋆ Based on Herschel observations. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Introduction
During low-mass (<2 M⊙) star formation a rotating cloud of gas and dust collapses under gravitational forces. The central protostar increases in mass through the accretion disk that surrounds it. The main mechanisms that retard the gravitational collapse are thermal pressure, magnetic fields, and turbulence (Hennebelle & Motte 2009; Evans 2011; Luhman 2012; Tan 2015). Turbulence can be enriched by energetic outflows from young stellar objects (YSOs), which may further trigger star formation in nearby gas (Quillen et al. 2005).
Molecular outflows are prominent during the earliest stages of star formation, especially when collimated jets are driven in the youngest (10³-10⁴ yr) embedded protostars (Arce et al. 2007). Class 0 protostars are still in their main accretion phase and they also drive the most powerful outflows. The impact of the ejected material on the surrounding cloud causes shock fronts. These lead to changes in the chemical composition and the enhancement of the abundance of several species in the surroundings. Fontani et al. (2014) have found enhancement of HDCO/H2CO towards the shock location of a Class 0 object, L1157-mm (d=250 pc), reporting a deuterated molecule as a shock tracer for the first time.
The strong outflow activity and winds that YSOs produce result in high-velocity gas, but also in the evacuation of regions near the protostar. Such cavities have previously been seen as a "hole" in the 1.3 mm continuum emission (e.g., near NGC 1333 IRAS 4 and SVS 13; Lefloch et al. 1998). UV radiation from the protostellar system (mainly due to accretion) is expected to play a crucial role in such environments (Stäuber et al. 2004; Visser et al. 2012).
During the cold and dense pre-collapse phase, molecular complexity increases by rapid ion-molecule gas-phase reactions followed by gradual freeze-out, build-up of ices (H2O, CO, NH3) and surface processes. While collapsing, the radiation that comes from the forming protostar heats the inner parts of the envelope, making surface radicals mobile and highly reactive. Later, these freshly formed complex ices thermally desorb, further boosting rich chemical processes in the gas and creating a "hot corino" (e.g., Ceccarelli 2004; Garrod & Herbst 2006; Ceccarelli 2008). Hot corinos refer to inner regions (<200 au) where the temperature rises above 100 K as a result of passive heating from the protostar. An analogous structure characterizes high-mass protostars. Previous independent studies of high- and low-mass protostellar envelopes suggest that there is an abundance difference of a few orders of magnitude in several species (e.g., H2CO, CH3OH; Maret et al. 2004, 2005). In addition, Herbst & van Dishoeck (2009) present complex organic molecule abundances relative to CH3OH that are similar (within factors of a few) for low- and high-mass YSOs.
In this work we are interested in a direct abundance comparison between high-mass and low-mass protostellar envelopes with the use of datasets from the same instruments and a similar methodology. In particular, we aim to answer a) what the chemical structure of low-mass protostellar envelopes is and how it compares to high-mass protostellar envelopes, b) how the temperature profile affects the abundance profile of several species in the inner envelope ("hot corino"), the freeze-out zones, and the outer parts of protostellar envelopes, c) how the outflows influence the chemistry of the surroundings of a protostellar envelope and what the role of outflow cavities is in the observed abundances, and finally, d) whether deuteration can be used as an outflow/shock tracer. For this purpose we use the low-mass protostellar envelope IRAS 4A, which appears as the brightest (sub)mm continuum object in the NGC 1333 IRAS 4 region and is classified as a Class 0 object (André et al. 1993). IRAS 4A is a prototypical, well-studied Class 0 object and of great interest as it is among the first (Mathieu 1994) proto-binary systems ever detected. NGC 1333 is one of the nearest (D=235 pc; Hirota et al. 2008) and youngest (<1 Myr; Gutermuth et al. 2008) star forming regions. IRAS 4A is a binary system, consisting of two deeply embedded Class 0 YSOs with a separation of 1.8″ (420 au at a distance of 235 pc). The binary nature of IRAS 4A was first observed in 0.84 mm CSO-JCMT interferometric high-resolution submillimeter continuum observations (Lay et al. 1995) and resolved at millimeter wavelengths using the BIMA array by Looney et al. (2000). The two sources were also found to share a common circumbinary envelope (Looney et al. 2003).
In addition, a spectral line and continuum survey using the SMA interferometer was performed by Jørgensen et al. (2007), where inverse P-Cygni 13CO 2-1 line profiles were found. These profiles indicate infall motions, which are also characteristic of the Class 0 stage. Di Francesco et al. (2001) reported inverse P-Cygni profiles in CS and H2CO, tracing high-density gas as observed with the IRAM Plateau de Bure interferometer. IRAS 4A has been suggested to have a "hot corino" (Maret et al. 2004). Multi-transition observations of species such as H2CO and CH3OH towards 4A revealed abundance enhancements in the warmest inner regions (>100 K) by up to 2 orders of magnitude (Maret et al. 2004, 2005). The same abundance enhancement can also occur in outflows on larger scales as a result of ice mantle sputtering in shocks (Bachiller & Pérez Gutiérrez 1997; Tafalla et al. 2000). Mantle sputtering is thought to play a role when outflow speeds reach about 10 km s⁻¹ and is independent of gas density. In faster shocks, with speeds as high as 20-25 km s⁻¹, the mantles vaporize completely (Guillet et al. 2011), while grain core sputtering is still inefficient. H2CO and CH3OH have been found to trace the outflow activity of IRAS 4A (Jørgensen et al. 2007; Koumpia et al. 2016), which makes it difficult to distinguish between a "hot corino" chemistry and the enhancement due to shocks caused by protostellar outflows.
The highly collimated outflows from IRAS 4A have been mapped in several CO transitions (Knee & Sandell 2000; Jørgensen et al. 2007; Yıldız et al. 2012). IRAS 4A shows two bipolar outflows, one with a N-S orientation and the other with P.A. ∼45°. The usual interpretation of the CO and SiO outflow has been that a N-S component arises in A2, which becomes bent to a P.A. ∼45° angle at a short distance to the north. There is no evidence for a flow from A1, the brighter of the millimeter sources. Marvel et al. (2008) have used water masers to trace the small-scale motions of the IRAS 4A outflows, with somewhat puzzling results. One interpretation has been that there may be a third component of the system quite close to A2, which drives the outflow.
Our study mainly aims to a) constrain the chemical structure of the protostellar envelope of IRAS 4A and compare it with that of a high-mass protostellar envelope, b) investigate the presence of deuteration towards the outflow, and c) investigate the relative influence of temperature and outflow activity on the observed abundance profiles. In this article we present HIFI and JCMT observations of a range of chemically diverse species towards IRAS 4A. To study the outflow chemistry, we estimate the excitation temperature and column density of H 2 CO in the envelope and outflow using population diagrams. We proceed with modeling our observed maps with RADEX in order to determine the DCO + /HCO + abundance ratio and test the enhancement observed by Fontani et al. (2014). To study the envelope chemistry, we apply a previously determined 1D physical model that takes into account the temperature and density gradients of the envelope, and we run RATRAN in order to fit our observations. We apply constant and jump-like abundance profiles. In addition, we run chemical models that similarly take into account the physical structure of the protostellar envelope and apply the abundance profiles we determine to our RATRAN models.
We compare this low-mass case with the high-mass case of AFGL 2591, for which a similar analysis was performed by Kaźmierczak-Barthel et al. (2015). Lastly, we try to constrain the chemical age of IRAS 4A from our time-dependent chemical models.

Observations and data reduction

Herschel HIFI: observations and reduction

The observations were carried out with the HIFI instrument on board the Herschel Space Observatory (Roelfsema et al. 2012). The spectral scan was performed in Dual Beam Switch (DBS) mode with a chop frequency of 0.17 Hz. Each sky frequency was covered four times to facilitate the double sideband deconvolution process. The instrument stability settings were calculated in HSPOT by fixing the minimum and maximum goal resolutions to 1.1 MHz and setting the 1 GHz reference option without continuum optimization.
The observations were processed with the pipeline at the Herschel Science Center with HIPE 7.1.0 and retrieved from the Herschel Science Archive. Further post-pipeline level 2 processing was done in HIPE 8.0. Since we are interested in full spectral coverage, only the WBS spectra were considered, although many HRS spectra in narrower ranges were also available. Spectral regions affected by 'spurs' and not automatically detected by the HIFI pipeline were flagged and ignored. Polynomial baselines were subtracted using the FitBaseline task by masking the lines interactively. The overlapping sidebands were deconvolved with the doDeconvolution task in HIPE by applying the default settings. The observed line intensities, in units of antenna temperature, were corrected for loss in the sidelobes by converting them to main beam temperatures using a main beam efficiency η mb (= B eff /F eff ) of 0.75 (Roelfsema et al. 2012).
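As an illustration, the last step amounts to a simple scaling of the spectrum. The sketch below assumes the spectrum is held in a NumPy array and that η mb = 0.75 applies across the whole band, which is a simplification of the frequency-dependent efficiencies; it is not the HIPE implementation.

```python
import numpy as np

def antenna_to_main_beam(t_a, eta_mb=0.75):
    """Convert antenna temperatures T_A* (K) to main beam temperatures T_mb (K)."""
    return np.asarray(t_a) / eta_mb

# Example: a 1 K antenna-temperature line becomes ~1.33 K on the T_mb scale.
print(antenna_to_main_beam([0.5, 1.0]))
```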
James Clerk Maxwell Telescope (JCMT): observations and reduction
These observations are part of the JCMT Spectral Legacy Survey (SLS; Plume et al. 2007). The Auto-Correlation Spectral Imaging System (ACSIS) was used at the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. We have 2′×2′ maps from the HARP-B instrument, which provides high velocity resolution (1 MHz, ∼1 km s −1 ). These observations cover the frequency window between 330 and 373 GHz. The angular resolution of the JCMT is ∼15″ at 345 GHz, which is equivalent to ∼3500 au at the distance of NGC 1333 IRAS 4 (Choi et al. 2004). The beam efficiency is 0.63 (Buckle et al. 2009).
Details regarding the reduction and line detections of this dataset can be found in Koumpia et al. (2016).
Line detections
The Single Side Band (SSB) H- and V-polarization HIFI spectra were searched independently for line detections. We consider as secure detections signals detected in both polarizations at > 3×RMS (RMS ∼ 0.01-0.04 K) after averaging, with widths of at least two channels, i.e., > 0.9 km s −1 (single channel: 1.1 MHz, ∼0.47 km s −1 ). Table A.1 presents the line list with secure detections. The detected lines were identified by producing single-temperature LTE models in CASSIS of the species detected in the JCMT Spectral Legacy Survey by Koumpia et al. (2016). The species identified following this process are: CO, 13 CO, C 18 O, CS, HCN, HCO + , N 2 H + , H 2 CO, CH 3 OH, H 2 S and H 2 O. We inspected our HIFI spectra for lines of SO, SO 2 , SiO, HNC, and H 2 O isotopologs, but found none. This is not surprising considering that the predicted intensity of the transitions in the HIFI range, assuming a beam filling factor of unity, is comparable to the measured RMS for HNC and a few orders of magnitude lower than the measured RMS for SO, SO 2 and SiO (RMS < 0.03 K). The transitions of these latter species are expected to show such weak lines in the 626-801 GHz regime mainly because of their a) high critical densities (>10 8 cm −3 ) and/or b) low Einstein coefficients (<10 −5 s −1 ) compared to the transitions observed in the JCMT regime (330-373 GHz).
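The detection criterion above maps naturally onto a small filter. The sketch below is a minimal illustration, assuming each candidate line has already been reduced to a signal-to-noise value per polarization and a fitted width; the array names and argument defaults mirror the text but are otherwise hypothetical.

```python
import numpy as np

def secure_detections(snr_h, snr_v, fwhm_kms,
                      snr_cut=3.0, min_width_kms=0.9):
    """Flag lines detected in both H and V polarizations above 3x RMS
    and wider than two channels (> 0.9 km/s)."""
    snr_h, snr_v = np.asarray(snr_h), np.asarray(snr_v)
    fwhm_kms = np.asarray(fwhm_kms)
    return (snr_h > snr_cut) & (snr_v > snr_cut) & (fwhm_kms > min_width_kms)

# Example: only the second candidate passes both the S/N and width cuts.
print(secure_detections([2.5, 6.0], [4.0, 5.5], [1.2, 1.5]))  # [False  True]
```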
The JCMT observations have been described in more detail in Koumpia et al. (2016), where the detected species towards IRAS 4A are also presented. In addition to the species detected in the HIFI survey, and excluding H 2 S and H 2 O, we detect the deuterated species DCO + , HDCO and D 2 CO, the S-bearing species SO, SO 2 and OCS, as well as SiO, HNC, CN and C 2 H. Finally, we detect o-H 2 D + emission in the vicinity of IRAS 4A, but not directly towards the source. The overall rms noise level in the JCMT data ranges between 0.005 and 0.05 K at a velocity resolution of 0.9 km s −1 .
To quantify the differences between the line profiles, they were decomposed with a simultaneous fit of three Gaussians: a broad emission, a narrow emission, and a narrow absorption component (Table A.1). Our observations show considerable variation in the widths of the broad emission components. With median FWHM values of 4.6 and 5.1 km s −1 , the H 2 CO and CH 3 OH lines are narrower than the H 2 O line (11 km s −1 ). The H 2 O line itself is in fact asymmetric, and can be fitted with an additional, narrower Gaussian (V LSR = 0.55 km s −1 , FWHM = 3.86 km s −1 ). Finally, it is worth mentioning that the peak positions of the narrow N 2 H + lines are similar to the absorption component seen in CO 6-5 and H 2 S (∼7.5 km s −1 ). Such variation in the line shapes can result from the different regions that these lines trace: a broad component is indicative of outflow activity, a narrow component arises from dynamically quiescent gas (i.e., the envelope), and the absorption is due to infall motions or the presence of foreground material. This foreground material can be a foreground cloud not necessarily bound to the source itself, but it can also be the external, colder part of the envelope, which absorbs the emission of the inner and warmer parts of the envelope.
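A minimal sketch of such a decomposition is given below, assuming the spectrum is available as velocity and temperature arrays; the initial guesses are illustrative only and the absorption component is encoded as a Gaussian with negative amplitude. This is not the exact fitting setup used for Table A.1.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, fwhm):
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def three_component(v, a1, v1, w1, a2, v2, w2, a3, v3, w3):
    # broad emission + narrow emission + narrow absorption (negative amplitude)
    return gauss(v, a1, v1, w1) + gauss(v, a2, v2, w2) + gauss(v, a3, v3, w3)

def decompose(velocity, t_mb):
    # Illustrative initial guesses: broad (~10 km/s), narrow (~2 km/s),
    # and an absorption dip near the systemic velocity (~7.5 km/s).
    p0 = [0.5, 7.0, 10.0, 1.0, 7.0, 2.0, -0.5, 7.5, 1.5]
    popt, _ = curve_fit(three_component, velocity, t_mb, p0=p0)
    return popt  # best-fit (amp, v0, fwhm) for each of the three components
```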
LTE column densities
The protostellar envelope of NGC 1333 IRAS 4A has previously been studied by various authors (e.g., Maret et al. 2004, 2005), which is not the case for its associated outflow. In this section we aim to estimate the column densities and excitation temperatures from the observed emission in the envelope and outflow of IRAS 4A and compare them. In our approach we also add the observed HIFI transitions that were not available in previous studies. A widely used method to estimate the column density of a molecule is the population diagram (Goldsmith & Langer 1999). When LTE applies, T ex equals the gas kinetic temperature; otherwise it provides only a lower limit.
The column density of the upper state N u and the rotational temperature T rot (=T ex ) are determined by

$$N_u = x\,\frac{\nu^{2}}{A_{ul}}\int T_{\rm mb}\,\mathrm{d}V, \qquad A_{ul} = \frac{64\pi^{4}\nu^{3}}{3hc^{3}}\,\frac{\mu^{2}S}{g_u}, \qquad \ln\!\left(\frac{N_u}{g_u}\right) = \ln\!\left(\frac{N_T}{Q}\right) - \frac{E_u}{k\,T_{\rm rot}},$$

where x = 8.591×10^{37} (= 8πk/hc^{3} in the adopted units), A ul is the Einstein coefficient of the transition, N u the column density of the upper energy level (cm −2 ), g u the degeneracy of the upper energy level, T mb the main beam temperature (K), dV the velocity range (km s −1 ), µ the dipole moment, S the line strength, ν the frequency (Hz), N T the total column density (cm −2 ), T rot the rotational temperature (K), Q the partition function and E u the upper energy level. Plotting ln(N u /g u ) versus E u /k results in a straight line with a slope of −1/T rot .
In non-LTE excitation the population of each level may be characterized by a different excitation temperature T ex (≠ T rot ). The population diagrams can also take into account optical depth and beam effects due to different angular resolutions among the lines by using the modified equation

$$\ln\!\left(\frac{N_u}{g_u}\right) = \ln\!\left(\frac{N_T}{Q}\right) - \ln C_{\tau} - \ln f - \frac{E_u}{k\,T_{\rm ex}},$$

where T ex is the excitation temperature, C τ = τ/(1 − e −τ ) is the optical depth correction factor, and f the beam dilution, defined as the size of the telescope beam over the size of the emitting region and assumed equal for all lines. A more detailed description of the method and formulas used can be found in Goldsmith & Langer (1999).
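To make the fitting step concrete, the sketch below implements a basic LTE rotation diagram: it converts integrated intensities to upper-level column densities and fits the straight line described above. It handles the optically thin case only (no C τ or f corrections), uses the CGS form of the relation via the Einstein coefficient, and all input arrays are hypothetical placeholders.

```python
import numpy as np

H = 6.626e-27    # Planck constant (erg s)
K_B = 1.381e-16  # Boltzmann constant (erg/K)
C = 2.998e10     # speed of light (cm/s)

def upper_level_column(w_K_kms, freq_hz, a_ul):
    """N_u (cm^-2) from integrated intensity W (K km/s), optically thin LTE:
    N_u = (8 pi k nu^2)/(h c^3 A_ul) * W."""
    w_cgs = np.asarray(w_K_kms) * 1.0e5  # K km/s -> K cm/s
    return 8.0 * np.pi * K_B * freq_hz**2 / (H * C**3 * a_ul) * w_cgs

def rotation_diagram(n_u, g_u, e_u_K):
    """Fit ln(N_u/g_u) vs E_u/k; returns T_rot (K) and the intercept ln(N_T/Q)."""
    y = np.log(np.asarray(n_u) / np.asarray(g_u))
    slope, intercept = np.polyfit(np.asarray(e_u_K), y, 1)
    return -1.0 / slope, intercept
```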
The velocity information of our lines allows us to fit multiple Gaussians and separate, to first order, the outflow activity from the emission that comes from the dynamically quiescent envelope. For a more accurate determination of the temperature, species with many observed transitions are preferred. We perform this analysis for H 2 CO, for which we have observed 12 transitions with JCMT (Koumpia et al. 2016) and HIFI (current study). In this approach we also include four extra transitions observed with IRAM and presented in Maret et al. (2004).
Figures 4 and 5 present the population diagrams for the envelope and the outflow, respectively, and the resulting excitation temperature (∼rotational temperature), column density, optical depth and size of the emitting area. We find T ex ∼ 35±0.5 K and N(H 2 CO) = 8.5 +1.2 −0.8 ×10 14 cm −2 for the envelope, and T ex ∼ 41.5±4 K and N(H 2 CO) = 1.2 +0.5 −0.2 ×10 15 cm −2 for the outflow. The error estimates have been computed during a χ 2 minimization procedure. Maret et al. (2004) modeled only the envelope and found a factor of 2 lower column density and a factor of 1.2 lower T ex . We attribute these differences to the use of our HIFI transitions, which probe denser/warmer regions. The excitation temperature provides only a lower limit for the gas kinetic temperature when the source is not in LTE. Our results point towards ∼20% lower T ex and ∼30% lower H 2 CO column density for the envelope compared to the outflow, which is significant given our error estimates (∼10% and ∼14%, respectively).

[Figs. 4 and 5, caption fragment:] ... transitions from Maret et al. (2004). The plot presents the resulting excitation temperature (T ex ∼ T rot ), column density (N tot ), optical depth (τ) and size of the emitting area. The red symbols represent the observed data, the green symbols the best-fit data, and the blue line the rotation diagram fit. The three right panels show the resulting values after using the χ 2 method to converge to the solution.
More recently, deuterated species have also been employed as shock tracers by Fontani et al. (2014). They found an enhancement of HDCO/H 2 CO (∼10%) towards the eastern wall of the cavity excavated by the shock associated with the Class 0 object L1157-mm. This is at least an order of magnitude larger than the HDCO/H 2 CO ratio of the surrounding material. We aim to study the distribution of the deuterated species in the region and examine the deuteration towards the outflow of NGC 1333 IRAS 4A, which is also a Class 0 object.
In search of ions and deuterated species towards the outflow
The outflow activity of IRAS 4A has been traced by many species. SiO is expected to be a tracer of outflow shocks due to sputtering of Si off dust grains (Schilke et al. 1997). Choi (2005) found it to trace the jet in the case of NGC 1333 IRAS 4A, presenting a map of SiO 1-0 that traces the outflow activity of IRAS 4A and the spatial distribution of a narrow line component offset at ∼7.6 km s −1 . Our dataset shows that SiO 8-7 has its primary peak at the shock position north of IRAS 4A (R1; Santangelo et al. 2014), while SO also emits significantly at that position (Figure 6). The same figure shows the integrated intensity map of C 2 H (core; from +5 to +9 km s −1 ), which was found to trace the envelopes of the three protostars. Its three peaks are in alignment with the continuum peaks as observed by Sandell & Knee (2001). Figures 8 and 9 present the spatial distribution of the observed deuterated species D 2 CO, HDCO, DCO + and H 2 D + , overplotted with C 2 H. HDCO and D 2 CO trace mainly the protostellar envelopes and their distribution covers part of the outflow-shock area (white rectangle) north-northeast of IRAS 4A (R1; Santangelo et al. 2014). DCO + traces the envelopes but also emits in a more extended area between the sources in the NW-SE direction. Interestingly, H 2 D + shows a very different spatial distribution compared to the other deuterated species. It does not follow the distribution of the envelopes and mainly emits in the NW-SE direction in the space between the three sources. The spatial distribution of o-H 2 D + as presented in Figure 9 appears to be parallel with the narrow component of SiO as presented in Figure 1 of Choi (2005). Lastly, van der Tak et al. (2002) detected the rare ND 3 species towards the DCO + peak ((+23″, −6″) offset from IRAS 4A), which is about 15″ to the south of the H 2 D + peak ((+38″, +9″) offset from IRAS 4A). Both ND 3 and H 2 D + (Fig. 7) are characterized by narrow emission (1.0 < FWHM < 1.7 km s −1 ), indicative of quiescent gas or an outflow component perpendicular to the line of sight.

Fig. 9. The spatial distribution of deuterated species in the IRAS 4 region. Integrated intensity map (core; from +5 to +9 km s −1 ) of C 2 H, which traces the envelope, in colors, overplotted with H 2 D + in green contours and D 2 CO in yellow contours. The contour levels are set to 0.014, 0.016, 0.023 and 0.03 K (rms ∼ 0.005 K).
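The integrated intensity (moment-0) maps referred to above are straightforward to compute from a spectral cube; the sketch below shows the core-velocity integration from +5 to +9 km s −1 , assuming a cube ordered as (velocity, y, x) with a regular velocity axis. The array names are placeholders, not the actual data products.

```python
import numpy as np

def moment0(cube, velocities, vmin=5.0, vmax=9.0):
    """Integrate a (nv, ny, nx) cube of T_mb (K) over [vmin, vmax] km/s.
    Returns a (ny, nx) map in K km/s."""
    sel = (velocities >= vmin) & (velocities <= vmax)
    dv = np.abs(np.median(np.diff(velocities)))  # channel width, km/s
    return cube[sel].sum(axis=0) * dv
```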
Origin of o-H 2 D + emission
H 2 D + is expected to arise from very cold gas. At very low temperatures (<20-30 K) the reaction H 3 + + HD ⇌ H 2 D + + H 2 + ΔE is not balanced by the backward process, increasing the abundance of H 2 D + . In addition, the freeze-out of CO and N 2 normally boosts H 3 + (Roberts & Millar 2000) and increases the H 2 D + production rate (e.g., Bacmann et al. 2003; Caselli et al. 2003, 2008; Albertsson et al. 2013).
Given the "peculiar" spatial distribution of H 2 D + , the logical follow-up is to investigate the spatial distribution of HCO + and N 2 H + and their deuterated isotopologs. HCO + and N 2 H + are produced through the reactions: H + 3 + CO =⇒ HCO + + H 2 and H + 3 + N 2 =⇒ N 2 H + + H 2 . In dense and very cold environments both CO and N 2 are expected to be depleted and thus so do HCO + and N 2 H + . Given the fact that N 2 H + is also destroyed by CO, the emission from the two species is usually spatially anti-correlated. Previous observations of N-bearing species (e.g. N 2 H + ) towards prestellar cores have shown depletion resistance compared to CO, and that N 2 depletes at later times compared to CO (Bergin & Tafalla 2007;Pagani et al. 2012). This contradicts the expectation of a similar behavior of the two species in terms of freezing out and desorption mechanisms, which is based on the ratio of their binding energies (∼1 under astrophysical conditions; Bisschop et al. 2006). More recent calculations and experiments on the topic are presented in Boogert et al. (2015). Figure 10 presents the integrated intensity map of HCO + overplotted with H 2 D + and N 2 H + in contours. This figure shows that N 2 H + and HCO + have a similar spatial distribution, tracing mostly the protostellar envelopes and they emit significantly in almost half of the H 2 D + slab towards the N-NW axis.
The presence of N 2 H + and HCO + in part of the H 2 D + slab could be a result of the outflow-shock activity in the region that removes ices such as CO and N 2 from grains. If we assume a single layer of gas, such activity should also make H 2 D + less abundant, but this is not what we observe. Previous studies, including Choi et al. (2004) and Koumpia et al. (2016), have shown that IRAS 4A and IRAS 4B are part of a smaller embedded cloud at 6.7 km s −1 . The H 2 D + line is shifted by ∼1.5 km s −1 relative to the rest velocity of this cloud (Fig. 11). This could be an indication that the narrow component of SiO 1-0 (∼7.6 km s −1 ) and the H 2 D + (∼8 km s −1 ) originate from the same foreground layer at the offset velocity. Such narrow SiO emission has been discovered in more regions (e.g., G035.39-00.33), where it has been suggested to originate from cold gas associated with a low-velocity shock (Duarte-Cabral et al. 2014). H 2 D + emission requires very cold conditions, though, and is unlikely to be associated with shock activity, given the presence of N 2 H + and HCO + . This emission probably originates from a colder layer in the foreground, and the co-existence of the other two species is rather a projection effect.

Fig. 10. HCO + integrated intensity map overplotted with H 2 D + in green contours (0.017, 0.026 K; rms ∼ 0.005 K) and N 2 H + in yellow contours (0.14, 0.27, 0.41 and 0.54 K; rms ∼ 0.005 K).

Fig. 11. Central velocity map of H 2 D + . The main layer of H 2 D + is characterized by velocities of ∼8 km s −1 . The emission in the S-E lobe is > 3 RMS and seems to be real.
Modeling the deuteration
Although high deuteration is usually connected with very cold environments, deuterated species have also been associated with shocks in a few studies (Lis et al. 2002; Fontani et al. 2014; Lis et al. 2016). We aim to model the distribution of the [DCO + ]/[HCO + ] ratio in the area covered by our JCMT maps, and to examine the deuteration towards the outflow of NGC 1333 IRAS 4A.
In order to explore spatial variations in the molecular D/H abundance ratios of the region, especially the [DCO + ]/[HCO + ] ratio, we use the kinetic temperature map derived by Koumpia et al. (2016) in the non-LTE radiative transfer program RADEX (van der Tak et al. 2007), after adopting a constant H 2 density of 3×10 5 cm −3 as suggested in the same work. In addition to the optically thick HCO + (Koumpia et al. 2016), we use its optically thin isotopolog H 13 CO + , which helps us derive an accurate HCO + column density. We present the [DCO + ]/[HCO + ] ratio in Figure 12. We find ratios varying between ∼0.01 and 0.07±0.016 around the protostellar envelopes and the surrounding gas covering the outflow towards the north of IRAS 4A. The error estimate reflects the average of the observational uncertainties, which reach their highest values towards the edges of our maps, where the signal from the lines becomes comparable to the RMS. In particular, we find a ∼3 times higher [DCO + ]/[HCO + ] abundance ratio towards the N-NE part of the H 2 D + slab compared to IRAS 4A and IRAS 4B, while the N-NW part of the slab does not show an enrichment in deuteration. The shock position to the north (R1), though, is characterized by deuteration values equal to and up to two times higher than those towards IRAS 4A. The observed differences are significant given our error estimates. The observed [DCO + ]/[HCO + ] abundance ratio is 3 orders of magnitude higher than the cosmic [D]/[H] ratio of 1.5×10 −5 (Linsky et al. 1995) and reflects the strong deuteration occurring at these early embedded stages of star formation.
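For reference, a single RADEX run of this kind can be driven programmatically; the sketch below writes the standard keyboard-input file for one position and one species and invokes the radex binary (van der Tak et al. 2007). The temperature, density, column density, and line-width values are illustrative, the molecular data file name follows the LAMDA convention, and the loop over map pixels is left out.

```python
import subprocess

def run_radex(molfile, tkin, nh2, ncol, dv=1.0, fmin=330.0, fmax=373.0,
              outfile="radex.out"):
    """Write a RADEX input stream for one model and run the radex binary."""
    inp = "\n".join([
        molfile,           # molecular data file, e.g. "dco+.dat" (LAMDA format)
        outfile,           # output file
        f"{fmin} {fmax}",  # frequency range (GHz)
        f"{tkin}",         # kinetic temperature (K)
        "1",               # number of collision partners
        "H2",              # collision partner
        f"{nh2:.3e}",      # H2 density (cm^-3)
        "2.73",            # background temperature (K)
        f"{ncol:.3e}",     # column density (cm^-2)
        f"{dv}",           # line width (km/s)
        "0",               # no further calculation
    ]) + "\n"
    subprocess.run(["radex"], input=inp, text=True, check=True)

# Example: DCO+ at T_kin = 25 K, n(H2) = 3e5 cm^-3, N = 1e12 cm^-2.
# run_radex("dco+.dat", 25.0, 3e5, 1e12)
```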
An enhancement of the DCO + abundance requires an enhancement of the H 2 D + abundance and the presence of CO in the gas phase, since DCO + is produced through the reaction H 2 D + + CO → DCO + + H 2 . As we described in Section 4.2.2, H 2 D + is enhanced in very cold conditions (<20-30 K). This is in agreement with the estimated kinetic temperature towards the shock position (20-30 K). Under the observed conditions, though, CO freezes out. To explain the observed DCO + abundance enhancement towards the shock position, we conclude that this emission is dominated by deuterated species formed in the gas phase after the removal of CO from grains at low gas kinetic temperature (<20 K). Such release of CO into the gas phase at such low temperatures can occur during the passage of a shock wave.
RATRAN: model setup
In order to estimate molecular abundance profiles through the envelope of NGC 1333 IRAS 4A, we ran the Monte Carlo radiative transfer code RATRAN (Hogerheijde & van der Tak 2000) and produced synthetic line emission. RATRAN takes into account the physical structure of the source including temperature and density gradients, kinematics and continuum dust emission and absorption.
The line spectra of NGC 1333 IRAS 4A are generally broad (>5 km s −1 ) and the wings are very prominent in many species. The physical models, however, are determined using continuum observations that characterize the protostellar envelope, and therefore we focus on the narrow component of the lines. For this purpose we perform a multi-Gaussian fit and use only the narrow component for the modeling. Some of the lines show heavy self-absorption, making their Gaussian fitting very inaccurate, and thus we chose not to include them in our models.
The H 2 O 2_11-2_02 line shows only a broad component (Figure 1), and we observe no isotopologs, making the identification of an envelope component unreliable. The situation for SiO is similar; thus we do not model these species. We preferentially model the isotopologs when present, and otherwise model the narrow component of the main species when multiple-Gaussian fitting is possible (e.g., H 13 CO + , C 17 O and C 18 O, H 13 CN). We use fixed local ISM isotopic ratios of 12 C/ 13 C = 60, 16 O/ 17 O = 2000, and 16 O/ 18 O = 560 (Wannier 1980; Wilson & Rood 1994; Wilson 1999). In addition we assume an ortho-to-para ratio of 3 for the two collisional partners o-H 2 and p-H 2 . Dust continuum radiation is taken into account using the OH5 dust opacity from Ossenkopf & Henning (1994), which corresponds to dust grains with thin ice mantles.
We ran RATRAN applying the previously derived density and temperature radial profiles of NGC 1333 IRAS 4A (Figure 13). The density profile is assumed to be a power law, n(r) = n 0 ×(r/r 0 ) −p , with a best-fit index of p = 1.8. The density and temperature profiles were derived from the best-fit dust model assuming that the gas is entirely molecular and using a mean molecular mass of 2.4 amu and a gas-to-dust ratio of 100. For our calculations we defined a grid of 19 spherical shells from r in = 5×10 14 cm (33 au) up to r out = 7.7×10 16 cm (5147 au), where the dust temperature is 250 K and 10 K and the H 2 density is 3.05×10 9 cm −3 and 3.5×10 5 cm −3 , respectively. The original model extends further out, as seen in Figure 13, but our adopted profiles better represent the size and outer temperature of the envelope of a protostar (∼5000 au, 10 K). We also assume thermal equilibrium between dust and gas at these high densities.
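A minimal sketch of such a model grid is shown below: logarithmically spaced shells between the quoted inner and outer radii, with the power-law density anchored to the quoted inner value. The logarithmic spacing and the normalization at r in are assumptions for illustration; the actual RATRAN grid may be constructed differently.

```python
import numpy as np

R_IN, R_OUT = 5e14, 7.7e16   # cm (33 au to 5147 au)
N0, P = 3.05e9, 1.8          # n(H2) at r_in (cm^-3), power-law index

def make_shells(n_shells=19):
    """Return shell edges (cm) and mid-point H2 densities for
    n(r) = n0 * (r/r_in)^-p."""
    edges = np.logspace(np.log10(R_IN), np.log10(R_OUT), n_shells + 1)
    r_mid = np.sqrt(edges[:-1] * edges[1:])  # geometric mid-points
    density = N0 * (r_mid / R_IN) ** (-P)
    return edges, density

edges, nh2 = make_shells()
print(f"outer-shell n(H2) ~ {nh2[-1]:.2e} cm^-3")  # a few 1e5, as quoted
```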
The papers from which the collisional rate coefficients of the main isotopologs with H 2 were adopted are presented in Table 1. The collisional data of H 2 S are scaled from the files of ortho- and para-H 2 O as calculated by Dubernet et al. (2009) and Daniel et al. (2010, 2011). For isotopologs and the deuterated species, the same collision data were used as for the main isotopolog. We assumed a static envelope without infall or expansion, and we used a range of abundances typically varying between 10 −7 and 10 −12 in order to constrain the abundance profile that best fits the observations. The turbulent line width was fixed to 1.9 km s −1 , which is the average value we found for the narrow component of most species after fitting multiple Gaussians. Modeling the observed lines with a constant abundance is the only option for species that show only a few transitions, but it might not always be a realistic approach. Several molecules have been suggested to be present in volatile ice mantles on dust grain surfaces at temperatures < 20-110 K; the exact temperature depends on the species and the surface composition (Bisschop et al. 2006; Herbst & van Dishoeck 2009). We chose to apply jump-like abundances at 100 K for H 2 CO and CH 3 OH, for which more transitions are available and for which a constant abundance does not result in a good fit.
RATRAN: model results
The resulting abundances from the process described above are presented in Table 2, which compares this low-mass case with a high-mass case from the literature (AFGL 2591; Kaźmierczak-Barthel et al. 2015). We find observed abundance profiles for the low-mass protostellar envelope (NGC 1333 IRAS 4A) that are systematically 1-2 orders of magnitude lower than for the high-mass protostellar envelope (AFGL 2591). Although one can extract this information from previous studies of high- and low-mass protostellar envelopes (e.g., for H 2 CO and CH 3 OH; van der Tak et al. 2000; Maret et al. 2004, 2005), this study is the first direct comparison between high-mass and low-mass protostellar envelopes using datasets from the same instruments and a similar methodology (radiative transfer and chemical models). We attribute the observed differences to the absence of a freeze-out zone (i.e., to temperature differences) in the high-mass protostellar envelope. Although AFGL 2591 is a much more distant object (3.3 kpc) than NGC 1333 IRAS 4A (235 pc), which could potentially affect our results, our models take the different distances into account.
The observed data for CO, H 2 CO, and CH 3 OH are not well fitted by constant abundances and are discussed below.
Drop models
We chose to work with the CO isotopologs C 17 O and C 18 O because they are optically thin and have narrow line widths, and thus trace the quiescent envelope. By not using the optically thick, broad 13 CO lines we avoid systematic errors in our solution. Figure 14 shows the observed line profiles of C 17 O 3-2 and C 18 O 6-5 with the best-fit abundance profile model overplotted, after adopting a constant CO abundance of 3×10 −5 (scaled accordingly for the isotopologs) and, for comparison, a drop model in which the CO abundance drops in the freeze-out zone and increases again. Using a constant abundance, it was impossible to reproduce the intensities of both the C 17 O and C 18 O lines. In particular, the constant CO abundance reproduces the C 18 O 6-5 transition very well, but it overproduces C 17 O 3-2 by a factor of ∼2. Additional transitions would help to better constrain the abundance profile. The fact that the higher-J C 18 O 6-5 line can be reproduced by a constant abundance profile indicates that this emission does not originate from the CO snowline, which is characterized by a drop in the abundance. Taking into account the freeze-out zone, one would expect a drop in the CO abundance, but this alone is also not able to reproduce the line intensities. Thus, we use a drop profile similar to that suggested by Yıldız et al. (2010, 2012), in which the abundance of CO drops in the freeze-out zone but rises again in the outer envelope. For the evaporation temperature of CO we chose to use the lower limit from the laboratory, which is ∼25 K (Collings et al. 2003; Muñoz Caro et al. 2010; Luna et al. 2014). We find that the best-fit model has an inner abundance of 1×10 −4 for T > 25 K, which drops to 3×10 −6 in the coldest part of the envelope and rises again to the canonical value of 1×10 −4 towards the outer envelope, where external UV radiation becomes important. This is in good agreement with the additional C 18 O transitions from Yıldız et al. (2012), who found a CO abundance of 6×10 −5 for the warmest part of the envelope (50 < T < 250 K), dropping to 3×10 −6 for the coldest part (<30 K) where depletion is prominent, and jumping up to 3×10 −4 again in the outer envelope.

Fig. 14. Observed line profiles of C 17 O 3-2 and C 18 O 6-5 overplotted with the modeled line profiles resulting from RATRAN after applying a) a ∼1.5 orders of magnitude drop in CO abundance at the snowline (blue) or b) a constant CO abundance of 3×10 −5 (red), and the best-fit abundance profile as derived from our chemical models for UV = 10×ISRF at A V = 1 mag (black line in Figure 19).
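The drop profile used here, and the jump profile of the next subsection, reduce to simple piecewise functions of the local dust temperature. The sketch below encodes both, using the CO drop values and the H 2 CO jump values quoted in the text; the 25 K and 100 K thresholds are taken from the text, while the function signatures themselves (including the `outer` flag marking the UV-exposed outer shells) are only an illustration of the parametrization.

```python
def drop_profile(t_dust, x_in=1e-4, x_drop=3e-6, x_out=1e-4, t_evap=25.0,
                 outer=False):
    """CO drop profile: inner value for T > T_evap, depleted value in the
    freeze-out zone, canonical value again in the UV-exposed outer envelope."""
    if t_dust > t_evap:
        return x_in
    return x_out if outer else x_drop

def jump_profile(t_dust, x_in=4e-8, x_out=4e-10, t_jump=100.0):
    """H2CO jump profile: enhanced abundance where ice mantles evaporate."""
    return x_in if t_dust > t_jump else x_out
```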
Jump models
The assumption of a constant abundance does not work well for all transitions of H 2 CO and CH 3 OH, indicating that a jump model is required. In jump models, an abundance enhancement of up to a few orders of magnitude is applied in the innermost regions, where the temperature is sufficient to evaporate the grain mantles (>100 K). Figure 15a shows the integrated intensities of the observed H 2 CO transitions from HIFI and JCMT towards IRAS 4A, overplotted with the convolved (∼15-33″) synthetic emission as calculated with RATRAN for a constant abundance of [H 2 CO] = 4×10 −10 and for jump models at ∼100 K. Our jump model at ∼100 K results in X IN = 4×10 −8 and X OUT = 4×10 −10 and reproduces the lower-energy lines better, but the higher-energy lines slightly worse, than the constant abundance model. Maret et al. (2004) found an H 2 CO abundance about a factor of two lower for the inner envelope (2×10 −8 ) and a factor of two lower for the outer envelope compared to our jump model. Differences of up to 2 orders of magnitude between the H 2 CO abundances of the inner and outer low-mass protostellar envelopes have been observed before (IRAS 16293-2422; Ceccarelli et al. 2000). Figure 15b shows the integrated intensities of the observed CH 3 OH transitions from HIFI and JCMT towards IRAS 4A, overplotted with the convolved (∼15-33″) synthetic emission as calculated with RATRAN for [CH 3 OH] = 1×10 −8 . Maret et al. (2005) found an upper limit of 1×10 −8 for the CH 3 OH abundance in the inner envelope and the same abundance as we do in the outer envelope.
The models we used are characterized by the same power law for the density profiles, and the temperature profiles also follow a similar pattern. The main differences between the two approaches are the additional HIFI transitions we use, the new collisional data, and the new physical model derived by including PACS observations, which probe the smallest scales among the data used. Maret et al. (2004, 2005) used the physical model derived by Jørgensen et al. (2002).
Deuteration
Using the abundances reported in Table 2, we find an HDCO/H 2 CO ratio of ∼10%, which is comparable to the ratio found by Loinard et al. (2000, 2001) towards IRAS 16293-2422 (∼10-15%) and the value reported for Orion KL (∼15%; Liu et al. 2011). We also find a very high D 2 CO over H 2 CO ratio (∼10%), although this is based on only one transition. Very high values of the relative D 2 CO abundance, between ∼5-10%, have been found before towards the low-mass protostar IRAS 16293-2422 (Ceccarelli et al. 1998; Loinard et al. 2000), approximately 2 orders of magnitude higher than in Orion KL. The average gas kinetic temperature of IRAS 4A is ∼45 K (Koumpia et al. 2016), and such high deuteration cannot be explained if D 2 CO is formed only in the gas phase. Instead, D 2 CO may have been enriched on dust grains during the cold, dense pre-collapse period and subsequently evaporated; it was thus probably formed via grain-surface reactions. Lastly, we find DCO + /HCO + = 1%, in agreement with the value reported by Loinard et al. (2000, 2001) for IRAS 16293-2422.
Pseudo time-dependent model
In this section we compare the empirical abundance profiles of the studied species obtained with RATRAN with those resulting from chemical models for the same adopted physical structure of NGC 1333 IRAS 4A. Our goal is to a) better understand the chemical processes that take place in NGC 1333 IRAS 4A and b) constrain its age.
To compare with the observationally derived abundance profiles, we used the same 1D physical model as in Section 5.1 and the same gas-grain time-dependent chemical model as in Kaźmierczak-Barthel et al. (2015). This allows us to compare chemistry modeling results between a low-mass and a high-mass protostellar envelope. The chemical model is based on the chemical kinetics ALCHEMIC code of Semenov et al. (2010) and a gas-surface network with deuterium fractionation (Albertsson et al. 2013, 2014). The original, non-deuterated chemical network stems from the osu.2007 ratefile developed by the group of Eric Herbst 3 (Garrod & Herbst 2006). The network is supplied with a set of approximately 1000 high-temperature neutral-neutral reactions from Harada et al. (2010, 2012) and updated as of June 2013 using the KIDA database 4 . All reactions of H-bearing species in this network were cloned by adding D, with the exception of molecules with the −OH functional group. Primal isotope exchange reactions for H 3 + as well as CH 3 + and C 2 H 2 + from Roberts & Millar (2000), Roberts et al. (2004), and Roueff et al. (2005) were included. In cases where the position of the deuterium atom in a reactant or in a product was ambiguous, a statistical branching approach was used (for further details please consult Albertsson et al. 2013). In Albertsson et al. (2014) this deuterium network was further extended by adding ortho- and para-forms of H 2 , H 2 + and H 3 + isotopologs and the related nuclear spin-state exchange processes. We adopted a cosmic ray ionization rate of 5×10 −17 s −1 , as derived by van der Tak & van Dishoeck (2000) and Indriolo et al. (2015). In addition, we ran chemical simulations with a higher value of 10 −16 s −1 . The photodissociation and photoionization rates in the model are adopted for a 1D slab model from van Dishoeck et al. (2006). To mimic the presence of an outflow cavity in this source, we also considered several other models with enhanced UV irradiation. For these, we use a scaled interstellar UV radiation field in Draine (1978) units and moderate dust extinctions of 10, 3, 2, and 1 mag. The pre-computed photoionization and dissociation rates for a 1D plane-parallel slab model were used (see Eq. 5; Semenov et al. 2010). The self- and mutual-shielding of CO and H 2 from external dissociating radiation was calculated as in Semenov & Wiebe (2011). Water self-shielding is neglected.
The grains are assumed to be uniform and spherical, made of amorphous olivine with a density of 3 g cm −3 and a radius of 0.1 µm. If grains were on average bigger than 0.1 µm, freeze-out would be slower, which would increase the CO gas/ice ratio, especially on short timescales (1000 years; Figure B.2). The same effect would be seen for all other species that can freeze out. This would enhance the overall gas-phase molecular abundances with respect to our standard model.
Each grain provides ≈ 1.88×10 6 surface sites (Biham et al. 2001) for freeze-out of gaseous molecules. We use the desorption energies E des from Garrod & Herbst (2006). We compute the diffusion energies of surface reactants by multiplying their binding (desorption) energies by 0.4. In the current literature, factors of 0.3-0.8 are commonly used, motivated by existing data on stable species (e.g., Cuppen et al. 2017), although a firm physical basis for such a scaling is lacking.

3 http://web.archive.org/web/20081204232936/
4 http://kida.obs.u-bordeaux1.fr
The gas-grain interactions include sticking of neutral species and electrons to dust grains with 100% probability, and desorption of ices by thermal, cosmic ray, and UV-driven processes. In our model we use a single E des for H 2 of 450 K, which properly describes H 2 diffusion over the dust surface and the H 2 binding to water or silicate surfaces. However, the H 2 -H 2 binding energy is much lower (23 K). This means that as soon as there is an H 2 monolayer on a surface, further freeze-out of H 2 would be compensated by immediate desorption of H 2 back to the gas phase. Our models cannot capture this process well, and thus we simulate it by not allowing H 2 , HD, and D 2 to stick to grains. There is still enough H 2 , HD, and D 2 forming on grains, though, and thus their surface abundances are not zero (see also Hincelin et al. 2015; Wakelam et al. 2016).
A UV photodesorption yield of 10 −3 was used (e.g., Öberg et al. 2009; Fayolle et al. 2011, 2013). This value is appropriate for CO and some other light species, but it drops to ∼10 −5 or lower for anything bigger (e.g., CH 3 OH; Martín-Doménech et al. 2016; Bertin et al. 2016). This is one of the limitations of our method. Photodissociation processes of solid species are taken from Garrod & Herbst (2006).
Surface recombination is assumed to proceed through the classical Langmuir-Hinshelwood mechanism (e.g., Hasegawa et al. 1992). We do not allow tunneling of surface species via the potential wells of the adjacent surface sites. To account for hydrogen tunneling through barriers of surface reactions, we have employed Eq. (6) from Hasegawa et al. (1992), which describes a tunneling probability through a rectangular barrier with a thickness of 1 Å.
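For reference, the tunneling probability in question, Eq. (6) of Hasegawa et al. (1992), describes transmission through a rectangular barrier and, to our understanding, takes the form

$$\kappa = \exp\!\left[-\frac{2a}{\hbar}\,\left(2\mu E_a\right)^{1/2}\right],$$

where a = 1 Å is the barrier thickness, µ the reduced mass of the reacting system, and E_a the activation energy of the surface reaction.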
For each surface recombination, we assume a 1% probability for the products to leave the grain due to the partial conversion of the reaction exothermicity into breaking the surface-adsorbate bond. Following experimental studies of the formation of molecular hydrogen on amorphous dust grains by Katz et al. (1999), the standard rate equation approach to the surface chemistry is utilized. In addition, dissociative recombination and radiative neutralization of molecular ions on charged grains, and grain re-charging, are taken into account.
With this network, and relative and absolute tolerances of 10 −5 and 10 −20 , respectively, the 1D IRAS 4A model takes about 1 minute on a 2.5 GHz Core i7 CPU (OS X 10.11, gfortran 6-x64) to compute the evolution over 10 6 years. This time span encompasses the likely age of this object.
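The role of these solver tolerances can be illustrated on a toy stiff kinetics problem; the sketch below integrates a minimal two-reservoir freeze-out/desorption system with a stiff (BDF) solver at the quoted tolerances. The rate coefficients are arbitrary placeholders and have nothing to do with the actual ALCHEMIC network.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_FREEZE, K_DESORB = 1e-11, 1e-13  # placeholder rates (s^-1)

def rhs(t, y):
    """y = [n_gas, n_ice]: simple exchange between gas and ice reservoirs."""
    gas, ice = y
    return [-K_FREEZE * gas + K_DESORB * ice,
             K_FREEZE * gas - K_DESORB * ice]

t_end = 1e6 * 3.156e7  # 10^6 yr in seconds
sol = solve_ivp(rhs, (0.0, t_end), [1e-4, 0.0], method="BDF",
                rtol=1e-5, atol=1e-20)
print(f"gas-phase fraction after 1 Myr: {sol.y[0, -1] / 1e-4:.3f}")
```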
Initial abundances
To set the initial abundances, we calculated the chemical evolution of a 0D molecular cloud with n H = 2×10 4 cm −3 , T = 10 K, and A V = 10 mag over 1 Myr (model "PSC-LM"). For this, the neutral "low metals" (LM) elemental abundances of Graedel et al. (1982) and Agúndez & Wakelam (2013) were used, with the solar C/O = 0.44, an initial ortho/para H 2 ratio of 3:1, hydrogen fully in molecular form, and deuterium locked up in HD (see Table 5).
Error estimations
The problem of uncertainties in the calculated abundances is well known in chemical studies of various astrophysical environments, ranging from dark clouds to hot cores (see, e.g., Dobrijevic et al. 2003; Vasyunin et al. 2004; Wakelam et al. 2005; Vasyunin et al. 2008). The error budget of the theoretical abundances is determined both by the uncertainties in the physical conditions in the object and, to a larger degree, by uncertainties in the adopted reaction rate coefficients and their barriers. Poorly known initial conditions for the chemistry may also play a role here.
In order to estimate the chemical uncertainties rigorously, one needs to perform Monte Carlo modeling, varying the reaction rates within their error bars and re-calculating the chemical evolution of a given astrophysical environment. We do not attempt such a detailed study here and instead use the estimates from previous works.
Previous studies of chemical uncertainties found that the uncertainties are in general larger for bigger molecules, as their evolution involves more reactions than that of simpler molecules. For simple, key species such as CO and H 2 , which are involved in a limited cycle of reactions, it is easier to derive the reaction rates with a high accuracy of ∼25%. In addition, these species are formed in the gas phase, which is better understood than the surface chemistry. Consequently, their abundances are usually accurate to within 10-30% in modern astrochemical models. On the other hand, for other diatomic and triatomic species such as CN, HCO + , HCN, and CCH, uncertainties are usually about a factor of 3-4 (see Vasyunin et al. 2004, 2008; Wakelam et al. 2010). The chemical uncertainties are higher for more complex molecules like methanol, because the gas-phase reactions are less well known and the surface chemistry less well understood. These uncertainties can reach an order of magnitude or more, with a factor of 10 being a likely lower limit.

Moreover, for S-bearing species, for which many reaction rates have not been properly measured, calculated, or included in the networks, these intrinsic uncertainties, and hence the uncertainties in the resulting abundances, are higher: ≳ a factor of 10 even for simple species such as SO, OCS, and SO 2 (Loison et al. 2012). Also, the incompleteness of astrochemical networks with regard to the chemistry of Cl- and F-bearing molecules makes their calculated abundances rather unreliable.

In our study we assume that the uncertainties in the abundances of ortho- and para-H 2 and CO are within ∼30%. For HCO + , H 2 CO, CN, N 2 H + , C 2 H, NO, OH, C, C + , O, CH, NH 3 , H 2 O, HCN, and HNC the uncertainties are within a factor of three, and for S-bearing species, CH 3 OH, and HCl they are within a factor of ten.
The result of this process is a set of abundance profiles for the species of interest: the abundance of each species as a function of radius, on the grid adopted from the physical model, for a range of timescales. In our study we do not use the abundance profiles of SO 2 , SO, and OCS from the chemical model, because the chemical network for S-bearing species is too inaccurate for these species and thus lacks predictive power.
Standard approach
Figures 16-18 present the results of the standard chemical modeling compared to the observed abundances of NGC 1333 IRAS 4A from Sec. 5.1 and of the high-mass protostellar envelope AFGL 2591 from Kaźmierczak-Barthel et al. (2015). The observed abundances of most species appear to be in agreement with the modeled abundances in the outer envelope, while they are systematically 1 to 2 orders of magnitude lower than those of the high-mass protostellar envelope. In contrast, the predicted CN, HCN, and HNC abundances are 1 to 2 orders of magnitude higher than the observed values for the outer envelope. Our chemical models do not take into account shielding of CN by H 2 , or FUV scattering, which can be important. In addition, our models use the CN photodissociation rates taken from van Dishoeck et al. (2006). A more recent study by el-Qadi & Stancil (2013) presents CN cross-sections with values several times smaller than those of van Dishoeck et al. (2006). Among all the parameters, the strongest effect on the modeled abundances comes from the assumed FUV intensity and the C/O ratio. C/O = 1.1 gives X(CN) = 3×10 −7 with X(HCN) ∼ X(HNC) ∼ 8×10 −10 . If one assumes a strong FUV field of 10 4 times the ISRF with a modest extinction of A V = 1 mag, the fit is much better: X(CN) ∼ 1-4×10 −10 , X(HCN) ≲ 10 −11 , X(HNC) ≳ 2×10 −11 . Without any additional FUV one gets X(CN) = 10 −14 , X(HCN) ∼ 2×10 −9 , X(HNC) ∼ 7×10 −11 . The standard model alone cannot explain the observed abundances for more than one of the modeled species, making the development of a more advanced model necessary.
The necessity of an outflow cavity
We observe a drop of only ∼2 orders of magnitude towards the CO snowline, compared to the ∼6 orders of magnitude predicted by our chemical models. A plausible explanation for such a discrepancy between our chemical model and the observations is the presence of the outflow, which is not accounted for in the 1D model (Bruderer et al. 2009). A way to approximately simulate the outflow, the UV-irradiated outflow walls, and the envelope in the 1D approximation is to add more UV radiation to the chemical model. For this, additional FUV components with intensities of 1 and 10 Draine units and moderate dust extinctions of 10 and 3 mag were considered. We find that only the lower extinction influences the resulting CO abundance, increasing it by approximately 1 order of magnitude. The standard model without extra UV radiation and an extinction of 10 mag produce the same abundance profiles.
Figures 19-21 present the results of the models with an additional 10 Draine UV field and dust extinctions of 1, 2 and 3 mag. The extreme case of a 100 Draine UV field and A V = 1 mag was also considered, resulting in an almost constant CO abundance profile. Our observed CO abundance profile appears to be reproduced reasonably well by UV = 10×ISRF and A V = 1 mag (green line; Figure 14). Although such a model improves the overall fit for CO, HCO + and DCO + , it does not really improve the fit for other species (e.g., CS, CN, C 2 H), and it actually makes the overall fit worse for H 2 CO, HDCO, D 2 CO and CH 3 OH, by decreasing the abundance in the outer envelope by up to 4 orders of magnitude. Thus, we do not consider it to be our best-fit (or standard) model.
The modeled HCO + abundances generally follow those of its parent molecule, CO, and show a strong decline for radii between ∼334 au and 5350 au, where the CO freeze-out zone is located. HCO + abundances also drop strongly in the inner, dense and dark envelope region at r ≲ 134 au, where the ionization degree drops due to fast recombination processes. Not surprisingly, the modeled N 2 H + abundances also drop in the very inner envelope, like those of HCO + . In contrast to HCO + , N 2 H + thrives in the CO freeze-out zone, where a key destruction reaction of N 2 H + ions by CO molecules is no longer effective. In Figures 17 and 20 the DCO + abundances are compared; the modeled DCO + profile follows that of HCO + and agrees with the observations in two areas: 1) the inner envelope at ∼67-267 au, and 2) the outer envelope at r ≳ 6685 au. In contrast to HCO + , DCO + is slightly overproduced in the no-UV chemical model in the inner part of the envelope, but is well fitted by the model with additional UV due to the outflow cavity.
The poor fit and dependence on UV irradiation of CN, HCN, and HNC have already been discussed above. Their formation mainly proceeds via neutral-neutral gas-phase reactions involving light hydrocarbons like C 2 H and other N-bearing species (e.g., NO). Thus the no-UV model that fails to fit the CN, HCN, and HNC data is also not able to fit the C 2 H observed abundances.
The modeled CS abundance profile also shows a poor fit to the data. As mentioned above, this is due to a general lack of predictive power of current astrochemical models for S-bearing species. Still, the modeled H 2 S values are in good agreement with the observed data.
The observationally derived H 2 CO abundances are only well reproduced in the outer envelope at r ≳ 2005 au; the modeled values are lower than the observed ones by up to 3-5 orders of magnitude in the inner part. This is also likely caused by the same approximation of the outflow and UV-irradiated outflow cavity walls in our 1D chemical model as for CO. Alternatively, our observations may lack the necessary resolving power and sensitivity (which interferometers can provide) to uncover and unbiasedly constrain the underlying physical structure of the inner NGC 1333 IRAS 4A envelope, which manifests itself in the lack of agreement between the data and the chemical predictions. A similar behavior is shown by the H 2 CO isotopologs and the chemically related methanol molecule.
In Figures 19-21 we show the effect of including an additional UV component in the chemical model, as our attempt to represent the UV-irradiated outflow cavity material. As discussed above, additional UV radiation lowers the degree of CO depletion and brings us much closer to agreement between the observed and modeled CO abundances. The same effect is seen for our N-bearing species (CN, HCN, HNC). Unfortunately, for all other observed species (HCO + , N 2 H + , CS, H 2 CO isotopologs, CH 3 OH, C 2 H, S-bearing species) the modeled abundance profiles have poorer agreement with observations than in the standard model. The enhanced UV irradiation leads to overly rapid destruction of less tightly bound molecules than CO, and limits the efficiency of surface chemistry by desorbing ices too efficiently. The potential solution to such a chemical discrepancy is to perform chemical modeling using a more realistic 2D or 3D physical structure of the NGC 1333 IRAS 4A envelope, including the outflow and outflow cavity wall and performing UV radiative transfer.
Time dependence and different input parameters
Our models are time dependent, so we also investigate the influence of different timescales on the predicted chemical abundance profiles. Figure B.1 in the appendix presents the time-dependent abundance profiles of several species, demonstrating the insignificant influence of time over the short timescales that characterize a Class 0 object (10 4 -10 5 yrs; e.g., Enoch et al. 2009). Our modeled methanol abundance, though, is in better agreement with the observed abundance for ages ≥ 4×10 4 yrs (Figures 17, B.1), while other species do not show significant abundance variations on these timescales, and thus we cannot use them as additional constraints on the age. We provide a lower limit to the age of IRAS 4A that is at least four times the value given by Maret et al. (2002) and potentially in agreement with the value of 9×10 4 yrs given by Gonçalves et al. (2008) based on the morphology of the observed magnetic field. We should point out that the derived best-fit age of our object depends on the framework of the model that we use. If in reality the conditions are different (e.g., the presence of a disk), or if our chemical network misses some key reactions, the best-fit age can vary significantly. A rigorous way to do such modeling would require running numerous models with varying temperature, density, cosmic ray ionization rate, and reaction rates, which would give a best-fit chemical age plus its error bars. A previous study towards young high-mass star-forming regions took these factors into account and found chemical ages characterized by uncertainties of a factor of 2-3 (Gerner et al. 2014).
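The age constraint from methanol amounts to comparing the modeled abundance at each output time with the observed value; a minimal version of that comparison is sketched below. The time grid, the observed abundance, and the adopted factor-of-10 uncertainty are placeholders standing in for Figure B.1 and the uncertainty estimates above.

```python
import numpy as np

# Placeholder model output: CH3OH outer-envelope abundance vs. time (yr).
times = np.array([1e3, 1e4, 4e4, 1e5, 1e6])
x_ch3oh_model = np.array([3e-11, 8e-11, 8e-9, 1.1e-8, 1.3e-8])

X_OBS, FACTOR = 1e-8, 10.0  # observed abundance; factor-of-10 uncertainty

def consistent_ages(t, x_model, x_obs, factor):
    """Return the model times at which the prediction agrees with the
    observation to within the adopted uncertainty factor."""
    ok = np.abs(np.log10(x_model / x_obs)) <= np.log10(factor)
    return t[ok]

print(consistent_ages(times, x_ch3oh_model, X_OBS, FACTOR))
# -> only ages >= 4e4 yr survive, i.e., a lower limit on the chemical age
```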
To test the dependence of our results on the adopted physical conditions, we ran models with a twice higher cosmic ray ionization rate, grain growth up to 0.5 µm, and different initial abundances, for timescales between 10 3 and 10 6 yrs. To set the different initial abundances for the chemical modeling, we calculated the chemical evolution of a 0D model of an infrared dark cloud with n H = 2×10 5 cm −3 , T = 15 K, ζ CR = 5×10 −17 s −1 , an H 2 ortho-to-para ratio of 3:1, and A V = 10 mag over 1 Myr. The neutral "low metals" elemental abundances of Graedel et al. (1982) and Agúndez & Wakelam (2013) were used. The resulting abundance profiles can be seen in Figure B.2.
We find that the CO abundance profiles are not strongly affected by increasing the cosmic ray ionization rate. In contrast, the different initial abundances can cause a decrease of 1-2 orders of magnitude in the abundance profiles in the inner or outer envelope, but they do not significantly affect the CO abundance at the snowline. Only in combination with the short timescale of 10 3 yrs do we see a 0.5-1 order of magnitude higher abundance at the snowline compared to the other timescales; this timescale is too short for a Class 0 object, though. Lastly, the model with bigger grain sizes of 0.5 µm shows an increase in CO abundances by 1-2 orders of magnitude at the snowline. This is because the total dust surface area per unit gas volume is smaller in this model than in the standard case of 0.1 µm grains, lowering the pace, and hence the degree, of the CO freeze-out. The effect is particularly dramatic at shorter chemical ages of ∼10 3 years, where the difference in CO abundances between the 0.1 µm and 0.5 µm models is ∼3-4 orders of magnitude. Although IRAS 4A is still a very young object (∼10000 yrs), its lifespan cannot be only 1000 years. Even if we were to assume such a short lifespan, it could not explain grain growth up to 0.5 µm, but rather an ISM-like size up to 0.2-0.3 µm (Bianchi et al. 2003). Some studies of mm observations have shown that larger, mm-sized grains may exist in the envelopes of Class 0 protostars (Kwon et al. 2009; Chiang et al. 2012). However, Class 0 objects are highly embedded, making it difficult to eliminate the optically thick emission, which can cause an overestimation of the grain size. In addition, our modeled abundances cannot explain the observed ones for timescales of 10000 yrs by applying grain growth alone.
In conclusion, the time-dependent models show no significant differences in the abundance profiles over the timescales that are relevant to our object. Therefore, adopting an age of 10000 yrs does not introduce significant systematic errors. The same applies to the variation of the cosmic ray ionization rate. In contrast, we find that grain growth and different initial abundances have a more significant influence on the resulting abundance profiles, especially at short timescales (1000 yrs). Such a short chemical age in combination with significant grain growth is rather unlikely, though, for a Class 0 object such as IRAS 4A.
Conclusions
We used HIFI and JCMT data to constrain the chemical structure of a low-mass protostellar envelope and compare it with a high-mass equivalent.
Results
- Constant or jump/drop-like empirical abundance profiles reproduce our single-dish submillimeter observations well. The abundance in the outer envelope is supported for most species by the predictions of our 1D time-dependent gas-grain chemical model.
- The presence of an outflow cavity with a strong UV field can explain the observed abundances of several species (e.g., CO, HCO + , DCO + ) in the outer low-mass protostellar envelope, while passive heating is sufficient in a high-mass protostellar envelope.
- The empirical abundance profiles (with respect to H 2 ) for the low-mass protostellar envelope (NGC 1333 IRAS 4A) are systematically 1 to 2 orders of magnitude lower than for the high-mass protostellar envelope (AFGL 2591). The overall warmer temperature profile of high-mass protostellar envelopes seems to drive this result. We find similar empirical abundances when we estimate them with respect to CO.
- Population diagrams for H 2 CO indicate 20% lower T ex and 30% lower H 2 CO column density for the envelope compared to the outflow.
- The high D 2 CO over H 2 CO ratio (10%) towards IRAS 4A points towards formation via grain-surface reactions during the cold phase rather than gas-phase chemistry. This is in agreement with what has been observed before towards IRAS 16293-2422.
- We find an enhancement of the DCO + over HCO + ratio towards the shock position compared to the protostellar envelope. We attribute this result to CO originally formed on the grains and later released into the gas phase (at T < 20 K) during the passage of the shock wave.
- H 2 D + shows a different spatial distribution compared to the other deuterated species and a peak velocity at ∼8 km s −1 . The most plausible explanation is that it is located in a different layer of gas than the clump that contains the protostars.
- The abundance profile of CH 3 OH provides a lower limit of 4×10 4 yrs for the age of NGC 1333 IRAS 4A.
Discussion
The modeled chemical abundance profiles of the inner envelope are a few orders of magnitude lower than the observed ones for all species. We find a decrease of about 2 orders of magnitude in the abundance of the species with more observed transitions, such as CO, H 2 CO and CH 3 OH, in the CO depletion zone (outer envelope). Similar drops have been seen before in interferometric studies of other low-mass protostars (e.g., IRAS 16293-2422, L1448-C; Schöier et al. 2004; Jørgensen et al. 2005). Jørgensen et al. (2007) suggest that the emission of H 2 CO and CH 3 OH is related to the shocks caused by the protostellar outflows rather than being the result of compact low-mass "hot corinos". More transitions of the other observed species will help us in the future to better constrain their observed profiles. Furthermore, we attempted to approximate an outflow cavity by increasing the UV radiation that the observed species are exposed to. We found that this approach improved the agreement between the theoretical and observed abundance profiles for several species (e.g., CO, HCO + ); a more detailed 2D/3D chemical model that takes into account the disk structure and outflow cavities is therefore expected to be more accurate. Such a model requires more transitions and interferometric (high-sensitivity and high-spatial-resolution) observations that trace the inner region of the protostellar envelope.
Lastly, we attribute the observed abundance differences with respect to H 2 between the low- and the high-mass protostellar envelope to the higher temperatures that characterize the high-mass case and to the absence of a freeze-out zone. For a safer comparison, further studies of the same nature between high-mass and low-mass protostellar envelopes are necessary. In particular, the similarity in the observed abundances with respect to CO suggests that gas-phase CO/H 2 measurements are essential, as the abundances of all other species appear to track that of CO to within factors of a few.

Fig. 16. Observed and modeled abundance profiles of CO, HCO + , N 2 H + , CS, HCN and HNC at the minimum representative timescale of 4×10 4 yrs as predicted from the time-dependent CH 3 OH models. The red dashed lines show the abundance profile of the outer envelope of the high-mass case, AFGL 2591 (Kaźmierczak-Barthel et al. 2015), for comparison with NGC 1333 IRAS 4A (blue). The black solid lines represent the abundance profiles from the 1D chemical model. The green solid lines represent the abundance profiles from the 1D chemical model that aims to take into account outflow cavities by applying an extra UV radiation of 10×ISRF at A V = 3 mag. The angular resolution of the observations varies between ∼15″ and ∼35″, which corresponds to 2.5-6.3×10 16 cm (1670-4210 au) in the models.

Fig. 19. Observed and modeled abundance profiles of CO, HCO + , N 2 H + , CS, HCN and HNC at the minimum representative timescale of 4×10 4 yrs as predicted from the time-dependent CH 3 OH models. The red dashed lines show the abundance profile of the outer envelope of the high-mass case, AFGL 2591 (Kaźmierczak-Barthel et al. 2015), for comparison with NGC 1333 IRAS 4A (blue). The solid lines represent the abundance profiles from the 1D chemical model applying an extra UV radiation of 10×ISRF at A V = 1 mag, 2 mag and 3 mag, and the extreme case of 100×ISRF and A V = 1 mag. The angular resolution of the observations varies between ∼15″ and ∼35″, which corresponds to 2.5-6.3×10 16 cm (1670-4210 au) in the models.

[Figure caption] As Fig. 19, but for H 2 CO, CH 3 OH, C 2 H, CN, HDCO, CO and DCO + . The deuterated species HDCO and DCO + were not observed towards AFGL 2591.

[Figure caption] Time-dependent 1D chemical models for timescales of 10 4 -10 5 yrs, the predicted lifespan of Class 0 objects such as NGC 1333 IRAS 4A. The observed variations are not significant, and thus we adopt the 10 4 yr models for comparison with our empirical models.
"year": 2017,
"sha1": "a5ceb949bb8e44348857b864faca2e07725d5320",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2017/07/aa30160-16.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a5ceb949bb8e44348857b864faca2e07725d5320",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Current SIDS research: time to resolve conflicting research hypotheses and collaborate
Abstract From the earliest publications on cot death or sudden infant death syndrome (SIDS) through to this day, clinical pathology and epidemiology have strongly featured infection as a constant association. Despite mounting evidence of the role of viruses and common toxigenic bacteria in the pathogenesis of SIDS, a growing school of thought featuring a paradigm based on the triple risk hypothesis that encompasses vulnerability through deranged homoeostatic control of arousal and/or cardiorespiratory function has become the mainstream view and now dominates SIDS research. The mainstream hypothesis rarely acknowledges the role of infection despite its notional potential role as a cofactor in the triple hit idea. Decades of mainstream research focussed on central nervous system homoeostatic mechanisms of arousal, cardiorespiratory control and abnormal neurotransmission have not been able to provide consistent answers to the SIDS enigma. This paper examines the disparity between these two schools of thought and calls for a collaborative approach. Impact The popular research hypothesis explaining sudden infant death syndrome features the triple risk hypothesis with central nervous system homoeostatic mechanisms controlling arousal and cardiorespiratory function. Intense investigation has not yielded convincing results. There is a necessity to consider other plausible hypotheses (e.g., the common bacterial toxin hypothesis). The review scrutinises the triple risk hypothesis and CNS control of cardiorespiratory function and arousal and reveals its flaws. Infection-based hypotheses with their strong SIDS risk factor associations are reviewed in a new context.
INTRODUCTION
There are two leading research hypotheses used to explain sudden infant death syndrome (SIDS). The mainstream popular research hypothesis features the triple risk hypothesis 1 with central nervous system (CNS) homoeostatic mechanisms controlling arousal and cardiorespiratory function and invokes prone sleep position as playing a causal role. 2 The other is the common bacterial toxin hypothesis, 3-5 which utilises experimental and epidemiological evidence indicating that viral infection combined with bacterial toxaemia and prone positioning may produce a fatal outcome through superantigenic shock. The review scrutinises these hypotheses and suggests a different way forward.
THE COMMON BACTERIAL INFECTION HYPOTHESIS
From the earliest epidemiological studies on cot death, or SIDS as it was later defined, 6-8 there were clear indications that infection, especially respiratory viral infection, was associated with these deaths. 9-13 The common bacterial toxin hypothesis was developed on the basis that a viral infection (along with prone positioning) induced upper respiratory tract changes conducive to toxin production by toxigenic bacteria (including Staphylococcus aureus, Streptococcus pyogenes and Escherichia coli), all of which were commonly found to colonise the nasopharynx. 14-16 In >50% of cases, staphylococcal toxins were demonstrated in SIDS babies' tissues. 17-20 These were identified in tissues of 33/62 (53%) SIDS infants from three different countries: Scotland (10/19, 56%); France (7/13, 55%); Australia (16/30, 53%). In the Australian series, toxins were identified in only 3/19 (16%) non-SIDS deaths (χ2 = 5.42, P < 0.02). 17 Harrison et al. 18 demonstrated that sleeping prone caused pooling of secretions and increased numbers of toxigenic bacteria in the nasopharynx, and Molony et al. 19 showed that prone sleeping increased the local temperature into ranges known to induce bacterial toxin production. The hypothesis suggested that viral infection acted as a trigger for events leading to superantigenic toxic shock through T-cell activation by staphylococcal enterotoxins or toxic shock syndrome toxin-1. Staphylococcal enterotoxin-like proteins also act as superantigens 20 and could also be involved in SIDS. A mouse model developed by Nobel Laureate Peter Doherty and colleagues showed that mice infected with the respiratory zoonotic pathogen lymphocytic choriomeningitis virus (LCMV) were unharmed, but an intraperitoneal injection of staphylococcal enterotoxin B was rapidly lethal in virally infected mice. Staphylococcal toxin injection alone was non-lethal. 21 The respiratory tract in SIDS frequently shows evidence of inflammatory involvement of the airways and lungs. 11,22,23 The inflammatory process may involve platelet aggregation and obstruction of the lung capillaries by blood platelet aggregates and leucocytes. 24 This could provide clues to the pathogenesis of intrathoracic petechial haemorrhages observed in 80-90% of SIDS cases. Intrathoracic petechial haemorrhages have been explained by mainstream researchers as resulting from agonal changes in intrathoracic pressure. 25 Animal experimentation has failed to affirm this idea. 26 My interest in SIDS research was aroused through my colleague, the late Dr Karl A. Bettelheim, who had demonstrated in a paper given at a meeting in Auckland in the early 1980s that sera obtained from cases of SIDS were lethal to infant mice upon intraperitoneal injection. Whether the mice were also congenitally infected with an enzootic virus was not at the time a consideration. Karl had published widely on E. coli and human infant disease. Knowledge of the various toxins of E. coli and the common finding of the bacterium in the respiratory tract of SIDS babies led us to investigate the possible role of E. coli in SIDS. Interesting but inconclusive correlations were found. 27,28 As mentioned, S. aureus is also commonly found in the upper and lower respiratory tract of SIDS cases. 18,29 Significantly greater proportions of SIDS compared with control/comparison babies were positive for S. aureus (68.4% vs. 40.5%) and for staphylococcal enterotoxin genes (43.8% vs. 21.5%), suggesting a possible role in SIDS. 30
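As a check on the arithmetic, the reported χ2 for the Australian case-control comparison above can be reproduced from the published counts (16/30 toxin-positive SIDS cases vs. 3/19 toxin-positive non-SIDS deaths). This is a minimal sketch, not from the paper, and it assumes Yates' continuity correction was applied, as is conventional for 2×2 tables:

```python
from scipy.stats import chi2_contingency

# Rows: SIDS cases, non-SIDS deaths; columns: toxin-positive, toxin-negative
table = [[16, 30 - 16],
         [3, 19 - 3]]

# correction=True applies Yates' continuity correction (the default for 2x2 tables)
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # chi2 = 5.42, P = 0.020 (< 0.02, as reported)
```

Without the continuity correction the statistic would be larger (about 6.9), so the reported value of 5.42 is consistent with the corrected test.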
Further analysis enabled us to demonstrate a significant relationship between colonisation with S. aureus and the risk factor of prone sleep position in SIDS. 31 The work showed numerous combinations of the nine enterotoxins in the cases of SIDS. However, the DNA extracts used in the Highet et al. study 31 were re-examined using an Illumina MiSeq platform by Leong et al. 32 In this study, the frequency of detection of S. aureus did not differ significantly from the comparison babies. 32 We attribute the disparity between the studies to methodological differences.
Derived from the staphylococcal enterotoxin study, 30 we proposed that contamination of the baby's sleeping surface with S. aureus might explain the relationship with prone sleeping, given that potentially contaminated sleeping surfaces such as the parental bed, 33 sofa, 33,34 and used cot mattresses 35 were established risk factors for SIDS.
The idea that prone positioning in relation to SIDS could affect the vagus nerve 36 and its multitudinous functions, including its influence on the gut microbiota, gut hormones and the cholinergic anti-inflammatory pathway, was based on the vagus nerve inflammatory reflex, which is known to prevent cytokine-induced tissue damage and death. Vagal stimulation in animal models prevents cytokine release and damage during sepsis, shock, endotoxemia, etc. Prone positioning may affect vagal neurophysiology adversely. This subject remains unexplored in the context of SIDS.
REAPPRAISAL OF THE POPULAR MAINSTREAM SIDS RESEARCH HYPOTHESIS
The triple risk hypothesis 1 formed the basis for hypotheses centred on the CNS/brainstem control of arousal, respiration, and cardiac function as well as a focus on the prone sleep position and the sleeping environment. 2 The paradigm explains prone sleep position as playing a causal role; 37 this seems disingenuous given that babies die in supine and side positions which should necessarily dictate different mechanisms of demise. Rather, it would be logical to consider a prone sleep position increasing the risk of SIDS through an unknown mechanism. Airway obstruction in prone sleepers would make it implausible to attribute non-prone SIDS deaths to a similar mechanism. An explanation may reside in an increased risk in prone over other positions. As alluded to previously, such increased risk could relate to prone sleep position increasing the likelihood of colonisation by toxigenic bacteria from the sleeping surface and the increased likelihood of induction of bacterial lethal toxins. This is discussed further below.
In a different context, the attribution of sleep position with causality has led to an argument for a causal relationship between supine sleep position and autism spectrum disorder; based on the increase in autism rates following the introduction of the Back-to-Sleep (BTS)/Reducing-the-Risk (RTR) campaign in five different countries. 38 Association does not equal causation.
NEUROPATHOLOGY AND SIDS
In 1990, Oehmichen 39 described the state of SIDS neuropathological research as 'Due to differences in the findings as well as methodologic and interpretative problems, no definitive pathogenetic concept based on the available neuropathologic findings can be formulated at present, even though many observations tend to indicate that the brainstem, as the central organ controlling respiration, is probably of prime importance in SIDS. Even the classification of the described phenomena as primary and secondary changes can be and is disputed. No diagnostic criteria for classification of SIDS and control cases could be established, since all obtained criteria are nonspecific, and the described criteria are not present in all SIDS cases'. Three decades on, the same message applies, with the possible role of the CNS in SIDS remaining confused. Findings involving neurotransmitters (e.g., 5HT, its receptors and gene polymorphisms) 40 have not led to conclusive results. While hypoxic-ischaemic neuronal injury (and neuronal apoptosis) is generally thought to be common in SIDS cases, 41-43 none of the authors have considered a role for sepsis in these processes. Sepsis is an established leading cause of hypoxia/ischaemia and neuronal apoptosis. 44,45 The researchers consider that the described neuropathology is a primary phenomenon and have rarely considered that these changes could be the result of a secondary effect, say, from cytokine responses to viral infection or effects of bacterial toxaemia/superantigenic shock. Many of the CNS findings seen in SIDS cases are also observed in control babies. 46 Rare attempts to correlate CNS findings with epidemiological risk factors have not resulted in substantial success. Examples of such correlation include male sex and age for a restrictive pattern of neuropathological findings. 47 On the other hand, Duncan et al. 43 found no male gender relationship with various neuropathological/neurotransmitter findings in SIDS brains. Suffice to say, the role of infection in SIDS has been largely ignored by mainstream researchers.
EXPLAINING THE PRONE POSITION RISK FACTOR
Blackwell et al. 48 and Goldwater 49,50 listed the genetic, developmental and environmental SIDS risk factors, all indicating susceptibility to infection. This list, with some modifications, is shown in Table 1. This information might help convince researchers of the importance of infection in SIDS.
A convincing explanation of the risk factor of prone sleep position has not been achieved by the mainstream. There is, however, a compelling explanation provided in two well-designed and independent, geographically disparate epidemiological studies (Tasmanian 51 and Scandinavian 52 ) that link infection (with prone sleep position) to SIDS. In the Tasmanian study, infection and prone sleep position featured strongly: the study revealed a 10-fold increased risk of SIDS if prone-sleeping babies were ill with features of an infection, but it was associated with only a slight increase in risk among infants considered well. The Scandinavian study revealed a 29-fold increase in risk if prone-sleeping babies had an infection. Both studies showed that exposure to cigarette smoke increased the risk of SIDS. Smoke and infection combine with lethal consequences: in general, bacterial and viral infections can be synergistic 53,54 and both are exacerbated by exposure to smoke. 55 There are laboratory findings on SIDS which point to the underlying infection. These are set out in Table 2.
PRONE SLEEP POSITION AND THE BACK-TO-SLEEP/REDUCING-THE-RISKS CAMPAIGNS
The BTS and RTR campaigns have drawn some of their success from an anomaly of how SIDS deaths were recorded in the 1970s, 1980s and 1990s. There is compelling evidence of diagnostic shifting during those decades resulting in a possible exaggerated rise in SIDS numbers in the 1980s and a complimentary fall in the 1990s. [56][57][58][59][60][61][62] The introduction of new infant vaccines in 1990 could possibly have contributed. The apparent relationship between the BTS/RTR campaigns and the reduction in SIDS deaths has not been subjected to rigorous scientific scrutiny. Assumptions have been accepted without question. This is not to say that putting babies on their backs to sleep has not had beneficial effects. However, the effect of supine sleeping in the USA and several Table 2. Laboratory findings in SIDS cases (for references see refs. [48][49][50]. other countries has plateaued and SIDS numbers remain unacceptably high. 63 Moreover, SIDS deaths significantly increased between 2019 and 2020. 64 It is yet to be determined whether SARS-Cov-2 virus played a role. SIDS is largely a disease of poverty, poor hygiene, overcrowding, prematurity, exposure to smoke in pregnancy and postnatally. These are features common to many transmissible infectious diseases. Sleeping prone on second-hand mattresses, 33 the parental bed, 31 or sofa 32 (contaminated surfaces) increases the risk of SIDS, as do male sex 65 and high birth order with older siblings bringing viral infection home. 65 SIDS is more frequent in rural areas 66 and tends to occur more frequently in winter. 67,68 These facts should alert us to the possibility of an epizootic agent playing a role, in addition to seasonal respiratory viruses. LCMV would fit well here. 69 As mentioned, a convincing SIDS animal model has been demonstrated with this virus. 19 CONCLUSION All research should be founded on logical and scientifically plausible constructs. Without these, a successful conclusion would be impossible. The apparent lack of progress in determining a cause or causes of SIDS (despite the help of twenty-first-century science and technology) should call for a reappraisal of the fundamental mainstream hypotheses.
SIDS research is encumbered with unusual limitations; 70 these include ethical issues regarding consent for obtaining and retaining tissue, and the problem of difficulty in obtaining suitable control material for meaningful research. Notwithstanding these, infection, a key pointer in the SIDS story, has been largely ignored by mainstream research or given minimal attention. Few, if any, of the key infection-related papers on SIDS mentioned above are ever cited in mainstream papers. Is this citation amnesia 71 or the 'disregard syndrome?' 72 Both are well described in many areas of scientific research and are counterproductive and unethical. The basis of this failure to acknowledge established evidence of the role of infection in SIDS is difficult to understand, but its origins are likely to involve the politics of research grant funding and restrictive thinking. Continuation of such a narrowed approach will delay the explanation of the tragic enigma of SIDS. It is surely time to reconsider and collaborate. The items listed in Tables 1 and 2 provide fertile ground upon which to develop productive research outcomes. The overwhelming number of infectionrelated factors, including risk factors (age, sex, immunity, smoke exposure, seasonality, rural preponderance, etc.), would surely invite serious investigation. Using contemporary application of Koch's postulates 73 interpretation of key infection-related findings such as staphylococcal toxins in SIDS tissues [9][10][11][12][13][14][15] (especially when these are found in cases from three different geographical regions 15 ) would, on the evidence, be regarded by infectious diseases experts as 'the main cause of death' in babies meeting the SIDS definition. Paradoxically, if a multidisciplinary death review panel agreed that a staphylococcal toxin was the cause of death, then, based on the Bajanowski et al. recommendations, 20 the case would then be classified as an explained infant death. It is reasonable to ask why the staphylococcal toxin findings 9-15 in more than 50% of cases have been ignored for so long and that routine testing for these toxins had not been widely applied by those responsible for investigating sudden unexpected infant deaths? Given the findings of this review, a way forward could benefit from a broader collaborative approach to this singularly challenging task. | 2023-05-14T15:17:15.064Z | 2023-05-12T00:00:00.000 | {
"year": 2023,
"sha1": "ca358a6981262aa2cc85b8c112e057ac2235a77e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8fdf4f06198601ba1835555ec7993c4d905b79a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Confounding pain and function: the WOMAC's failure to accurately predict lower extremity function
Background Investigations have revealed the Western Ontario and McMaster Universities Osteoarthritis Index’s (WOMAC) inability to provide distinct assessments of pain and function. The Lower Extremity Functional Scale (LEFS) has not displayed this deficiency. Our purposes were to investigate further the WOMAC physical function's (WOMAC-PF) ability to accurately assess lower extremity mobility in patients undergoing total knee arthroplasty (TKA) and to establish a relationship between pre- and post-TKA WOMAC-PF and LEFS scores that accounts for the apparent bias WOMAC pain scores impose on WOMAC-PF scores. Methods WOMAC, LEFS, and Timed-up-and-go measures were administered before TKA and 4 days, 6 weeks, and 3 months after TKA. To evaluate the WOMAC-PF and LEFS ability to provide a distinct assessment of pain and function, a paired t-test compared pre-TKA and 4 days after TKA values. Generalized estimating equation (GEE) analysis assessed the relationship between pre- and post-TKA values: dependent variable WOMAC-PF scores; independent variables LEFS scores, and measurement occasions. Results Timed-up-and-go and LEFS demonstrated a reduction in lower extremity function (P < .001); pain decreased (P < .001); and there was no significant change in WOMAC-PF scores (P = .61). GEE analysis revealed a linear relationship between WOMAC-PF and LEFS with similar slope coefficients for all four occasions. The relationship between WOMAC-PF and LEFS scores was virtually identical for the postarthroplasty assessment occasions. Conclusions Our findings support previous investigations that showed the WOMAC-PF’s inability to provide a valid assessment in change in function. The GEE analysis coefficients can be used to convert LEFS scores to WOMAC-PF scores that adjust for the bias between pre- and post-TKA assessments.
Introduction
Pain and decreased lower extremity function, defined as the ability to move around [1], are two important determinants of knee arthroplasty in persons with osteoarthritis of the knee. Recognizing their importance, Outcome Measures in Rheumatoid Arthritis Clinical Trials III identified pain and function as two of four core outcomes to be assessed independently in phase III trials [2]. Further support for the independent assessment of pain and function is found in the Outcome Measures in Rheumatoid Arthritis Clinical Trials-OsteoArthritis Research Society International responder criteria that recommend moderate improvement in two of the following three categories: (1) pain, (2) function, and (3) patient's global rating [3].
The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) [4], both in its original form and embedded in the Knee injury and Osteoarthritis Outcome Score [5], is one of the most widely used patient-reported outcome measures applicable to patients with osteoarthritis of the knee undergoing arthroplasty. However, a number of studies have consistently demonstrated the WOMAC's inability to distinguish between pain and function [6-9], and in an evidence-based era, the continued prominence of this measure is puzzling.
There are several possible explanations for this practice. First, we suspect that conformity is an influential determinant of the WOMAC's continued notoriety. For example, the WOMAC is often included in joint registries, and this practice facilitates the comparison of patients' outcomes over time as interventions evolve. A second explanation is a lack of awareness of both the inability of the measure to distinguish between pain and function and the consequence of this deficiency. For example, studies supporting the validity of the WOMAC's ability to assess function and change in function [10-14] frequently, if not uniformly, do not cite studies that challenge the measure's ability to accurately assess function. Also, study designs supporting the WOMAC's ability to detect valid change in pain and function after arthroplasty typically select reassessment intervals (ie, >2 months after arthroplasty) where pain and function are expected to display similar change trajectories, thus masking the measure's ability to provide a unique and accurate assessment of each [10-14]. A consequence of not being able to distinguish pain and function is that a clinician may form an inaccurate impression of a patient's status, particularly as it applies to mobility. For example, Parent et al [8] reported a 20-point improvement in WOMAC physical function (WOMAC-PF) scores when measured 2 months after arthroplasty compared with a 39-meter decrease in 6-minute walk distance and similar gait speed and stair ascent times compared with preoperative values. Calatayud et al [15] reported slower Timed-up-and-go (TUG) and stair test times but a significant improvement (ie, ≥9 points) in WOMAC-PF scores 1 month after arthroplasty compared with preoperative values. Stratford et al [16] found similar WOMAC-PF scores at assessments before and 16 days after arthroplasty; however, the time to complete performance measures (ie, timed stair test, TUG test, and a self-paced walk test) more than doubled.
One design would be to take advantage of the natural or clinical history after total knee arthroplasty (TKA) [17]. For example, a number of studies have reported a significant increase in performance measure times and a reduction in pain when early postarthroplasty values are compared with preoperative values [16,18-21]. However, investigations of the WOMAC-PF and its embedded version in the Knee injury and Osteoarthritis Outcome Score (ADL scale) have shown an inability to detect significant deterioration in mobility over this period [15-17,21,22]. In contrast to the WOMAC-PF's limited ability to provide an accurate representation of lower extremity mobility, the Lower Extremity Functional Scale (LEFS) has been able to detect significant reductions in the ability of patients to move around in the early weeks after arthroplasty [16,17].
Our purposes were to contribute further information concerning the WOMAC-PF's ability to accurately assess lower extremity mobility in patients undergoing TKA, and to determine whether a relationship between pre- and post-TKA WOMAC-PF and LEFS scores could be established that takes into account the apparent bias WOMAC pain scores impose on the interpretation of WOMAC-PF scores. The intent of the latter goal was to assess the feasibility of converting LEFS scores to WOMAC-PF scores accounting for the limited ability of WOMAC-PF scores to accurately represent mobility distinct from pain.
Participants
Participants were those who took part in a randomized clinical trial which was approved by the Sunnybrook Health Sciences Centre (Toronto, Canada) research ethics board that examined the effect of perioperative gabapentin on patients undergoing TKA [23]. Patients were eligible for the trial if they were between the ages of 18 and 75 years; had an American Society of Anesthesiologists score of I, II, or III; and provided written informed consent. Patients were ineligible if they had a known allergy to medications being used, a history of drug or alcohol abuse, a history of being on chronic pain medications, rheumatoid arthritis, a psychiatric disorder, a history of diabetes with impaired renal function, a body mass index > 40, or were unable or unwilling to use a patient-controlled analgesia pump [23].
Study design
The original randomized clinical trial's purpose was "to examine whether, in the context of preoperative spinal anesthesia, femoral and sciatic nerve blocks, and celecoxib coadministration, a 4-day perioperative regimen of gabapentin vs placebo improves knee function on performance and self-reported measures of physical function, and movement evoked pain on postoperative day 4 and at 6 weeks and 3 months after surgery" [23]. Given there was no between-group difference in performance or self-reported outcomes, the present study viewed the entire sample as a single group that was assessed at four fixed occasions (before arthroplasty, at 4 days, at 6 weeks, and at 3 months). The TUG test was applied to obtain a performance-based assessment of change in the patients' abilities to move around before and 4 days after arthroplasty. Similarly, WOMAC-PF and LEFS change scores taken before and 4 days after arthroplasty were compared. The extent to which a consistent relationship existed between WOMAC-PF and LEFS scores was assessed by comparing the association between their scores at each of the four measurement occasions.
Timed-up-and-go
The TUG is an OsteoArthritis Research Society International-recommended performance test [24,25]. Patients start seated in a chair and are required to rise, walk 3 meters, turn, return to the chair, and sit down. The outcome is the time to perform the test in seconds. The minimal detectable within-patient change is approximately 2.5 seconds [26].
Western Ontario and McMaster Universities Osteoarthritis Index (LK3.1 version)
The WOMAC is a patient-reported measure conceived for persons with osteoarthritis of the lower extremity. It has subsequently been used frequently to assess patients before and after arthroplasty [1].
The pain subscale consists of 5 items, each scored 0 to 4. Total scores can vary from 0 to 20, with lower scores representing lower pain levels. The minimal detectable within-patient change is approximately 4 points [27]. The physical function subscale consists of 17 items, each scored 0 to 4. Total scores can vary from 0 to 68, with lower scores representing higher levels of functional status. The minimal detectable within-patient change is 9 points [28].
Lower Extremity Functional Scale
The LEFS is a patient-reported measure of lower extremity functional status [29]. It was designed to be applicable to persons with a spectrum of lower extremity problems and ability levels. Validation studies have supported its use in a variety of populations including patients with sports injuries, stroke, osteoarthritis, and hip or knee arthroplasty [28,[30][31][32][33][34][35]. The LEFS consists of 20 items with each scored 0 to 4. Total scores can vary from 0 to 80, with higher scores representing higher levels of functional status. The minimal detectable within-patient change is approximately 9 points [29].
Statistical analysis
We performed a secondary analysis of data obtained from a clinical trial reported previously [23]. Our sample size was one of convenience and dictated by the clinical trial. Descriptive statistics were summarized as frequency counts for categorical data, mean and standard deviation, or median and first and third quartiles if the data were skewed. To address our first purpose, we applied a paired t-test or Wilcoxon's Signed Rank test to test for a difference between prearthroplasty and 4-day postarthroplasty measurements. We applied a generalized estimating equation (GEE) analysis for longitudinal data to address our second purpose which examined the relationship between WOMAC-PF and LEFS scores across occasions. The dependent variable was WOMAC-PF scores, and the independent variables were LEFS scores and occasions at 4 levels (before arthroplasty, after arthroplasty at 4 days, at 6 weeks, and at 3 months). Our model-building approach was as follows: (1) establish the relationship between WOMAC-PF and LEFS scores (eg, linear or polynomial); (2) test for a LEFS-by-occasion interaction; and (3) identify the most appropriate covariance structure. After the final model was established, we evaluated the stability of the relationship by testing whether the occasion-specific regression lines were coincident, parallel but not coincident, or not parallel. Analyses were performed in STATA v15.1 (StataCorp, College Station, TX), and an effect was considered statistically significant at P < .05 (95% confidence interval excluded zero).
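To make the model-building steps concrete, the sketch below shows how such a GEE could be specified. The original analysis was run in Stata; this is a hypothetical Python/statsmodels equivalent on synthetic data, and the column names (womac_pf, lefs, occasion, patient_id) are placeholders, not from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per patient per measurement occasion,
# simulated roughly around the relationship reported in this study.
rng = np.random.default_rng(0)
rows = []
for pid in range(176):
    for occ in ("pre", "day4", "week6", "month3"):
        lefs = rng.uniform(10, 70)
        offset = 0.0 if occ == "pre" else 6.8
        womac_pf = 51.1 - 0.6 * lefs - offset + rng.normal(0, 5)
        rows.append((pid, occ, lefs, womac_pf))
df = pd.DataFrame(rows, columns=["patient_id", "occasion", "lefs", "womac_pf"])

# Step 2: test for a LEFS-by-occasion interaction (parallel vs non-parallel lines)
interaction = smf.gee("womac_pf ~ lefs * C(occasion)", groups="patient_id",
                      data=df, cov_struct=sm.cov_struct.Exchangeable()).fit()

# Steps 1 and 3: main-effects model with a linear LEFS term and an
# exchangeable working covariance structure, as in the reported analysis
main = smf.gee("womac_pf ~ lefs + C(occasion)", groups="patient_id",
               data=df, cov_struct=sm.cov_struct.Exchangeable()).fit()
print(main.summary())
```

The exchangeable structure assumes a common within-patient correlation across all pairs of occasions, which is one reasonable default for repeated measures of this kind.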
Participants
The sample size for this study consisted of 176 patients with an equal distribution of males and females. The sample's mean age (standard deviation) and body mass index were 62.9 (6.8) years and 31.7 (5.4) kg/m2, respectively. Further details concerning the sample can be found in the previously reported clinical trial [23]. Table 1 provides a summary of the measures' scores at each of the four occasions. A comparison of prearthroplasty and 4-day postarthroplasty scores revealed a significant increase in TUG times (mean difference d = 29.0 s; 95% CI: 24.2 to 33.9; P < .001) and decrease in LEFS scores (d = 12.3; 95% CI: 8.6 to 16.0; P < .001), both representative of a reduction in lower extremity functional status. A comparison of WOMAC pain scores showed a decrease in pain (d = 1.5; 95% CI: 0.8 to 2.1; P < .001) and no appreciable change in WOMAC-PF scores (d = 0.7; 95% CI: −1.9 to 3.3; P = .61). Given the sample sizes were different for the LEFS and WOMAC-PF (Table 1), we recalculated the change estimates for the identical sample and obtained a similar result (WOMAC-PF: d = 1.1; 95% CI: −2.1 to 4.3; P = .50; LEFS: d = 11.9; 95% CI: 7.8 to 16.0; P < .001). Table 2 summarizes the GEE results that applied an exchangeable covariance structure. This analysis revealed a linear relationship between WOMAC-PF and LEFS scores. There was no evidence of a LEFS-by-occasion interaction (P = .76); all regression lines were parallel. However, a significant difference was noted among occasions (P < .001), and a contrast analysis showed that all postarthroplasty occasion coefficients differed from the prearthroplasty coefficient by approximately 7 WOMAC-PF points. A further contrast analysis revealed that the three postarthroplasty coefficients did not differ (P = .90); these three regression lines were coincident. Given there was no difference among the three postarthroplasty regression coefficients, we dichotomized the occasion variable to prearthroplasty and postarthroplasty and reran the GEE analysis as described previously. These results are also reported in Table 2 and used to generate Figure 1. In addition to showing the relationship between prearthroplasty and postarthroplasty WOMAC-PF and LEFS scores, the figure also includes the 95% confidence bands (shaded area around each line). The confidence bands convey the likely location of the WOMAC-PF population's mean score for a given LEFS score.
Discussion
A test or measure is useful to the extent that it allows valid inferences to be drawn from its measured values. The purpose of the current manuscript was to contribute information concerning the WOMAC-PF's ability to accurately assess lower extremity mobility as defined by the WOMAC-PF and to determine whether a relationship between pre- and post-TKA WOMAC-PF and LEFS scores could be established that takes into account the apparent bias WOMAC pain scores impose on the interpretation of WOMAC-PF scores. When assessed 4 days after arthroplasty, we found that WOMAC-PF scores did not detect deterioration in lower extremity mobility compared with prearthroplasty scores. In contrast, we found strong evidence of decreased mobility based on a marked increase in TUG times and a significant reduction in LEFS scores. To be clear, we were not interested in what happens specifically between pre- and 4-day post-TKA, but rather applied these time points to expose a deficiency which cannot be disentangled when pain and function have similar change trajectories. With respect to our second purpose, we found a linear relationship between WOMAC-PF and LEFS, with a postarthroplasty WOMAC-PF score being approximately 7 points less than the prearthroplasty score for a given LEFS value. The relationship between WOMAC-PF and LEFS scores was virtually identical for the three postarthroplasty assessment occasions.

Table 1. Occasion-specific summary statistics: mean (SD) and n for each measure preoperatively and at day 4, 6 weeks, and 3 months.
Our finding that the WOMAC-PF was not capable of detecting deterioration in function in the early days after arthroplasty is consistent with the results of previous investigations [15,16,21,22]. It is likely that the compromised ability of the WOMAC-PF to accurately represent a change in mobility is not a function of its items alone but rather of the structure of the measure and the similarity of pain and function item content. Approximately one-half of the WOMAC-PF items address activities similar to those included in the WOMAC pain subscale, which patients encounter first when completing the measure. Given the similarity of item phrasing and content, we suspect that responses to pain items bias responses to similar function items. Support for this premise is found in a previous study that examined the ability of two subsets of WOMAC-PF items to address change using a study design similar to that of the current investigation. Patients were assessed before arthroplasty and within 16 days after arthroplasty [36]. WOMAC-PF items were divided into 8 items with content similar to that of pain items (ie, descending and ascending stairs, rising from sitting, standing, walking on a flat surface, rising from bed, lying in bed) and 8 items dissimilar to pain items' content (bending to the floor, getting in or out of a car, going shopping, putting on your socks or shoes, getting in or out of the bath, getting on or off the toilet, performing heavy domestic duties, performing light domestic duties). Three performance measures (TUG, stair test, self-paced walk) were also applied. Consistent with the results from the performance measures, the sum of the dissimilar 8 items demonstrated a significant decrease in function after arthroplasty (ie, higher WOMAC scores), whereas the sum of the similar 8 items showed a modest improvement in function [36].
The longitudinal analysis revealed similar slope coefficients for all four measurement occasions, suggesting a stable relationship between WOMAC-PF and LEFS scores. However, a systematic difference of approximately 7 WOMAC-PF points was noted between prearthroplasty and postarthroplasty values for a given LEFS score. The interpretation of the occasion-specific regression coefficients reported in Table 2 (ie, −7.14, −6.62, −6.86) is that the bias identified between prearthroplasty and 4-day postarthroplasty values is, likely owing to the influence of pain, the same at 6 weeks and 3 months. Thus, although it was not possible to disentangle a change in pain from a change in function at 6 weeks and 3 months because both are expected to improve, the extent to which a bias exists was consistent with the 4-day assessment. We are unaware of previous investigations that have modeled the relationship between WOMAC-PF and LEFS scores in a context similar to our study. The consistent slope coefficient and bias between prearthroplasty and postarthroplasty WOMAC-PF and LEFS scores allow for a simple conversion of LEFS scores to WOMAC-PF scores (ie, WOMAC-PF = 51.1 − 0.6 × LEFS − 6.8 × [1 if after arthroplasty]). Also, given the narrow width of the confidence bands, our results suggest the conversion of LEFS scores to mean WOMAC-PF scores can be done with a high level of confidence.
The following vignette is offered to illustrate how information from Table 2 and the figure can be applied. A future investigator conducts a longitudinal study of patients undergoing TKA and applies the LEFS as the only patient-reported outcome measure of function. LEFS mean scores preoperatively, at 6 weeks, and at 3 months were 30, 35, and 50, respectively. These values convert to WOMAC-PF scores of 33, 23, and 14, respectively, which can be compared to mean values reported in historical publications and joint registries.
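To make the arithmetic in this vignette explicit, the short sketch below (not part of the original study) applies the regression equation given above; the function name and rounding choice are illustrative assumptions:

```python
def lefs_to_womac_pf(lefs: float, post_arthroplasty: bool) -> float:
    """Estimate a mean WOMAC-PF score from a LEFS score using the GEE
    coefficients reported above: intercept 51.1, LEFS slope -0.6, and a
    -6.8 point offset for postarthroplasty assessments."""
    return 51.1 - 0.6 * lefs - (6.8 if post_arthroplasty else 0.0)

# Vignette: preoperative LEFS 30; 6-week LEFS 35; 3-month LEFS 50
print(round(lefs_to_womac_pf(30, False)))  # 33
print(round(lefs_to_womac_pf(35, True)))   # 23
print(round(lefs_to_womac_pf(50, True)))   # 14
```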
A limitation of our study is that it represents a secondary analysis of data obtained from a clinical trial. As such, LEFS measurements were not mandated on day 4, and this resulted in an imbalance in measure-specific sample sizes. With respect to the study component that examined the measures' abilities to detect deterioration between prearthroplasty and 4-day postarthroplasty, we supplemented our analysis with one that contained identical samples (n = 58). Our results showed that the WOMAC-PF was unable to detect deterioration and that the upper 95% confidence limit on the change estimate did not include the value of a clinically important change which has been estimated to be 9 points [28]. In contrast, the LEFS was able to detect a deterioration in function, and the point estimate of change exceeded the clinically important change value of 9 points (lower 95% confidence limit was approximately 8) [29]. The interpretation of these confidence limits is that despite the sample size, there is strong evidence to suggest that the WOMAC-PF cannot detect an important deterioration in function when WOMAC pain improves; however, the LEFS can detect worsening. The smaller LEFS sample size at day 4 affected the longitudinal data analysis such that a wider slope coefficient confidence interval was noted for this occasion than the other postarthroplasty measurement occasions (Table 2). Also, our results do not discern whether the prearthroplasty WOMAC-PF score is inflated owing to pain or the postarthroplasty score "optimistically" improved because of a reduction in pain.
Conclusions
In summary, our results relating to the WOMAC-PF's inability to provide a distinct assessment of a patient's ability to move around are consistent with previous reports, and this deficiency may lead to invalid inferences being drawn from a score or change score. Our findings also showed a consistent relationship (ie, slope coefficient) between WOMAC-PF and LEFS scores that was biased by approximately 7 WOMAC-PF points (ie, occasion coefficient) after arthroplasty. This relationship allows a transformation of LEFS scores to WOMAC-PF scores that accounts for the bias between pre- and post-TKA assessments. Given this is the first study to investigate the relationship between WOMAC-PF and LEFS scores before and after arthroplasty, further investigations are necessary to support our findings. Finally, it is essential to stress that measurement properties are context specific. Accordingly, future investigations are required to determine whether our findings are generalizable to other surgeries such as hip replacement.
"year": 2018,
"sha1": "b956c361dddfb89779997bd7db144d53f87833e3",
"oa_license": "CCBYNCND",
"oa_url": "http://www.arthroplastytoday.org/article/S2352344118301067/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b956c361dddfb89779997bd7db144d53f87833e3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
α-Synuclein Trafficking in Parkinson's Disease: Insights From Fly and Mouse Models
Protein aggregation and accumulation are common pathological hallmarks in neurodegenerative diseases. Efficiently clearing and eliminating such aggregates becomes an important cellular strategy for cell survival. Lewy body inclusions and the aggregation of α-Synuclein (α-Syn) during the pathogenesis of Parkinson's disease (PD) serve as a good example and are potentially linked to other pathological PD features such as progressive dopaminergic neuron cell death, behavioral defects, and nonmotor symptoms like anosmia, cognitive impairment, and depression. Years of research have revealed a variety of mechanisms underlying α-Syn aggregation, clearance, and spread. Particularly, vesicular routes associated with the trafficking of α-Syn, leading to its aggregation and accumulation, have been shown to play vital roles in PD pathogenesis. How α-Syn proteins propagate among cells in a prion-like manner, either from or to neurons and glia, via means of uptake or secretion, are questions under active investigation and have been of central interest in the field of PD study. This review covers components and pathways of possible vesicular routes involved in α-Syn trafficking. Events including but not limited to exocytosis and endocytosis will be discussed within the context of an overall cellular trafficking theme. Recent advances on α-Syn trafficking mechanisms and their significance in mediating PD pathogenesis will be thoroughly reviewed, ending with a discussion on the advantages and limitations of different animal PD models.
Introduction
The history of Parkinson's disease (PD) began with the very first description, which appeared in a monograph titled An Essay on the Shaking Palsy around 200 years ago, by the English surgeon James Parkinson (Parkinson, 2002). At the time, it comprised merely clinical observations of patients described as exhibiting involuntary tremulous motion and lessened muscular power, with shaking of the limbs in particular. Over the past decades, efforts bridging translational and basic research have greatly expanded our knowledge of the cause, development, and overall course of this neurological disorder. These discoveries elaborate on the initial clinical observations and give us a better understanding of the neuropathological underpinnings of the disease. Even so, a complete picture of the disease progression has not emerged, and a probable cure remains to be found.
As the most common movement disorder in the world, PD is clinically characterized by symptoms such as resting tremor, bradykinesia, and postural instability, accompanied by nonmotor symptoms like cognitive impairment and autonomic dysfunction. In the brain, a series of neuropathological changes appears throughout the course of PD development, ultimately leading to the diagnostic hallmarks: the aggregation of intracellular inclusions named Lewy bodies (LBs) and Lewy neurites (LNs), and the loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc). This progressive brain pathology can be staged by LB appearance in different regions of the brain (Braak et al., 2003; Braak et al., 2006), with initial detection of LBs in the periphery such as the dorsal motor nucleus of the glossopharyngeal and vagal nerves or the olfactory bulb (Lewy, 1912), followed by their appearance in SNpc DA neurons at mid-stage, and finally in other parts of the brain. Interestingly, this topographic spread of LB pathology correlates with the progression of PD clinical symptoms (Braak et al., 2006), raising interest in searching for the spread pattern of LB pathology and correlative relationships leading to PD progression.
Whereas most PD cases are sporadic, studies on rare familial cases identified possible genetic causes for PD. The central component of LBs and LNs, a-Synuclein (a-Syn), is the gene product of SNCA (PARK1), the first PARK gene identified in studies of rare familial PD cases. a-Syn contains an amphipathic N-terminal region, followed by a non-Aβ component (NAC) of AD plaques domain responsible for aggregation, and a C-terminal acidic tail (Figure 1). Physiologically, a-Syn is a 140 amino acid protein enriched in presynaptic nerve terminals (Iwai et al., 1995). As a vesicular protein, a-Syn was shown to directly interact with VAMP2 and promote SNARE complex assembly (Burre et al., 2010) and to interact with CSPα (Chandra et al., 2005; Ninkina et al., 2012). Significant reductions in excitatory synapse size and synaptic function were detected in mice lacking all three synuclein homologues (α, β, and γ) (Burre et al., 2010; Greten-Harrison et al., 2010). These findings first implicate a role for a-Syn in synaptic vesicle trafficking.

Figure 1. a-Syn protein structure. The a-Syn protein, comprising the N-terminal amphipathic region (yellow), the NAC domain (green), and the C-terminal acidic tail (red), is illustrated as a linear diagram (a) and a representative monomeric structure (b). The locations of the point mutations listed in Table 1 are labeled. NAC = non-Aβ component.
As the central component of LBs and LNs, the properties of a-Syn assembling into different forms, particularly fibrils and aggregates, have raised substantial interest in the PD research community. Physiologically, a-Syn forms helically folded tetramers that resist aggregation (Bartels et al., 2011; Wang et al., 2011). In pathological conditions, however, oligomeric a-Syn species and the fibrillary aggregates have been shown to be the toxic culprits for PD pathology (Cremades et al., 2012; Bengoa-Vergniory et al., 2017). Moreover, a-Syn assembles into distinct strains displaying differential seeding capacities and strain-specific pathological consequences (Peelaerts et al., 2015). On the other hand, a-Syn aggregation was also detected in LBs of sporadic PD patients (Spillantini et al., 1997, 1998b), suggesting that abnormal clearance and degradation of a-Syn might occur. These findings altogether reinforce a-Syn as a necessary component in LB pathology and PD, and both its intrinsic aggregation propensities and extrinsic regulation of its protein levels matter. Clues on the intrinsic assembling properties of a-Syn and the conversion of a-Syn from a normal to a toxic form (oligomers, fibrils, ribbons, etc.) that seeds a-Syn aggregation have uncovered similarity between a-Syn and the prion protein, leading to the proposed prion-like propagation mode for a-Syn spread in PD pathology. In this mode, converted toxic a-Syn has a higher tendency to aggregate and propagate as part of the LB components throughout different areas of the brain (Kordower et al., 2008; Li et al., 2008; Desplats et al., 2009; 2010; Hansen et al., 2011; Kordower et al., 2011; Angot et al., 2012; Luk et al., 2012; Mougenot et al., 2012; Masuda-Suzukake et al., 2013). At the cellular level, the idea of spread comes from two significant observations: (a) a-Syn has been detected extracellularly, suggesting that it is secreted by cells in a mobile way (Ueda et al., 1993; Jakes et al., 1994). Years later, the presence of a-Syn at nanomolar concentration in the cerebrospinal fluid and plasma of PD patients was described, further supporting the existence of extracellular a-Syn (Borghi et al., 2000; El-Agnaf et al., 2003). (b) a-Syn filamentous deposits in oligodendrocytes, the glial cytoplasmic inclusions, are prominent in the atypical Parkinsonism multiple system atrophy (Spillantini et al., 1998a, 1998b; Tu et al., 1998; Watts et al., 2013; Reyes et al., 2014; Prusiner et al., 2015). Due to the dominant expression of a-Syn in neurons, these glial a-Syn aggregates are considered exogenous and potentially delivered from nearby neurons. Thus, knowing how a-Syn propagates among different cells and its means of transfer is crucial for understanding the mechanisms underlying the spread of LB pathology and PD progression. Echoing the Braak model, a-Syn spread from neuron to neuron has been suggested to occur by trans-synaptic release mediated by Hsp70 and the cochaperone DnaJ (Danzer et al., 2011; Fontaine et al., 2016). The modes and mechanisms of transmission from neurons to glia or vice versa, however, are far less clear and await further investigation.
Despite sparse hints from the literature, the mechanisms of a-Syn intercellular propagation, the routes of secretion and uptake operated by different cell types, and their differential contributions to Parkinson-related pathology are all areas of interest and remain to be further explored.
This review provides a comprehensive overview on the vesicular mechanisms of a-Syn trafficking during LB pathology and PD progression. Insights from fly and mouse PD models will be covered, and specific cell-type mechanisms for a-Syn secretion and uptake will be discussed and compared. a-Syn trafficking, rather than its clearance and degradation, is emphasized as the latter has been a subject for a number of excellent reviews recently (Kinghorn et al., 2017;Pitcairn et al., 2018). Of note, impaired trafficking and defects in other trafficking components, in addition to a-Syn, also play significant roles in PD pathology.
Transgenic a-Syn overexpression models
Traditional approaches to modeling PD use toxins such as 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine to kill DA neurons and recapitulate PD symptomatic phenotypes, so that potential drug targets can be identified to ameliorate these symptoms. These toxin-based models, which have been used both in flies and mice, are usually acute and rapid, with the drawback that they do not model the molecular pathology of PD. Virus vector-based rat or primate models are also available to manipulate a-Syn expression. On the other hand, transgenic PD models built on our knowledge of potential genetic PD risk factors recapitulate the disease lesions and offer more strength for understanding the underlying molecular mechanisms of PD. Most of the mouse transgenic models overexpress human wild-type or mutant a-Syn to model different dosages of SNCA gene expression. Nonetheless, only a few of these rodent models recapitulate the cardinal features of PD. Lack of neurodegeneration, a-Syn aggregation, or locomotor deficits suggests that these mice do not perfectly model PD pathology, raising the concern that additional animal models might be needed to investigate the basic pathogenic mechanisms.
The diverse behaviors and sophisticated genetic tools have made Drosophila melanogaster a powerful in vivo system for examining PD pathology. Fly a-Syn transgenic models are mainly based on overexpression of human wild-type or mutant a-Syn owing to the lack of an endogenous fly a-Syn. Such a strategy implements elevated a-Syn levels in the system and mimics the increased SNCA gene expression in PD patients carrying a-Syn mutations or duplications/triplications, making the transgenic flies well suited for modeling PD. Recently, a number of reviews have summarized different types of PD animal models (Vanhauwaert and Verstreken, 2015; Hewitt and Whitworth, 2017; Koprich et al., 2017; Xiong and Yu, 2018). Here, we selectively discuss fly and mouse transgenic models based on a-Syn overexpression, with an attempt to lay the groundwork for subsequent discussion of a-Syn vesicular mechanisms in PD (Table 1).
Most of the transgenic mouse models were established based on expression of different a-Syn forms under a variety of promoters. Human wild-type a-Syn, pathogenic a-Syn A53T, and a-Syn A30P (ha-Syn, ha-Syn A53T, and ha-Syn A30P) are frequently used for overexpression, and in some cases mouse a-Syn (Chandra et al., 2005; Rieker et al., 2011) or C-terminal truncated a-Syn (Tofaris et al., 2006; Wakamatsu et al., 2008; Daher et al., 2009) is also used. Mice overexpressing ha-Syn under control of the human platelet-derived growth factor subunit B (PDGFB) promoter exhibit progressive development of a-Syn and ubiquitin inclusions starting as early as 3 months, with signs of DA neuron loss and locomotor defects (Masliah et al., 2000; Rockenstein et al., 2002; Amschl et al., 2013). On the other hand, mice overexpressing ha-Syn under the mouse thymus cell antigen 1 (Thy1) promoter show early and progressive sensorimotor anomalies and olfactory deficits (Fleming et al., 2004, 2008). Some degree of motor deficits and a-Syn aggregation was also detected in mice overexpressing ha-Syn A53T or ha-Syn A30P under the Thy1 promoter (Kahle et al., 2000; Neumann et al., 2002; Freichel et al., 2007; Rothman et al., 2013) or the gene encoding prion protein (prnp) promoter (Giasson et al., 2001; M. K. Lee et al., 2002; Gispert et al., 2003; Gomez-Isla et al., 2003; Yavich et al., 2005; Paumier et al., 2013; Bencsik et al., 2014). Interestingly, mice overexpressing ha-Syn under the tyrosine hydroxylase (Th) or SNCA promoter tend to lack a-Syn inclusions and other neurodegenerative features (Richfield et al., 2002; Tofaris et al., 2006; Wakamatsu et al., 2008; Kuo et al., 2010; Janezic et al., 2013). Moreover, a-Syn aggregation is sometimes absent in models that nonetheless show motor and degenerative deficits (Matsuoka et al., 2001; Gispert et al., 2003; Yavich et al., 2005), suggesting that a-Syn aggregation does not necessarily correlate with the occurrence of other PD symptomatic phenotypes as the Braak model proposed. Taken together, these observations suggest considerable variation in the phenotypes recapitulated by different mouse models, leading to a pressing need for establishing other animal models to help with further investigation. Studies in flies provide significant insights and advance our understanding of the basic pathogenic mechanism of PD. Fly a-Syn models were first established when ha-Syn and its related pathogenic forms were overexpressed in all or specific DA neurons (Feany and Bender, 2000). The authors first demonstrated that ha-Syn overexpression leads to proteinaceous inclusions, locomotor deficits, and DA neuron degeneration in fly adult brains in an age-dependent manner. Subsequently, a series of studies revealed the importance of the chaperone HSP70, proteasome-dependent degradation, and phosphorylation in regulating a-Syn toxicity and aggregation (Chen and Feany, 2005; Chen et al., 2009).
Taking advantage of the fly PD models, ha-Syn-overexpressing flies were also treated with pharmaceutical agents to search for potential therapeutic drugs that ameliorate PD symptoms (Pendleton et al., 2002; Auluck et al., 2005). Genetically, interacting factors in autophagy, lysosome, and ubiquitin-mediated degradation pathways were also identified to work cooperatively with ha-Syn during PD pathology (Davies et al., 2014; Miura et al., 2014; Alexopoulou et al., 2016; M'Angale and Staveley, 2016).
Fly and mouse models have distinct advantages and limitations in elucidating the mechanism of PD. Experimentally, flies grow faster and afford greater sample numbers, whereas mice are recognized for their longer life span, and the hassle of handling them has limited the potential to acquire more data in support of PD hypotheses. Anatomically, mice have higher similarity in brain structure and PD-affected regions to humans, whereas Drosophila adult brains have conserved DA neuron clusters that also undergo degeneration. Motor activity is much easier to assess in flies than mice, whereas cognitive symptoms associated with PD are harder to evaluate in flies and easier to assess in mice owing to knowledge of the circuits controlling these behaviors. Findings from flies and mice, however, do not favor one or the other in modeling PD. Interestingly, analyses of most mouse PD models do not recapitulate the cardinal features of PD, whereas transgenic flies modeling PD, despite their invertebrate origin, consistently show signs of some or all PD-associated features such as DA neurodegeneration, loss of motor activity and life span, and a-Syn aggregation. Some degree of consistency has been detected in observing DA neuron loss or motor deficits in these transgenic fly models. However, discrepancies arise over the strength of a-Syn-induced phenotypes. It is reported that different experimental approaches to mounting the fly brains might lead to different results on ha-Syn-mediated DA neuron loss; these findings are thoroughly discussed elsewhere (Pesah et al., 2005; Whitworth et al., 2006; Navarro et al., 2014). Complementary approaches utilizing these two animal models will probably strengthen our understanding of and yield greater insights into PD pathology. For example, to look for genetic modifiers of PD, one could easily take on a high-throughput screening approach using flies. Once a target gene is identified, its function can be tested in transgenic mouse models so that more physiologically relevant evidence is acquired.
a-Syn in trafficking: Interaction with Rab proteins
Accumulating evidence suggests that impairment of vesicular trafficking, such as the blockade caused by PD-associated mutations in the lysosomal genes ATP13A2 (Ramirez et al., 2006) or glucocerebrosidase (GBA) (Sidransky et al., 2009; Schondorf et al., 2014), is a possible cause of PD (Abeliovich and Gitler, 2016). Defects in these trafficking pathways often couple with a-Syn accumulation and aggregation, suggesting that these routes are major means of a-Syn transmission (Gitler et al., 2009; Mazzulli et al., 2011). To investigate further, genetic and biochemical screens have uncovered functional interactions between a-Syn and the central trafficking proteins, the small GTPase Rabs (Dalfo et al., 2004; Cooper et al., 2006), and a-Syn has also been shown to disrupt cellular Rab homeostasis (Gitler et al., 2008). These Rab proteins are relevant to a-Syn-induced trafficking defects and regulate a-Syn aggregation and toxicity (Gao et al., 2018). For instance, Rab1, Rab7, Rab8, and Rab11 have all been shown to ameliorate a-Syn toxicity in Drosophila (Cooper et al., 2006; Yin et al., 2014; Breda et al., 2015; Dinter et al., 2016). Rab1, a regulator of ER-to-Golgi trafficking, rescues a-Syn-mediated DA neuron loss and compromised autophagy (Cooper et al., 2006; Winslow et al., 2010), as well as Golgi fragmentation upon a-Syn overexpression in nigral DA neurons (Coune et al., 2011; Rendon et al., 2013). Rab11, the endosomal recycling factor, has been implicated in a-Syn release and secretion (Liu et al., 2009; Hasegawa et al., 2011). In addition to colocalizing with a-Syn in intracellular inclusions (Chutna et al., 2014), Rab11 modulates synaptic vesicle size, decreases a-Syn aggregation, and ameliorates several a-Syn-dependent phenotypes in Drosophila (Breda et al., 2015). Taken together, these findings suggest that a-Syn functions in vesicular trafficking through interactions with various Rab proteins and, at the same time, serves as a possible target for trafficking via these pathways (Figure 1(a)).
a-Syn in trafficking: Uptake mechanisms by neurons
a-Syn has been found to localize within endosomes and to aggregate preferentially in endocytic vesicles (H. J. Lee et al., 2005; Konno et al., 2012; Boassa et al., 2013), supporting the notion that a-Syn is trafficked in vesicles and that its aggregation is intimately associated with these trafficking pathways. Nonetheless, how a-Syn is trafficked to and from different subcellular compartments, or between cells, remains largely elusive. Whereas a-Syn monomers pass directly through the plasma membrane (H. J. Lee et al., 2008b), internalization of oligomeric and fibrillary a-Syn species depends on endocytosis, as treatment with canonical endocytosis inhibitors consistently reduces a-Syn uptake (Desplats et al., 2009; Hansen et al., 2011). A number of studies have shown that a-Syn is internalized by neurons and glial cells via clathrin-mediated endocytosis (CME) (H. J. Lee et al., 2008a, 2008b; Desplats et al., 2009; Konno et al., 2012). Factors involved in CME, such as Rab5, have also been described to mediate a-Syn internalization, leading to subsequent intracellular LB inclusions (Sung et al., 2001). Recently, auxilin, the fly homolog of the GWAS risk factor Cyclin-G-associated Kinase (GAK), was identified as a regulator of life span, locomotor activity, and progressive DA neuron death (Song et al., 2017). Reducing auxilin expression in the presence of ha-Syn overexpression causes premature and enhanced DA neuron loss, suggesting that GAK/auxilin, a clathrin-uncoating factor, might either directly regulate a-Syn endocytosis or participate in other ways in a-Syn trafficking. Furthermore, lymphocyte activating gene-3 (LAG-3), a leukocyte immunoglobulin protein expressed exclusively on the neuronal surface, was found to facilitate the entry of fibrillar a-Syn via CME (Mao et al., 2016). Taken together, these findings underscore the importance of CME in a-Syn uptake and provide insights into how a-Syn is trafficked to form LB inclusions in PD.
Although CME is currently considered the major route for a-Syn uptake in neurons and glia, inhibiting this process does not completely block a-Syn uptake (H. J. Lee et al., 2008b; Desplats et al., 2009; Konno et al., 2012), suggesting that alternative routes exist. Interestingly, neurons also utilize a specialized form of endocytosis, macropinocytosis, to take up a-Syn: actin ruffles formed during macropinocytosis mediate a-Syn entry, and this process requires heparan sulfate proteoglycans expressed on the cell surface (Holmes et al., 2013). Overall, these findings indicate that multiple mechanisms exist for a-Syn entry into neurons. Novel endocytic sites or receptors remain to be identified to allow a better understanding of the uptake mechanism, and these players serve as good candidates for therapeutic interventions to selectively block uptake during disease progression (Figure 1(a)).
a-Syn in trafficking: Uptake mechanism by glia

Similar to neurons, CME is also the major route for glia to take up a-Syn (H. J. Lee et al., 2008a; Kisos et al., 2012; Valdinocci et al., 2017). Glial a-Syn inclusions, such as the oligodendrocyte glial cytoplasmic inclusions in multiple system atrophy patients, together with the glial neuroinflammatory responses triggered by a-Syn (H. J. Lee et al., 2010; Hoffmann et al., 2016; Lim et al., 2016), underscore the significance of studying a-Syn trafficking into glial cells. Besides CME, different types of glial cells utilize distinct uptake routes. Human astrocytes, rather than degrading aggregated a-Syn in lysosomes, can transfer it to healthy astrocytes via tunneling nanotubes (TNTs) to dispose of pathological inclusions (Rostami et al., 2017). Astrocytic TNTs do so by utilizing F-actin-based thin protrusions to establish connections with neighboring cells for intercellular exchange and communication (Gousset et al., 2009; Agnati and Fuxe, 2014; Abounit et al., 2016). Furthermore, TNTs are not restricted to facilitating transport between cells of the same type, as they have been detected in both neurons and astrocytes (Rustom et al., 2004; Zhu et al., 2005; Sun et al., 2012), thereby serving as an alternative means of a-Syn transmission.
Microglia commonly utilize a specialized form of endocytosis, phagocytosis, to take up a-Syn (H. J. Lee et al., 2008a; Bliederhaeuser et al., 2016). Some of the receptors functioning in microglial phagocytosis and activation, such as the Toll-like receptors (TLRs) TLR2 and TLR4, have been implicated in a-Syn uptake and a-Syn-mediated activation (Fellner et al., 2013; Kim et al., 2013; Valdinocci et al., 2017). Moreover, microglia have been observed in vitro to take up a-Syn-containing exosomes released by oligodendrocytes via macropinocytosis (Fitzner et al., 2011). In addition to phagocytosis and macropinocytosis, other clathrin-independent routes, such as monosialoganglioside (GM1)-dependent lipid rafts, have also been shown to mediate microglial uptake of a-Syn. Reduced expression of DJ-1, another PD risk factor, lowers cell-surface lipid raft expression in microglia and impairs their ability to take up soluble a-Syn (Nash et al., 2017). It is worth mentioning that lipid rafts also mediate a-Syn localization at synapses (Fortin et al., 2004; Kubo et al., 2005), supporting the notion that lipid rafts participate in the transsynaptic release and neuronal propagation of a-Syn; thus, lipid rafts function in both neurons and glia to mediate a-Syn entry. Notably, little is known about microglia using TNTs to transfer or take up a-Syn, even though macrophages, the functional equivalent of microglia in the immune system, use TNTs to mediate phagocytic clearance of foreign materials (Onfelt et al., 2004, 2006; Figure 1(b)).
a-Syn in trafficking: Release mechanism
a-Syn has been detected extracellularly, indicating that it is released from cells. In addition to leakage upon cell death or apoptosis, other secretory mechanisms come into play, such as cargo release in the form of exosomes. Exosomes are 50 to 100 nm vesicles that facilitate intercellular exchange by transporting specific proteins or RNAs (Valadi et al., 2007; Gibbings et al., 2009). Interestingly, exosomes have been utilized as one means of cargo transfer between neurons and glia. Oligodendrocytes release neuroprotective exosomes to support neuronal metabolism (Fitzner et al., 2011; Fruhbeis et al., 2013a, 2013b), whereas astrocytes and neurons facilitate bidirectional transfer of mitochondria via exosomes to support neuronal homeostasis (Davis et al., 2014; Hayakawa et al., 2016). Moreover, microglia have been shown to release inflammatory factors in exosomes. Taken together, these findings indicate that exosomes serve as an important type of secretory organelle for intercellular communication.
Exosomes obtained from the plasma of PD patients display higher levels of a-Syn than those from control groups (Schneider and Simons, 2013; Shi et al., 2014). Not only do exosomal lipids enhance the aggregation propensity of a-Syn (Grey et al., 2015), but oligomeric a-Syn stored in exosomes is also released by neurons and preferentially taken up via endocytosis compared with a-Syn fibrils (Danzer et al., 2012). Furthermore, increased calcium levels and dysfunction of autophagy lead to increased release of exosomes containing a-Syn oligomers (Emmanouilidou et al., 2010; Alvarez-Erviti et al., 2011), suggesting that exosomal release of a-Syn is highly regulated. Most studies describe exosomal release of a-Syn from neurons, as neuronally released a-Syn-containing exosomes are frequent targets for glial uptake and serve as a more efficient means of transfer than free a-Syn (Chistiakov and Chistiakov, 2017). Limited evidence, however, is available for glial exosomal release of a-Syn.
In addition to exosomes, other unconventional ways for neurons to release a-Syn have been described. For instance, a portion of newly synthesized a-Syn is rapidly secreted from cells via endoplasmic reticulum (ER)/Golgi-independent exocytosis under both normal and stress-induced conditions; by this route, aggregated a-Syn that localizes within vesicles is secreted out of cells (H. J. Lee et al., 2005; Jang et al., 2010). Furthermore, tubulin polymerization-promoting protein (TPPP/p25a), a factor that modulates autophagosome-lysosome fusion, is involved in the unconventional secretion of a-Syn through exophagy; the same study also identified Rab27A, a late endosomal regulator of exocytosis, as a regulator of a-Syn secretion in the absence of TPPP/p25a (Ejlerskov et al., 2013). Finally, as discussed above, Rab11 regulates a-Syn resecretion through a recycling pathway that differs from classical ER/Golgi-dependent exocytosis (Liu et al., 2009; Hasegawa et al., 2011). In toto, these findings point to great diversity in the mechanisms of a-Syn release, most of which have been studied more extensively in neurons than in glia (Figure 2).
a-Syn trafficking and degradation
Accumulating evidence suggests that a-Syn trafficking is closely linked to its aggregation. As discussed earlier, a-Syn interacts with various Rab proteins and regulates their homeostasis. Altered expression of trafficking genes such as Rab8b, Rab11a, Rab13, and Slp5 also regulates a-Syn secretion and aggregation (Goncalves and Outeiro, 2017). These results suggest that secretion may be the step most closely tied to a-Syn aggregation, as secretion constitutes the key step for propagation. Thus, an agent targeting a-Syn trafficking, particularly its secretion, might have therapeutic potential by either facilitating or preventing its aggregation.
Studies in flies and mice converge on the conclusion that silencing these trafficking genes increases a-Syn aggregation, whereas their overexpression reduces it, thereby relieving the induced toxicity in cellular models (Liu et al., 2009; Hasegawa et al., 2011; Yin et al., 2014). These trafficking proteins have also been detected within a-Syn inclusions, suggesting that a-Syn is recruited together with the trafficking proteins to the inclusion site. Collectively, these findings suggest that a-Syn trafficking regulates its aggregation and that clearance of aggregates is required for toxicity to be ameliorated. Recently, an intriguing alternative perspective has emerged: facilitating protein aggregation might relieve toxicity by halting protein trafficking and confining the toxicity locally. It is possible that facilitating a-Syn aggregation creates a more concentrated milieu in which the degradation machineries (autophagy and proteasomal degradation) can proceed, increasing the efficiency of a-Syn degradation and clearance; in this scenario, a-Syn trafficking might be less favorable, and a block in the trafficking route might help to accumulate a-Syn aggregates locally. However, studies in flies and mice do not support a beneficial role for facilitating a-Syn aggregation: toxicity is ameliorated only when trafficking genes are overexpressed and aggregates are reduced, suggesting that an overall reduction in a-Syn aggregation is needed to lower the induced toxicity.
Another plausible explanation for how facilitating a-Syn aggregation might mitigate toxicity is that large cytoplasmic inclusions form at the expense of toxic oligomeric a-Syn species. Oligomeric a-Syn species have been shown to be the most toxic culprits in pathogenic spread and to enhance neurotoxicity in Drosophila (Karpinar et al., 2009). Moreover, histone deacetylase 6 (HDAC6) promotes the formation of large a-Syn inclusions by reducing a-Syn oligomers, and by this means suppresses a-Syn-induced DA neuron loss and locomotor dysfunction in flies (Du et al., 2010). Based on these findings, it is speculated that forming large cytoplasmic inclusions reduces a-Syn oligomers and thereby reduces toxicity. To this end, these results support the idea that facilitating a-Syn aggregation (i.e., forming large cytoplasmic inclusions) might help to relieve toxicity, as the toxic oligomeric a-Syn species are depleted in the process.
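The kinetic intuition behind this hypothesis can be illustrated with a simple toy model. The sketch below is a minimal mass-action simulation, not a model from any cited study; the species scheme and all rate constants are hypothetical illustration values. It shows that increasing the rate at which oligomers coalesce into inclusions lowers the peak level of the toxic oligomeric species:

```python
# Toy mass-action model of a-Syn interconversion: monomer (M) oligomerizes
# into oligomer (O), which coalesces into large inclusions (I) that are
# slowly cleared. All rate constants are hypothetical illustration values.

def simulate(k_incl, k_olig=0.05, k_deg=0.01, t_end=100.0, dt=0.01):
    """Euler-integrate M -> O -> I kinetics and report the peak oligomer
    level; k_incl is the inclusion-formation rate (the step HDAC6 is
    proposed to promote)."""
    m, o, i = 1.0, 0.0, 0.0  # arbitrary initial monomer pool
    peak_o = 0.0
    for _ in range(int(t_end / dt)):
        dm = -k_olig * m              # monomers consumed by oligomerization
        do = k_olig * m - k_incl * o  # oligomers formed, then sequestered
        di = k_incl * o - k_deg * i   # inclusions grow and are degraded
        m, o, i = m + dm * dt, o + do * dt, i + di * dt
        peak_o = max(peak_o, o)
    return peak_o

# Faster inclusion formation -> lower peak of the toxic oligomeric species.
for k_incl in (0.01, 0.1, 1.0):
    print(f"k_incl = {k_incl}: peak oligomer level = {simulate(k_incl):.3f}")
```

In this toy model, the monotonic drop in the oligomer peak as k_incl increases captures the proposed benefit of inclusion formation, although real a-Syn kinetics involve nucleation steps and feedback not modeled here.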
a-Syn equilibrium in trafficking
Previous studies have indicated that a-Syn propagates in all of its forms and via different routes. Monomeric a-Syn passes through the membrane directly, whereas oligomeric and fibrillary a-Syn enter and leave cells in a variety of ways, including CME, lipid raft-dependent macropinocytosis, exosomal release, and classical and nonclassical exocytosis. These diverse routes render a-Syn mobile and allow it to diverge into different paths, and different modulators along the way exert differential effects on trafficking. Interestingly, no direct evidence suggests that cells prefer to take up or release a specific form of a-Syn, despite the fact that the oligomeric and fibrillary forms are more pathogenic. The equilibrium among the different a-Syn forms is therefore likely to determine which of the available routes is used for uptake or release, rather than which form is trafficked. Nonetheless, factors that regulate this equilibrium are worth considering for their possible effects on a-Syn trafficking. For instance, a-Syn oligomerization within specific cellular compartments alters the distribution of functional monomeric a-Syn by sequestering the monomer into nonfunctional oligomeric forms (Colla et al., 2012), suggesting that the distribution of functional/nonfunctional and monomeric/oligomeric a-Syn might itself regulate trafficking. On the other hand, a-Syn trafficking may be modulated by structural changes caused by mutation: N-terminal mutations such as A30P and A53T might alter the secondary structure of a-Syn and affect its trafficking via increased shuttling between the cytoplasm and nucleus (Emamzadeh, 2016) or an altered rate of self-aggregation.

Figure 2. a-Syn trafficking mechanisms in PD. Oligomeric and/or fibrillary a-Syn aggregates are trafficked intracellularly in vesicles and interact with a number of Rab proteins for their function. (a) a-Syn trafficking in neurons: whereas a-Syn monomers pass directly through the plasma membrane, oligomeric or fibrillary a-Syn uptake is mediated by CME and heparan sulfate proteoglycan-dependent macropinocytosis. LAG-3 is implicated as a receptor for uptake, and genetic evidence from Drosophila suggests that the clathrin-uncoating factor GAK/auxilin is a potential mediator of a-Syn uptake. a-Syn is released from neurons via exosomes, a process regulated by intracellular calcium levels and autophagy; other unconventional routes of release include ER/Golgi-independent exocytosis, TPPP/p25a-dependent exophagy, and Rab11-mediated resecretion. (b) a-Syn trafficking in glia: uptake is potentially mediated by TLR-dependent phagocytosis and GM1-dependent lipid rafts in microglia, whereas TNTs are one of the major means of a-Syn transfer between astrocytes; a-Syn release via exosomes has been observed in oligodendrocytes. (c) An overview of a-Syn trafficking among neurons, astrocytes, microglia, and oligodendrocytes; routes discussed earlier that depend on TNTs, exosomes, or forms of endocytosis/exocytosis are designated, and other routes and mechanisms might also be involved. TNT = tunneling nanotube; TLR = Toll-like receptor; CME = clathrin-mediated endocytosis.
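To make the equilibrium argument above concrete, the following sketch is purely illustrative: the route weights are hypothetical placeholders, not measured rates, and the form distributions are invented for demonstration. It shows how shifting the monomer/oligomer/fibril distribution alone changes the relative use of the uptake routes discussed earlier, even when each form's route preferences stay fixed:

```python
# Illustrative sketch: how the distribution of a-Syn forms shifts the
# relative use of uptake routes, under the simplifying assumption that
# route choice depends on the form, not on any cellular preference.
# All weights below are hypothetical placeholders, not measured rates.

# Per-form relative accessibility of each uptake route (each row sums to 1).
ROUTE_WEIGHTS = {
    #  form:     (direct membrane, CME, macropinocytosis)
    "monomer":  (1.0, 0.0, 0.0),  # monomers pass the membrane directly
    "oligomer": (0.0, 0.7, 0.3),  # endocytosis-dependent (split assumed)
    "fibril":   (0.0, 0.6, 0.4),
}

def route_fluxes(distribution):
    """Combine a form distribution with per-form route weights to get the
    overall fraction of uptake flowing through each route."""
    totals = [0.0, 0.0, 0.0]
    for form, frac in distribution.items():
        for idx, weight in enumerate(ROUTE_WEIGHTS[form]):
            totals[idx] += frac * weight
    return dict(zip(("direct", "CME", "macropinocytosis"), totals))

# Sequestering monomers into oligomers (cf. Colla et al., 2012) shifts flux
# away from direct membrane passage toward the endocytic routes.
mostly_monomeric = {"monomer": 0.8, "oligomer": 0.15, "fibril": 0.05}
mostly_oligomeric = {"monomer": 0.3, "oligomer": 0.50, "fibril": 0.20}
print("mostly monomeric :", route_fluxes(mostly_monomeric))
print("mostly oligomeric:", route_fluxes(mostly_oligomeric))
```

Under these assumptions, no route "prefers" a particular a-Syn form per se; the shift in overall route usage follows entirely from the change in the equilibrium among forms, which is the point made in the text.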
Concluding Remarks
Therapeutic strategies for ameliorating the symptoms of PD have mainly focused on the elimination and clearance of a-Syn aggregates, a process closely linked to a-Syn trafficking. Understanding how a-Syn is trafficked, and consequently how its aggregation is controlled in time and space, is therefore an important issue to address. This review summarizes recent updates on the routes and mechanisms of a-Syn trafficking, discusses cell-type-specific mechanisms within these routes, and considers how contributions from fly and mouse models differ from or complement each other (Figure 2). These insights will help in discovering new means of a-Syn transfer and in refining known ones by identifying new players involved. Drug candidates designed to target components of these pathways will potentially help alleviate functional symptoms while, at the same time, advancing our understanding of a disease that has puzzled us for centuries.
Acknowledgments
Our sincere apologies to colleagues in the field whose work we were not able to mention because of space limitations. We would like to thank all members of the M.H. Laboratory for comments and suggestions on the manuscript.
Author Contributions
J. C., Q. L., and L. S. wrote the manuscript. J. C. designed the illustration in Figure 1. Q. L. created Table 1. All authors read and approved the manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.